    A decoder-only foundation model for time-series forecasting – Google Research Blog


    Posted by Rajat Sen and Yichen Zhou, Google Research

    Time-series forecasting is ubiquitous in domains such as retail, finance, manufacturing, healthcare and natural sciences. In retail use cases, for example, it has been observed that improving demand forecasting accuracy can meaningfully reduce inventory costs and increase revenue. Deep learning (DL) models have emerged as a popular approach for forecasting rich, multivariate, time-series data because they have proven to perform well in a variety of settings (e.g., DL models dominated the M5 competition leaderboard).

    At the same time, there has been rapid progress in large foundation language models used for natural language processing (NLP) tasks, such as translation, retrieval-augmented generation, and code completion. These models are trained on massive amounts of textual data derived from a variety of sources like Common Crawl and open-source code, which allows them to identify patterns in languages. This makes them very powerful zero-shot tools; for instance, when paired with retrieval, they can answer questions about and summarize current events.

    Despite DL-based forecasters largely outperforming traditional methods, and despite progress in reducing training and inference costs, they face challenges: most DL architectures require long and involved training and validation cycles before a customer can test the model on a new time-series. A foundation model for time-series forecasting, in contrast, can provide decent out-of-the-box forecasts on unseen time-series data with no additional training, enabling users to focus on refining forecasts for the actual downstream task such as retail demand planning.

    To that end, in “A decoder-only foundation model for time-series forecasting”, we introduce TimesFM, a single forecasting model pre-trained on a large time-series corpus of 100 billion real-world time-points. Compared to the latest large language models (LLMs), TimesFM is much smaller (200M parameters), yet we show that even at such scales, its zero-shot performance on a variety of unseen datasets of different domains and temporal granularities comes close to the state-of-the-art supervised approaches trained explicitly on those datasets. Later this year we plan to make this model available for external customers in Google Cloud Vertex AI.

    A decoder-only foundation model for time-series forecasting

    LLMs are usually trained in a decoder-only fashion that involves three steps. First, text is broken down into subwords called tokens. Then, the tokens are fed into stacked causal transformer layers that produce an output corresponding to each input token (a token cannot attend to future tokens). Finally, the output corresponding to the i-th token summarizes all the information from previous tokens and predicts the (i+1)-th token. During inference, the LLM generates the output one token at a time. For example, when prompted with “What is the capital of France?”, it might generate the token “The”, then condition on “What is the capital of France? The” to generate the next token “capital”, and so on until it generates the complete answer: “The capital of France is Paris”.
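
    As a rough illustration of that decoding loop, here is a minimal Python sketch; the `next_token_logits` callable is a hypothetical stand-in for the stacked causal transformer layers, and greedy argmax decoding is assumed for simplicity:

    ```python
    from typing import Callable, List, Sequence

    def generate_greedy(
        next_token_logits: Callable[[Sequence[int]], List[float]],
        prompt_tokens: List[int],
        max_new_tokens: int,
        eos_id: int,
    ) -> List[int]:
        """Decoder-only generation: each step conditions on every token produced
        so far and appends the prediction for the next position."""
        tokens = list(prompt_tokens)
        for _ in range(max_new_tokens):
            logits = next_token_logits(tokens)  # causal model: sees only tokens up to the current position
            next_token = max(range(len(logits)), key=logits.__getitem__)  # greedy argmax
            tokens.append(next_token)
            if next_token == eos_id:
                break
        return tokens
    ```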

    A foundation model for time-series forecasting should adapt to variable context (what we observe) and horizon (what we ask the model to forecast) lengths, while having enough capacity to encode all patterns from a large pretraining dataset. Similar to LLMs, we use stacked transformer layers (self-attention and feedforward layers) as the main building blocks of the TimesFM model. In the context of time-series forecasting, we treat a patch (a group of contiguous time-points) as a token, an idea popularized by a recent long-horizon forecasting work. The task then is to forecast the (i+1)-th patch of time-points given the i-th output at the end of the stacked transformer layers.
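
    To make the patch-as-token idea concrete, here is a small sketch; the `to_patches` helper name and the patch length of 32 are illustrative choices, not part of the released model:

    ```python
    import numpy as np

    def to_patches(series: np.ndarray, patch_len: int) -> np.ndarray:
        """Group contiguous time-points into patches; each patch plays the role
        that a token plays in a language model."""
        usable = (len(series) // patch_len) * patch_len  # drop any incomplete tail
        return series[:usable].reshape(-1, patch_len)

    context = np.arange(512, dtype=np.float32)
    patches = to_patches(context, patch_len=32)  # shape (16, 32): 16 input "tokens"
    ```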

    However, there are several key differences from language models. First, we need a multilayer perceptron block with residual connections to convert a patch of time-series into a token that can be input to the transformer layers together with positional encodings (PE). For that, we use a residual block similar to our prior work in long-horizon forecasting. Second, at the other end, an output token from the stacked transformer can be used to predict a longer stretch of subsequent time-points than the input patch length, i.e., the output patch length can be larger than the input patch length.
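
    The sketch below illustrates the idea of such a residual input block; the layer sizes, weight names, and ReLU nonlinearity are assumptions for illustration, not the actual TimesFM implementation:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    patch_len, hidden_dim, model_dim = 32, 512, 256

    # Randomly initialized weights stand in for learned parameters.
    w_in = rng.normal(scale=0.02, size=(patch_len, hidden_dim))
    w_out = rng.normal(scale=0.02, size=(hidden_dim, model_dim))
    w_skip = rng.normal(scale=0.02, size=(patch_len, model_dim))

    def residual_block(patch: np.ndarray) -> np.ndarray:
        """MLP with a linear skip connection: maps a raw patch of time-points
        to a model-dimension embedding consumed by the transformer layers."""
        hidden = np.maximum(patch @ w_in, 0.0)  # ReLU hidden layer
        return hidden @ w_out + patch @ w_skip  # skip path bypasses the nonlinearity

    token_embedding = residual_block(np.ones(patch_len))  # one embedding per input patch
    # Positional encodings would be added to these embeddings before the transformer stack.
    ```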

    Consider a time-series of length 512 time-points being used to train a TimesFM model with input patch length 32 and output patch length 128. During training, the model is simultaneously trained to use the first 32 time-points to forecast the next 128 time-points, the first 64 time-points to forecast time-points 65 to 192, the first 96 time-points to forecast time-points 97 to 224, and so on. During inference, suppose the model is given a new time-series of length 256 and tasked with forecasting the next 256 time-points into the future. The model will first generate the future predictions for time-points 257 to 384, then condition on the initial 256-length input plus the generated output to generate time-points 385 to 512. On the other hand, if the output patch length were equal to the input patch length of 32, then for the same task we would have to go through eight generation steps instead of just the two above. This increases the chance of errors accumulating, and therefore, in practice, we see that a longer output patch length yields better performance for long-horizon forecasting.
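
    The arithmetic behind this example can be sketched as follows; `predict_next_patch` is a hypothetical stand-in for one forward pass of the model:

    ```python
    import math
    from typing import Callable, List, Sequence

    def num_decode_steps(horizon: int, output_patch_len: int) -> int:
        """Each autoregressive step emits output_patch_len new time-points."""
        return math.ceil(horizon / output_patch_len)

    def forecast(
        predict_next_patch: Callable[[Sequence[float]], List[float]],
        context: Sequence[float],
        horizon: int,
        output_patch_len: int,
    ) -> List[float]:
        """Roll the model forward: predict a block of time-points, append it to
        the running history, and repeat until the horizon is covered."""
        history = list(context)
        for _ in range(num_decode_steps(horizon, output_patch_len)):
            history.extend(predict_next_patch(history))
        return history[len(context): len(context) + horizon]

    num_decode_steps(256, output_patch_len=128)  # 2 steps, as in the example above
    num_decode_steps(256, output_patch_len=32)   # 8 steps if the output patch were only 32
    ```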

    TimesFM architecture.

    Pretraining data

    Just as LLMs get better with more tokens, TimesFM requires a large volume of reliable time-series data to learn and improve. We have spent a great deal of time creating and assessing our training datasets, and the following is what we have found works best:

    Synthetic data helps with the basics. Meaningful synthetic time-series data can be generated using statistical models or physical simulations (a minimal sketch follows after these two points). These basic temporal patterns can teach the model the grammar of time-series forecasting.

    Real-world data adds real-world flavor. We comb through available public time-series datasets and selectively put together a large corpus of 100 billion time-points. Among these datasets are Google Trends and Wikipedia Pageviews, which track what people are interested in and nicely mirror trends and patterns in many other real-world time series. This helps TimesFM understand the bigger picture and generalize better when provided with domain-specific contexts not seen during training.
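
    To illustrate the first point above, here is a minimal sketch of one way such synthetic series could be generated; the specific trend, seasonality, and AR(1) recipe is an assumption for illustration, not the actual pretraining pipeline:

    ```python
    import numpy as np

    def synthetic_series(n: int = 512, seed: int = 0) -> np.ndarray:
        """Toy recipe: linear trend + seasonal cycle + AR(1) noise. The real data
        generation is richer; this only shows the general flavor."""
        rng = np.random.default_rng(seed)
        t = np.arange(n)
        trend = 0.05 * t
        season = 2.0 * np.sin(2 * np.pi * t / 24)  # e.g., a daily cycle at hourly granularity
        noise = np.zeros(n)
        for i in range(1, n):
            noise[i] = 0.8 * noise[i - 1] + rng.normal(scale=0.5)  # AR(1) component
        return trend + season + noise
    ```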

    Zero-shot evaluation results

    We evaluate TimesFM zero-shot on data not seen during training using popular time-series benchmarks. We observe that TimesFM performs better than most statistical methods like ARIMA and ETS, and can match or outperform powerful DL models like DeepAR and PatchTST that have been explicitly trained on the target time-series.

    We used the Monash Forecasting Archive to evaluate TimesFM’s out-of-the-box performance. This archive contains tens of thousands of time-series from various domains like traffic, weather, and demand forecasting, covering frequencies ranging from a few minutes to yearly data. Following existing literature, we compare the mean absolute error (MAE), appropriately scaled so that it can be averaged across the datasets. We see that zero-shot (ZS) TimesFM is better than most supervised approaches, including recent deep learning models. We also compare TimesFM to GPT-3.5 for forecasting using a specific prompting technique proposed by llmtime(ZS). We demonstrate that TimesFM performs better than llmtime(ZS) despite being orders of magnitude smaller.
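
    The post does not spell out the exact scaling, but the idea is to normalize each dataset’s error before averaging. A minimal sketch, assuming normalization by the MAE of a naive reference forecast (an assumption for illustration, not necessarily the paper’s exact convention):

    ```python
    import numpy as np

    def scaled_mae(y_true: np.ndarray, y_pred: np.ndarray, y_naive: np.ndarray) -> float:
        """Model MAE divided by the MAE of a naive reference forecast, so that
        scores from datasets with very different scales can be averaged."""
        return float(np.mean(np.abs(y_true - y_pred)) / np.mean(np.abs(y_true - y_naive)))
    ```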

    Scaled MAE (lower is better) of TimesFM(ZS) against other supervised and zero-shot approaches on Monash datasets.

    Most of the Monash datasets are short or medium horizon, i.e., the prediction length is not too long. We also test TimesFM on popular benchmarks for long-horizon forecasting against a recent state-of-the-art baseline, PatchTST (and other long-horizon forecasting baselines). In the next figure, we plot the MAE on ETT datasets for the task of predicting 96 and 192 time-points into the future. The metric has been calculated on the last test window of each dataset (as done by the llmtime paper). We see that TimesFM not only surpasses the performance of llmtime(ZS) but also matches that of the supervised PatchTST model explicitly trained on the respective datasets.

    Last window MAE (lower is better) of TimesFM(ZS) against llmtime(ZS) and long-horizon forecasting baselines on ETT datasets.

    Conclusion

    We train a decoder-only foundation model for time-series forecasting using a large pretraining corpus of 100B real-world time-points, the majority of which is search interest time-series data derived from Google Trends and pageviews from Wikipedia. We show that even a relatively small 200M parameter pretrained model using our TimesFM architecture displays impressive zero-shot performance on a variety of public benchmarks from different domains and granularities.

    Acknowledgements

    This work is the result of a collaboration between several individuals across Google Research and Google Cloud, including (in alphabetical order): Abhimanyu Das, Weihao Kong, Andrew Leach, Mike Lawrence, Alex Martin, Rajat Sen, Yang Yang and Yichen Zhou.
