The Traditional Forecasting Landscape
For decades, time series forecasting relied on statistical methods like ARIMA and exponential smoothing, later joined by tools such as Prophet. These approaches worked well for structured data with clear seasonal patterns, but they struggled with:
- Complex, multi-variate relationships: Traditional models often handle one or a few variables well but scale poorly
- Unstructured context: Email threads, news events, and sentiment data were difficult to incorporate
- Rare events: Black swan events and anomalies were hard to predict without explicit modeling
Enter LLMs: A Paradigm Shift
Large Language Models bring something new to forecasting: the ability to understand and reason over vast amounts of unstructured context. Here’s how they’re being applied:
1. Zero-Shot Forecasting
Recent research shows that LLMs can perform forecasting without any fine-tuning, simply by being prompted with historical data:
prompt = """
Given the following quarterly revenue data for a tech company:
Q1 2023: $1.2M
Q2 2023: $1.4M
Q3 2023: $1.6M
Q4 2023: $1.8M
Q1 2024: $2.0M
Q2 2024: $2.2M
Q3 2024: $2.5M
Predict Q4 2024 revenue. Consider trend, seasonality, and growth patterns.
"""
The model can reason about apparent patterns and produce a prediction; in some reported benchmarks, zero-shot LLM forecasts match or exceed traditional methods.
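As a sanity check on whatever the model returns, a simple least-squares trend line over the same seven quarters gives a comparable baseline (a pure-Python sketch; the figures are the ones from the prompt above):

```python
# Fit a least-squares linear trend to the quarterly revenue and extrapolate.
revenue = [1.2, 1.4, 1.6, 1.8, 2.0, 2.2, 2.5]  # $M, Q1 2023 through Q3 2024

n = len(revenue)
xs = range(n)
x_mean = sum(xs) / n
y_mean = sum(revenue) / n
slope = (sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, revenue))
         / sum((x - x_mean) ** 2 for x in xs))
intercept = y_mean - slope * x_mean

q4_2024 = intercept + slope * n  # extrapolate one quarter ahead
print(f"Trend baseline for Q4 2024: ${q4_2024:.2f}M")  # roughly $2.66M
```

If the LLM's answer is far from this kind of baseline, that is a prompt to investigate, not to trust either number blindly.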
2. Hybrid Approaches
More powerful than pure zero-shot are hybrid systems that combine:
- LLMs for reasoning: Understanding context, news, and qualitative factors
- Traditional models for numerical patterns: Handling trend extraction and seasonality
# Conceptual hybrid approach (arima_model and llm_reasoning are placeholders
# for a fitted statistical model and an LLM call, respectively)
def hybrid_forecast(data, context):
    # Traditional model supplies the numerical baseline
    baseline = arima_model.predict(data)
    # LLM reasons over unstructured context and returns a multiplicative adjustment
    adjustment_factor = llm_reasoning(context, baseline)
    return baseline * adjustment_factor
3. Time Series as Text
One fascinating approach treats time series as a “language” that LLMs can understand:
- Tokenize numerical sequences into “words”
- Train or fine-tune models on temporal patterns
- Generate forecasts by predicting next “tokens”
This bridges the gap between NLP and time series analysis.
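A minimal sketch of the tokenization step (the digit-spacing scheme here is one common choice from the research literature, not a fixed standard): each value is rendered at fixed precision and its digits separated so a tokenizer sees one token per digit.

```python
def series_to_text(values, decimals=1):
    """Render a numeric series as digit-separated text an LLM can consume."""
    encoded = []
    for v in values:
        s = f"{v:.{decimals}f}".replace(".", "")  # drop the decimal point
        encoded.append(" ".join(s))               # one "word" per digit
    return " , ".join(encoded)

def text_to_series(text, decimals=1):
    """Invert the encoding: parse digit groups back into floats."""
    return [int(tok.replace(" ", "")) / 10**decimals
            for tok in text.split(" , ")]

encoded = series_to_text([1.2, 1.4, 1.6])
print(encoded)                   # "1 2 , 1 4 , 1 6"
print(text_to_series(encoded))   # [1.2, 1.4, 1.6]
```

The model then generates the next digit tokens, and the same decoder turns them back into a forecast.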
Practical Applications
Financial Forecasting
Banks and hedge funds are using LLMs to:
- Analyze earnings calls and news for sentiment
- Predict market movements based on macro trends
- Assess risk from textual regulatory filings
Supply Chain & Demand Forecasting
Retailers such as Walmart and Amazon are exploring LLMs to:
- Incorporate promotional calendars and events
- Factor in weather patterns and local news
- Reason about geopolitical factors
Energy & Utilities
Grid operators leverage LLMs to:
- Predict demand based on weather forecasts
- Factor in economic indicators
- Plan for renewable energy integration
Challenges and Limitations
LLMs aren’t perfect forecasters. Key challenges include:
1. Numerical Precision
LLMs can struggle with exact numerical calculations. They excel at reasoning but may produce imprecise outputs.
Solution: Use LLMs for direction and context, then apply numerical refinement.
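One way to operationalize this (a sketch; the 10% band is an arbitrary illustrative choice): let the LLM propose a directional adjustment, but clamp it to a band around the statistical baseline so an imprecise number can't dominate the forecast.

```python
def refine_forecast(baseline, llm_adjustment, max_shift=0.10):
    """Clamp an LLM-proposed multiplicative adjustment to +/- max_shift."""
    clamped = max(1 - max_shift, min(1 + max_shift, llm_adjustment))
    return baseline * clamped

print(refine_forecast(100.0, 1.05))  # within the band: 5% uplift applied
print(refine_forecast(100.0, 1.40))  # clamped to the +10% bound
```

The LLM decides the direction and rough magnitude; the numerical guardrail decides how far the final figure can move.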
2. Hallucination Risk
Models may “see” patterns that don’t exist or fabricate historical data.
Solution: Always validate against ground truth and use retrieval-augmented generation (RAG).
3. Computational Cost
Running LLMs for forecasting is expensive compared to traditional methods.
Solution: Use smaller, fine-tuned models for specific forecasting tasks.
The Future: What to Expect
We’re moving toward a world where:
- Specialized forecasting models emerge, fine-tuned on financial, energy, or retail data
- Multimodal forecasting combines text, images, and numerical data
- Agentic systems can gather data, reason about it, and make decisions autonomously
- Human-LLM collaboration becomes the norm, with humans providing domain expertise
Getting Started
If you want to experiment with LLM-based forecasting:
- Start simple: Try zero-shot forecasting with GPT-4 or Claude on your data
- Build hybrids: Combine LLM insights with traditional models
- Fine-tune: For specific domains, fine-tuning often outperforms prompting
- Evaluate rigorously: Compare against baselines and measure improvement
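For that last point, even a few lines of scoring code keep the comparison honest (a sketch; the naive carry-forward forecast is the usual minimum baseline to beat, and the LLM predictions here are hypothetical placeholders):

```python
def mae(actual, predicted):
    """Mean absolute error."""
    return sum(abs(a - p) for a, p in zip(actual, predicted)) / len(actual)

actual = [2.0, 2.2, 2.5]       # held-out quarters
naive = [1.8, 2.0, 2.2]        # naive: carry forward the previous quarter
llm_preds = [2.1, 2.2, 2.4]    # hypothetical LLM forecasts

print(f"naive MAE: {mae(actual, naive):.3f}")
print(f"LLM   MAE: {mae(actual, llm_preds):.3f}")
```

If the LLM cannot beat the naive baseline on held-out data, the extra cost and complexity are not buying anything.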
Conclusion
LLMs represent a fundamental shift in how we approach forecasting. While traditional methods aren’t obsolete—they remain valuable for many use cases—LLMs enable us to incorporate context, reason about uncertainty, and handle complexity that was previously intractable.
The key is understanding when to use each approach and, increasingly, how to combine them effectively.
What’s your experience with LLM-based forecasting? Let’s connect on LinkedIn.