TemporalBench
Overview
TemporalBench is a multi-domain benchmark for evaluating the temporal understanding and reasoning capabilities of large language models (LLMs) and agent-based systems on real-world numerical time-series.
Unlike traditional benchmarks that focus primarily on forecasting accuracy, TemporalBench is designed to diagnose how models interpret temporal structure, ground temporal patterns in context, and reason about future behavior under explicit events. To this end, the benchmark decomposes temporal intelligence into four complementary task families (T1–T4), each targeting a distinct temporal competency.
The benchmark spans four real-world domains—retail, healthcare, energy, and physical systems—and supports both multiple-choice reasoning tasks and numerical forecasting objectives.
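Both objective types reduce to simple scoring rules. The sketch below is illustrative only; the function names, and the use of mean absolute error for the forecasting objective, are assumptions rather than the benchmark's official metrics.

```python
from typing import Sequence

def mcq_accuracy(predictions: Sequence[str], answers: Sequence[str]) -> float:
    """Fraction of multiple-choice questions answered correctly."""
    return sum(p == a for p, a in zip(predictions, answers)) / len(answers)

def forecast_mae(forecast: Sequence[float], actual: Sequence[float]) -> float:
    """Mean absolute error between a numerical forecast and the realized values."""
    return sum(abs(f - y) for f, y in zip(forecast, actual)) / len(actual)

# Toy example: three multiple-choice answers and a four-step forecast.
print(mcq_accuracy(["B", "C", "A"], ["B", "C", "D"]))                   # 0.666...
print(forecast_mae([10.2, 11.0, 9.8, 10.5], [10.0, 11.4, 9.5, 10.1]))  # 0.325
```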
Task Design
TemporalBench organizes evaluation tasks into four task families:
T1 – Historical Time-Series Understanding
Interpretation of intrinsic temporal properties such as trends, volatility, seasonality, and anomalies.
T2 – Context-Free Future Prediction
Prediction of future behavior based solely on historical temporal signals, using numerical forecasts and qualitative judgments.
T3 – Contextual Temporal Reasoning
Reasoning over historical time-series grounded in domain-specific textual context.
T4 – Event-Informed Prediction
Conditional and counterfactual reasoning about how future temporal behavior changes under explicitly specified events.
Each task family isolates a distinct temporal competency rather than forming an increasing-difficulty hierarchy.
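To make the task families concrete, the following shows what an event-informed (T4) instance could look like as a Python record. All field names and values here are hypothetical illustrations; the released files define the actual schema.

```python
# Hypothetical task instance for the T4 (event-informed prediction) family.
# Field names and values are illustrative, not the benchmark's actual schema.
t4_instance = {
    "task_family": "T4",
    "domain": "retail",
    "history": [132.0, 128.5, 140.2, 151.9, 149.3, 160.8, 158.4],  # daily sales
    "event": "A two-day storewide promotion starts on the next day.",
    "question": (
        "Given the promotion, how will sales over the next two days "
        "most likely compare to the recent average?"
    ),
    "choices": {
        "A": "Substantially higher than the recent average",
        "B": "Roughly unchanged",
        "C": "Substantially lower than the recent average",
        "D": "Cannot be determined from the information given",
    },
    "answer": "A",  # derived automatically from the held-out future segment
}
```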
Data Sources and Scope
TemporalBench is derived from existing real-world time-series datasets across four domains:
- FreshRetailNet-50K (Retail): https://huggingface.co/datasets/Dingdong-Inc/FreshRetailNet-50K
- MIMIC-IV (Healthcare): https://physionet.org/content/mimiciv/3.1/
- PSML (Energy): https://zenodo.org/records/5663995
- Causal Chambers (Physical Systems): https://github.com/juangamella/causal-chamber
This dataset does not redistribute any raw data from the above sources.
Only derived task instances, annotations, prompts, and evaluation metadata are released.
In particular, no raw MIMIC-IV data, patient records, or identifiers are included.
Users must obtain access to the original datasets independently and comply with their respective licenses and data use agreements.
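In practice, using the benchmark therefore involves two steps: obtaining the raw source data under its own license, then aligning it with the released task metadata. The sketch below illustrates that pattern only; the file paths, file formats, and field names are placeholders, not the benchmark's actual layout.

```python
import json

# Hypothetical layout: raw series obtained separately from the original
# source, task metadata released with TemporalBench. All paths and field
# names below are placeholders.
with open("raw/psml/series.json") as f:        # user-downloaded raw data
    raw_series = {s["series_id"]: s["values"] for s in json.load(f)}

with open("temporalbench/tasks.jsonl") as f:   # released derived metadata
    tasks = [json.loads(line) for line in f]

# Materialize each task by slicing the referenced raw series.
for task in tasks:
    values = raw_series[task["series_id"]]
    task["history"] = values[task["start"]:task["split"]]
```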
Annotations and Ground Truth
All ground truth labels are generated automatically using unified, rule-based procedures operating on historical and future time-series segments.
Key properties of the annotation process include:
- No manual annotation
- No model-in-the-loop labeling
- Ground truth computation independent of contextual descriptions and event narratives
- Explicit handling of uncertainty when signals are weak or ambiguous
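As a minimal sketch of what one such rule-based procedure could look like, consider a trend labeler that falls back to an explicit "ambiguous" label when the signal is weak. The threshold value and the half-series heuristic are assumptions for illustration, not the benchmark's actual rules.

```python
def label_trend(history: list[float], rel_threshold: float = 0.05) -> str:
    """Rule-based trend label with an explicit fallback for weak signals.

    Compares the mean of the first half of the series to the mean of the
    second half; if the relative change is below the threshold, the signal
    is treated as too weak to call. Threshold and heuristic are illustrative.
    """
    mid = len(history) // 2                        # assumes len(history) >= 2
    first = sum(history[:mid]) / mid
    second = sum(history[mid:]) / (len(history) - mid)
    change = (second - first) / max(abs(first), 1e-9)
    if change > rel_threshold:
        return "increasing"
    if change < -rel_threshold:
        return "decreasing"
    return "ambiguous"  # explicit uncertainty when the signal is weak

print(label_trend([10, 11, 10, 12, 14, 15, 16, 15]))  # "increasing"
```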
Intended Use
TemporalBench is intended for:
- Benchmarking LLMs and agent frameworks on time-series understanding and reasoning
- Diagnostic evaluation of contextual and event-aware temporal reasoning
- Comparative analysis of agent designs beyond numerical forecasting accuracy
License
This dataset is released under the Apache License 2.0.