arxiv:2512.15699

FrontierCS: Evolving Challenges for Evolving Intelligence

Published on Dec 17 · Submitted by Wenhao Chai on Dec 18
AI-generated summary

FrontierCS is a benchmark for evaluating models on open-ended computer science problems with unknown optimal solutions, where tasks involve implementing executable programs.

Abstract

We introduce FrontierCS, a benchmark of 156 open-ended problems across diverse areas of computer science, designed and reviewed by experts, including CS PhDs and top-tier competitive programming participants and problem setters. Unlike existing benchmarks that focus on tasks with known optimal solutions, FrontierCS targets problems where the optimal solution is unknown, but the quality of a solution can be objectively evaluated. Models solve these tasks by implementing executable programs rather than outputting a direct answer. FrontierCS includes algorithmic problems, which are often NP-hard variants of competitive programming problems with objective partial scoring, and research problems with the same property. For each problem we provide an expert reference solution and an automatic evaluator. Combining open-ended design, measurable progress, and expert curation, FrontierCS provides a benchmark at the frontier of computer-science difficulty. Empirically, we find that frontier reasoning models still lag far behind human experts on both the algorithmic and research tracks, that increasing reasoning budgets alone does not close this gap, and that models often over-optimize for generating merely workable code instead of discovering high-quality algorithms and system designs.
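To make the "executable program plus automatic evaluator" setup concrete, here is a minimal, hypothetical sketch of objective partial scoring on an open-ended optimization task (a TSP-like instance). This is not the FrontierCS harness: the I/O convention, the `evaluate` function, and the reference-normalized scoring formula are all assumptions made for illustration.

```python
# Hypothetical evaluator sketch: NOT the official FrontierCS harness.
# It illustrates objective partial scoring for an open-ended task:
# a submitted program produces a solution, and the evaluator scores its
# quality against an expert reference, with no claim that either is optimal.
import json
import math
import subprocess


def tour_length(points, order):
    """Total length of a closed tour visiting points in the given order."""
    return sum(
        math.dist(points[order[i]], points[order[(i + 1) % len(order)]])
        for i in range(len(order))
    )


def evaluate(candidate_cmd, instance, reference_order, time_limit=60):
    """Run a candidate program on a TSP-like instance and return a partial score.

    candidate_cmd   -- argv list for the submitted executable (assumed to read
                       the instance as JSON on stdin and print a permutation of
                       city indices, whitespace-separated)
    instance        -- list of (x, y) city coordinates
    reference_order -- expert reference tour used to normalize the score
    """
    proc = subprocess.run(
        candidate_cmd,
        input=json.dumps(instance),
        capture_output=True,
        text=True,
        timeout=time_limit,
    )
    if proc.returncode != 0:
        return 0.0  # merely crashing scores nothing
    order = [int(tok) for tok in proc.stdout.split()]
    if sorted(order) != list(range(len(instance))):
        return 0.0  # not a valid tour: "it runs" is not enough
    cand = tour_length(instance, order)
    ref = tour_length(instance, reference_order)
    # Partial credit: 1.0 for matching the reference, above 1.0 for beating it,
    # smoothly less for worse tours. The exact formula is an assumption.
    return ref / cand
```

A submission would then be scored with something like `evaluate(["python", "solution.py"], cities, ref_tour)`. The actual FrontierCS evaluators are problem-specific; the point is only that solution quality can be measured objectively even when the optimum is unknown.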

Community

Paper submitter

https://github.com/FrontierCS/Frontier-CS

Introducing FrontierCS. LiveCodeBench Pro is already a challenging competitive programming benchmark, so why push one step further? The motivation behind FrontierCS is simple: we love measuring intelligence with problems that have a "single", "correct", "optimal" answer, but what really matters at the frontier in practice is often open-ended problems where the optimum is unknown, yet every step can be objectively scored and verified.

In our experiments, we kept running into a sobering pattern: simply scaling up reasoning compute does not close the gap. Models often settle for a locally feasible "it runs" solution, then stall on algorithmic and system-design choices that are still clearly bad. We still have a long way to go. Let's build Evolving Challenges for Evolving Intelligence!

