Code2Math: Can Your Code Agent Effectively Evolve Math Problems Through Exploration?
Abstract
Code agents can autonomously generate more complex mathematical problems by evolving existing ones, providing a scalable solution for creating high-difficulty reasoning problems.
As large language models (LLMs) advance their mathematical capabilities toward the IMO level, the scarcity of challenging, high-quality problems for training and evaluation has become a significant bottleneck. Simultaneously, recent code agents have demonstrated sophisticated skills in agentic coding and reasoning, suggesting that code execution can serve as a scalable environment for mathematical experimentation. In this paper, we investigate the potential of code agents to autonomously evolve existing math problems into more complex variations. We introduce a multi-agent framework designed to perform problem evolution while validating the solvability and increased difficulty of the generated problems. Our experiments demonstrate that, given sufficient test-time exploration, code agents can synthesize new, solvable problems that are structurally distinct from and more challenging than the originals. This work provides empirical evidence that code-driven agents can serve as a viable mechanism for synthesizing high-difficulty mathematical reasoning problems within scalable computational environments. Our data is available at https://github.com/TarferSoul/Code2Math.
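The abstract's evolve-and-validate idea can be illustrated with a minimal sketch. This is not the paper's implementation: the code agent is stood in for by a simple programmatic transformation, the problem is a toy linear equation, and all names (`solve`, `evolve`, `evolve_with_validation`) are hypothetical. It only shows the shape of a loop that proposes a harder variant and uses code execution to confirm the variant remains solvable.

```python
# Illustrative sketch (NOT the paper's method): evolve a toy problem
# and validate each candidate by actually solving it in code.
import random

def solve(coeffs):
    """Solve a0 + a1*x = a2 for x; return None if unsolvable."""
    a0, a1, a2 = coeffs
    if a1 == 0:
        return None
    return (a2 - a0) / a1

def evolve(coeffs, rng):
    """Propose a harder variant by scaling and shifting coefficients."""
    a0, a1, a2 = coeffs
    k = rng.randint(2, 5)
    return (a0 * k + rng.randint(1, 9), a1 * k, a2 * k)

def evolve_with_validation(problem, steps=3, seed=0):
    """Iteratively evolve `problem`, keeping only solvable candidates."""
    rng = random.Random(seed)
    history = [problem]
    for _ in range(steps):
        candidate = evolve(history[-1], rng)
        if solve(candidate) is None:
            continue  # reject; a real agent would re-explore here
        history.append(candidate)
    return history

chain = evolve_with_validation((1, 2, 5))  # encodes 1 + 2x = 5
```

In the paper's framework this role is played by cooperating agents with full code execution, but the control flow (propose, execute, verify solvability, keep or reject) is the same in spirit.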
Community
This is an automated message from the Librarian Bot. The following similar papers were recommended by the Semantic Scholar API:
- Enhancing Mathematical Problem Solving in LLMs through Execution-Driven Reasoning Augmentation (2026)
- Dr. Zero: Self-Evolving Search Agents without Training Data (2026)
- Scaling the Scaling Logic: Agentic Meta-Synthesis of Logic Reasoning (2026)
- Scaling Agentic Verifier for Competitive Coding (2026)
- Proof-RM: A Scalable and Generalizable Reward Model for Math Proof (2026)
- Programming over Thinking: Efficient and Robust Multi-Constraint Planning (2026)
- MAS-ProVe: Understanding the Process Verification of Multi-Agent Systems (2026)