
FormulaCode: Evaluating Agentic Optimization on Large Codebases

The University of Texas at Austin, California Institute of Technology, Cornell University

Abstract

Large language model (LLM) coding agents increasingly operate at the repository level, motivating benchmarks that evaluate their ability to optimize entire codebases under realistic constraints. Existing code benchmarks largely rely on synthetic tasks, binary correctness signals, or single-objective evaluation, limiting their ability to assess holistic optimization behavior.

We introduce FormulaCode, a benchmark for evaluating agentic optimization on large, real-world codebases with fine-grained, multi-objective performance metrics.

FormulaCode is a live benchmark comprising 957 performance bottlenecks mined from scientific Python repositories on GitHub, each paired with an expert-authored patch and, on average, 264.6 community-maintained performance workloads. This design enables evaluation of the full optimization lifecycle (triage, diagnosis, and resolution) under realistic correctness and performance constraints. Our evaluations reveal that repository-scale, multi-objective optimization remains a major challenge for frontier LLM agents.

Benchmark Design

Each FormulaCode task evaluates the ability of an agent to optimize a real-world codebase under strict correctness constraints. A task begins with a baseline repository, which represents the unmodified implementation. The agent operates on the baseline and produces a modified version of the repository by making arbitrary repository-level edits.
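To make the task structure concrete, the sketch below shows one plausible way to represent a single task in code. The field names are our own illustration and are not FormulaCode's actual schema.

# Hypothetical representation of a FormulaCode task; field names are illustrative only.
from dataclasses import dataclass, field

@dataclass
class FormulaCodeTask:
    """One optimization task: a baseline repository, its workloads, and the expert reference."""
    repo_url: str                # GitHub repository the task was mined from
    baseline_commit: str         # unmodified implementation the agent starts from
    expert_patch: str            # expert-authored patch from the original performance PR
    workloads: list[str] = field(default_factory=list)  # community-maintained performance workloads
    issue_description: str = ""  # natural-language description of the bottleneck

# The agent checks out baseline_commit, may edit any file in the repository,
# and its output is simply the modified working tree (equivalently, a diff).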

Performance evaluation proceeds by executing the full set of workloads on both the baseline and the agent-modified code and comparing their measured outcomes. Improving performance on one workload may degrade performance on others. As a result, optimization in FormulaCode is inherently multi-objective: agents must reason about trade-offs across subsystems and deliver improvements that are broad and consistent rather than localized to a single execution path.
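The comparison step can be pictured with a minimal sketch, assuming a simple timing harness. The exact aggregation FormulaCode reports (e.g., its RP Adv metric) may differ; the geometric-mean speedup below is only an illustrative aggregate.

import math

def per_workload_speedups(baseline_times: dict[str, float],
                          modified_times: dict[str, float]) -> dict[str, float]:
    # Speedup > 1.0 means the agent's edit made that workload faster.
    return {name: baseline_times[name] / modified_times[name] for name in baseline_times}

def geometric_mean_speedup(speedups: dict[str, float]) -> float:
    # Aggregate across workloads; regressions on some workloads drag the score down.
    logs = [math.log(s) for s in speedups.values()]
    return math.exp(sum(logs) / len(logs))

# Example: a patch that speeds up one hot path but regresses another.
baseline = {"groupby_sum": 1.20, "rolling_mean": 0.80}   # seconds per workload
modified = {"groupby_sum": 0.60, "rolling_mean": 0.96}
speedups = per_workload_speedups(baseline, modified)      # {'groupby_sum': 2.0, 'rolling_mean': ~0.83}
print(geometric_mean_speedup(speedups))                   # ~1.29x overall despite the regression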

Dataset Construction

FormulaCode consists of multi-workload real-world code optimization problems from 70 repositories. We developed an automated four-stage pipeline that extracts these problems:

1. Repository Scraping

We crawl GitHub for repositories with high-quality, expert-defined performance workloads.

2. Attribute Filtering

We filter out candidate pull requests whose primary intent was not performance-related, using rule-based and LLM-based filters.

3. Environment Synthesis

We synthesize environment-building scripts using a reflexive LLM agent so that terminal-interface tools function correctly.

4. Statistical Validation

We discard all candidate PRs that do not show a statistically significant improvement on their performance workloads.
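A simplified sketch of the kind of significance check this stage implies is shown below; the actual test, number of repetitions, and thresholds used by FormulaCode are not specified here, so treat this as an assumption-laden illustration.

from scipy.stats import mannwhitneyu

def shows_significant_speedup(pre_times: list[float],
                              post_times: list[float],
                              alpha: float = 0.05) -> bool:
    # pre_times / post_times are repeated timings of one workload on the commits
    # immediately before and after the candidate performance PR.
    # One-sided test: is the post-patch distribution shifted toward smaller runtimes?
    _, p_value = mannwhitneyu(post_times, pre_times, alternative="less")
    return p_value < alpha

# A candidate PR would be kept only if its workloads pass a check of this kind, e.g.:
# keep = any(shows_significant_speedup(pre[w], post[w]) for w in workloads)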

Key Findings

Agents Improve Runtime but Underperform Experts

Agents can generally improve runtime performance, but they still fall short of human experts.

Local vs. Global Optimization

Agents are better at local, function-level optimization than at repository-level optimization.

Optimization Strategy Strengths

Agents excel at certain optimization strategies (e.g., parallelization and batching) but struggle with others (e.g., rewriting code as vectorized operations).
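The toy contrast below illustrates the two strategy families; the snippets are our own examples rather than code from FormulaCode tasks.

import numpy as np
from concurrent.futures import ThreadPoolExecutor

x = np.random.rand(1_000_000)

# (a) Batching/parallelizing an existing function over chunks: a structurally simple
#     transformation that agents tend to apply successfully.
def chunk_sum_of_squares(chunk: np.ndarray) -> float:
    return float((chunk ** 2).sum())

chunks = np.array_split(x, 8)
with ThreadPoolExecutor() as pool:
    total = sum(pool.map(chunk_sum_of_squares, chunks))

# (b) Replacing an element-wise Python loop with an equivalent vectorized operation:
#     requires spotting the numerical identity, which agents more often miss.
slow = sum(v * v for v in x)      # interpreted loop over a NumPy array
fast = float(np.dot(x, x))        # vectorized equivalent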

Long-Tail Repository Performance

Agent performance relative to experts varies dramatically with repository popularity: agents perform worst on repositories in the 4th popularity quintile and best on those in the 2nd.

Cost Efficiency

Despite being more expensive per call, agents built on frontier LLMs are more cost-effective overall than those built on open-weight models.

Multi-Workload Tradeoffs

Compared to human experts, agents make less favorable performance-cost trade-off decisions.

Compact Leaderboard

Agent        Model               Rank   RP Adv    Speedup
OpenHands    Claude 4.0 Sonnet   1      -0.0112   1.0539x
OpenHands    Qwen 3 Coder        2      -0.0301   1.0346x
OpenHands    GPT-5               3      -0.0209   1.0825x
Terminus 2   Claude 4.0 Sonnet   4      -0.0410   1.0987x
Terminus 2   Qwen 3 Coder        5      -0.0454   1.0677x
Terminus 2   Gemini 2.5 Pro      6      -0.0433   1.0963x
Terminus 2   GPT-5               7      -0.0504   1.0585x

Don't see your model? Submit it!

To evaluate an agent on FormulaCode, follow the Installation instructions and run:

$ tb run -d formulacode -a [your-agent-name] -m [your-model-name]