ABC-Bench: Benchmarking Agentic Backend Coding in Real-World Development

Jie Yang, Honglin Guo, Li Ji, Jiazheng Zhou, Rui Zheng, Zhikai Lei, Shuo Zhang, Zhiheng Xi, Shichun Liu, Yuxin Wang, Bo Wang, Yining Zheng, Tao Gui, Xipeng Qiu
arXiv ID
2601.11077
Published
January 16, 2026
Authors
14
Hugging Face Likes
61
Comments
4

Abstract

The evolution of Large Language Models (LLMs) into autonomous agents has expanded the scope of AI coding from localized code generation to complex, repository-level, execution-driven problem solving. However, current benchmarks predominantly evaluate code logic in static contexts, neglecting the dynamic, full-process requirements of real-world engineering, particularly in backend development, which demands rigorous environment configuration and service deployment. To address this gap, we introduce ABC-Bench, a benchmark explicitly designed to evaluate agentic backend coding within a realistic, executable workflow. Using a scalable automated pipeline, we curated 224 practical tasks spanning 8 languages and 19 frameworks from open-source repositories. Distinct from previous evaluations, ABC-Bench requires agents to manage the entire development lifecycle, from repository exploration to instantiating containerized services and passing external end-to-end API tests. Our extensive evaluation reveals that even state-of-the-art models struggle to deliver reliable performance on these holistic tasks, highlighting a substantial disparity between current model capabilities and the demands of practical backend engineering. Our code is available at https://github.com/OpenMOSS/ABC-Bench.
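To make the evaluation flow concrete, the sketch below illustrates one plausible shape of an external end-to-end check against a containerized backend service: launch the agent-built image, call an API endpoint from outside the container, and grade on the observed response. This is a minimal illustration, not the official ABC-Bench harness; the image name, port, and endpoint are hypothetical placeholders.

```python
# Illustrative sketch (not the official ABC-Bench harness): an external
# end-to-end API test against a containerized backend service.
# IMAGE, PORT, and the /api/items endpoint are hypothetical placeholders.
import json
import subprocess
import time
import urllib.request

IMAGE = "abc-bench/task-example:latest"  # hypothetical task image
PORT = 8080                              # hypothetical service port


def start_service() -> str:
    """Launch the candidate service in a container and return its ID."""
    container_id = subprocess.check_output(
        ["docker", "run", "-d", "-p", f"{PORT}:{PORT}", IMAGE],
        text=True,
    ).strip()
    time.sleep(5)  # crude wait for the service to become ready
    return container_id


def run_e2e_test() -> bool:
    """Call a hypothetical endpoint and check status code and payload shape."""
    with urllib.request.urlopen(f"http://localhost:{PORT}/api/items") as resp:
        if resp.status != 200:
            return False
        payload = json.loads(resp.read())
    return isinstance(payload, list)


def stop_service(container_id: str) -> None:
    """Tear down the container after grading."""
    subprocess.run(["docker", "rm", "-f", container_id], check=False)


if __name__ == "__main__":
    cid = start_service()
    try:
        print("PASS" if run_e2e_test() else "FAIL")
    finally:
        stop_service(cid)
```

Because the test runs outside the repository and only observes the deployed service's HTTP behavior, it exercises the whole lifecycle the paper describes: environment setup, service deployment, and API correctness, rather than isolated code logic.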

Keywords

Large Language Models, agentic backend coding, executable workflow, development lifecycle, containerized services, end-to-end API tests
