About ASB
What AgenticSwarmBench Is
AgenticSwarmBench (ASB) is an open-source inference performance benchmark purpose-built for agentic swarm workloads - the kind of LLM request patterns that Claude Code, Cursor, Windsurf, and Copilot generate in practice.
It measures how fast your serving stack runs under growing multi-turn contexts (6K to 400K tokens), with tool schemas, file contents, error traces, and concurrent agents. No existing benchmark tests these specific access patterns.
ASB produces a clear verdict - 🟢 GOOD, 🟡 MARGINAL, or 🔴 POOR - answering one question: "Is this endpoint good enough for agentic swarm?"
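To illustrate how a per-metric verdict system like this can roll up into a single grade, here is a minimal sketch. The thresholds, metric names, and worst-grade-wins rule are assumptions for illustration, not ASB's actual grading logic.

```python
# Hypothetical per-metric grading with a worst-grade-wins rollup.
# Thresholds and metrics below are invented for illustration only.
GOOD, MARGINAL, POOR = "GOOD", "MARGINAL", "POOR"

def grade_metric(value, good_max, marginal_max):
    """Grade a latency-style metric where lower is better."""
    if value <= good_max:
        return GOOD
    if value <= marginal_max:
        return MARGINAL
    return POOR

def overall_verdict(grades):
    """The overall verdict is the worst per-metric grade."""
    if POOR in grades:
        return POOR
    if MARGINAL in grades:
        return MARGINAL
    return GOOD

grades = [
    grade_metric(0.8, 1.0, 2.0),  # e.g. time-to-first-token (s)
    grade_metric(45, 30, 60),     # e.g. inter-token latency (ms)
]
print(overall_verdict(grades))
```

The worst-grade-wins rollup is deliberately conservative: a single poor metric marks the whole endpoint poor, which matches how a single slow stage stalls an agentic pipeline.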
Built by SwarmOne
ASB was created and is maintained by SwarmOne - the AI-native cloud for agentic workloads. SwarmOne provides optimized infrastructure for running agentic swarms at scale, and ASB was born from the need to rigorously benchmark that infrastructure.
Project Architecture
agentic-swarm-bench/
├── asb/
│   ├── cli/        # CLI entry points (speed, eval, agent, record, replay)
│   ├── core/       # Benchmark engine, request generation, metrics
│   ├── tasks/      # 110 agentic tasks (P1-P110)
│   ├── context/    # Context profile builder & cache control
│   ├── reporting/  # Report generation, verdicts, comparison
│   └── proxy/      # Recording proxy for asb agent/record
├── tests/          # Test suite
├── docker/         # Dockerfile and docker-compose
├── docs/           # Documentation source
└── examples/       # Example configs and workloads
Key Features
- 110 agentic tasks across 6 difficulty tiers (trivial → expert + multi-language)
- 7 context profiles simulating real session growth (6K → 400K tokens)
- 5 CLI modes: speed, eval, agent, record, replay
- Cold vs warm cache measurement for prefix caching evaluation
- Concurrent user simulation (1, 8, 32+ users)
- Automated verdict system with per-metric grading
- Docker support for reproducible benchmarking
- JSON output for CI/CD integration
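The JSON output makes ASB easy to wire into a CI/CD gate. Below is a hypothetical sketch: the report filename and the `"verdict"` field are assumptions for illustration, not ASB's documented schema.

```python
# Hypothetical CI gate consuming an ASB JSON report.
# The filename and "verdict" field name are assumptions, not ASB's schema.
import json

def ci_gate(report_path):
    """Return a CI exit code: 0 (pass) unless the verdict is POOR."""
    with open(report_path) as f:
        report = json.load(f)
    verdict = report.get("verdict", "POOR")  # fail closed if field missing
    print(f"ASB verdict: {verdict}")
    return 0 if verdict in ("GOOD", "MARGINAL") else 1
```

A gate like this would run after the benchmark step in a pipeline and fail the build when an endpoint regresses to a POOR verdict.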
License
AgenticSwarmBench is open source under the Apache License 2.0. It is free to use, modify, and distribute.
How to Cite
If you use ASB in research or publications, please cite:
@software{agenticswarmbench2026,
title = {AgenticSwarmBench},
author = {SwarmOne},
url = {https://github.com/SwarmOne/agentic-swarm-bench},
year = {2026},
note = {Open-source benchmark for LLM inference under agentic swarm workloads}
}