Case Study

Transforming Enterprise AI Research at Scale

Fujitsu Research partnered with SwarmOne to optimize their Azure-based AI research infrastructure, cutting costs by up to 80%, accelerating training by 3.6x, and eliminating infrastructure setup overhead entirely.

“SwarmOne boosted personnel efficiency by about 90%, significantly reduced training costs, and enhanced delivery, making us far more competitive in our market.”

Dr. Michael Erlihson
AI Tech Lead, Salt Security

Company

Fujitsu Research

Industry

Enterprise Technology & Research

Original Infrastructure

Microsoft Azure GPUs (manual configuration)

The Challenge

Enterprise-Scale AI Research Bottlenecks

Fujitsu Research teams were conducting cutting-edge AI research involving large language models and complex batch inference workloads on Azure. Their manual infrastructure approach created persistent bottlenecks that slowed every experiment.

  • High Azure costs: training at $2.45/hour with significant waste from manual GPU provisioning and idle VMs between jobs
  • Slow research cycles: manual GPU configuration and batch size optimization delayed every new experiment
  • Infrastructure overhead: researchers spent 2–4 days configuring Azure resources for each new workload before training could begin
  • Idle resource waste: VMs remained running between jobs, burning budget unnecessarily
  • Limited optimization: manual tuning couldn't achieve optimal batch sizes, parallelism, or GPU utilization

“Every new research project meant days of infrastructure work before we could even start training. We were spending more time managing Azure than doing actual AI research.”

The Solution

Why Fujitsu Chose SwarmOne

Fujitsu Research needed infrastructure that could keep pace with their research velocity: something that would optimize automatically and eliminate setup time entirely.

  • Autonomous optimization: SwarmOne's engine automatically determined optimal batch sizes, GPU configurations, and resource allocation
  • Instant deployment: under 5 minutes from code to training, eliminating days of manual Azure setup
  • Zero idle waste: automatic resource provisioning and teardown meant paying only for active computation
  • Flexible performance tiers: choose between cost optimization (same A100s) or speed optimization (higher-tier GPUs)
  • Seamless Azure integration: no changes to existing workflows, codebases, or cloud accounts

“SwarmOne handles all the complexity of Azure GPU orchestration autonomously. Our researchers just write Python code and press go.”

The Impact

Results That Speak for Themselves

80%

Cost Reduction

3.6x

Speed Improvement

95%+

Setup Time Savings

Zero

GPU Waste

LLM Training: Cost-optimized tier delivered 51% lower cost and 1.26x faster training on the same A100s. Speed-optimized tier achieved 1.88x faster training with 22% lower total cost.
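The tier figures above combine a cost delta with a speedup. A quick back-of-the-envelope sketch, using only the percentages and the $2.45/hour baseline quoted in this case study (the 10-hour job length is a made-up assumption for illustration), shows how the two tiers compare:

```python
# Illustrative arithmetic only, using figures quoted in this case study.
BASELINE_RATE = 2.45   # $/hour on manually provisioned Azure A100s
JOB_HOURS = 10.0       # hypothetical training-job length (assumption)

baseline_cost = BASELINE_RATE * JOB_HOURS

# Cost-optimized tier: 51% lower total cost, 1.26x faster (same A100s)
cost_tier = {"cost": baseline_cost * (1 - 0.51), "hours": JOB_HOURS / 1.26}

# Speed-optimized tier: 22% lower total cost, 1.88x faster (higher-tier GPUs)
speed_tier = {"cost": baseline_cost * (1 - 0.22), "hours": JOB_HOURS / 1.88}

print(f"Baseline:   ${baseline_cost:.2f} over {JOB_HOURS:.1f} h")
print(f"Cost tier:  ${cost_tier['cost']:.2f} over {cost_tier['hours']:.1f} h")
print(f"Speed tier: ${speed_tier['cost']:.2f} over {speed_tier['hours']:.1f} h")
```

The point of the sketch is that both tiers cut total spend even though the speed tier uses pricier GPUs: finishing 1.88x faster means fewer billable hours, which outweighs the higher hourly rate.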

Batch Inference: 60% lower cost and up to 3.6x faster inference. Setup dropped from a full day of configuration to under 5 minutes.

“SwarmOne paid for itself in the first week. The combination of cost savings and speed improvements fundamentally changed our research velocity.”

- Fujitsu Research AI Team

Experience SwarmOne Today

Schedule a demo and see how SwarmOne can transform your AI infrastructure.