‘The next evolution of the Arm compute platform’: AGI CPU is Arm’s first in-house AI chip, Meta and OpenAI sign up as early clients — and it might have picked the perfect time to launch


Industry looks beyond x86 for efficient large-scale AI data center deployment

Arm has extended its compute platform into production silicon for the first time with the introduction of what it calls the “next evolution of the Arm compute platform,” the AGI CPU.

The company says the CPU is designed specifically for AI data centers, supporting agentic AI workloads, which involve continuously running agents capable of reasoning, planning, and acting.

The processor features up to 136 Neoverse V3 cores per CPU, with 6GB/s memory bandwidth per core and sub-100ns latency, allowing higher workload density and improved system efficiency.

Performance and capacity

The Arm AGI CPU promises deterministic performance under sustained load with a 300-watt TDP and a dedicated core per program thread.

The processor supports air-cooled 1U server chassis with up to 8,160 cores per rack, and liquid-cooled deployments reaching 45,000 cores per rack.
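Taken at face value, those figures imply roughly 60 CPUs per air-cooled rack and around 330 per liquid-cooled rack. A quick back-of-the-envelope check (the per-rack CPU counts are inferred from the quoted numbers, not stated by Arm):

```python
# Back-of-the-envelope rack density from the quoted figures.
CORES_PER_CPU = 136            # Neoverse V3 cores per AGI CPU (as announced)
AIR_COOLED_CORES = 8_160       # quoted air-cooled 1U rack figure
LIQUID_COOLED_CORES = 45_000   # quoted liquid-cooled rack figure

air_cooled_cpus = AIR_COOLED_CORES / CORES_PER_CPU        # 60 CPUs per rack
liquid_cooled_cpus = LIQUID_COOLED_CORES / CORES_PER_CPU  # roughly 331 CPUs per rack

print(air_cooled_cpus, round(liquid_cooled_cpus))
```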

Compared with x86 CPUs, the Arm AGI CPU can provide more than double the performance per rack, supporting larger AI workloads while remaining energy efficient.

These capabilities aim to improve compute density, accelerator utilization, and overall infrastructure efficiency.

Meta serves as the lead partner and co-developer of the Arm AGI CPU, integrating it with its Meta Training and Inference Accelerator (MTIA) to optimize data center performance.

Early commercial adoption also includes the likes of OpenAI, Cerebras, Cloudflare, Positron, Rebellions, SAP, and SK Telecom.

Arm is collaborating with OEMs and ODMs such as Lenovo, Supermicro, Quanta Computer, and ASRock Rack to deliver early systems, with broader availability expected in the second half of 2026.

More than 50 industry leaders across hyperscale, cloud, semiconductor, memory, networking, software, and system design sectors support the CPU’s rollout.

“Over the last decade, we’ve partnered closely with Arm in building Graviton here at AWS, and it’s been a remarkable success — the majority of compute capacity AWS added to our fleet in 2025 was powered by Graviton,” said James Hamilton, SVP and Distinguished Engineer, Amazon.

“This collaboration has been great for both companies, and Graviton continues to deliver better price/performance for our customers.”

Industry partners also pointed to the broader infrastructure implications of the new CPU.

“The new Arm AGI CPU will further unlock the Arm ecosystem for a broad range of customers, creating new opportunities for everyone…” said Charlie Kawwas, President, Semiconductor Solutions Group, Broadcom Inc.

“As Broadcom builds the world’s most capable XPU and networking solutions for hyperscalers…our partnership with Arm has enabled us to move with unmatched intent and speed.”

The Arm AGI CPU is intended to serve as a foundation for agentic AI workloads, enabling organizations to deploy AI tools at scale while maintaining high efficiency.

The processor supports large-scale deployment of AI applications, including accelerator management, control plane processing, and cloud- or enterprise-based API and task hosting.

That said, the Arm AGI CPU’s success will depend on data center adoption, integration with existing accelerators and memory, and proven performance gains over alternatives.
