Firing a massive shot across the bows of Intel and AMD, Nvidia just dropped Vera at its GTC 2026 conference in San Jose. This new 88-core Arm-based data center CPU line signals a brutal pivot for the GPU giant. Instead of merely building companion chips for its AI accelerators, Nvidia is now coming straight for the mainstream server CPU crown.
Vera zeroes in on general-purpose data center workloads, though the focus still skews heavily toward AI. Expect it to muscle through agentic frameworks, scripting-heavy pipelines, data analytics, and grueling code compilation tasks.
Rethinking Core Architecture and Multi-Threading
Vera gives each hardware thread its own dedicated execution resources, so threads can make progress without fighting for datapath access. Nvidia claims this substantially boosts instruction-level parallelism compared with traditional simultaneous multithreading, where threads contend for a core's shared pipelines.
Feeding those 88 cores requires serious bandwidth, which arrives via a fresh generation of Nvidia's Scalable Coherency Fabric. Pushing an absurd 1.2 TB/s of aggregate memory bandwidth, the CPU supports up to 1.5 TB of SOCAMM LPDDR5 or LPDDR5X memory per chip. Spread evenly across all 88 cores, that works out to a blistering 13.6 GB/s apiece.
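The per-core figure follows directly from the headline numbers; a quick back-of-the-envelope check (using only the 1.2 TB/s and 88-core figures from the announcement) confirms it:

```python
# Sanity-check Nvidia's per-core bandwidth figure from the announced specs.
aggregate_tb_s = 1.2   # aggregate memory bandwidth, TB/s
cores = 88             # Vera core count

per_core_gb_s = aggregate_tb_s * 1000 / cores
print(f"~{per_core_gb_s:.1f} GB/s per core")  # prints "~13.6 GB/s per core"
```

In practice a single core cannot necessarily saturate its "share"; the figure is an even-split average, not a guaranteed per-core allocation.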
High-Density Racks and AI Infrastructure
Selling loose chips isn't really the endgame here, so the company also unveiled a gargantuan liquid-cooled rack aggregating 256 Vera CPUs, BlueField-4 DPUs, and ConnectX-class SuperNICs. This leviathan supports up to 400 TB of memory. It also exposes more than 22,500 CPU environments backed by over 22,000 Olympus cores in a single footprint.
If Nvidia's math holds up, this configuration delivers up to 6x higher CPU throughput and roughly 2x better performance in agentic AI workloads compared to legacy Intel or AMD server racks. Internal benchmarks show Vera outpacing the older Grace processor by 1.8x to 2.2x across scripting, compilation, and analytics tasks. Of course, independent testing will be required to see if these aggressive marketing numbers survive real-world deployments.
Platform Connectivity and Availability
Connectivity naturally includes industry staples like PCIe 6.0 and CXL 3.1 support. However, the real star is a second-generation NVLink-C2C interface pushing up to 1.8 TB/s of die-to-die bandwidth. That doubles the speed of Grace's 900 GB/s link and far outstrips what standard PCIe 6.0 can deliver.
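To put that gap in perspective, here is a rough comparison against a single PCIe 6.0 x16 link. The PCIe figure is an assumption based on the published spec (roughly 128 GB/s per direction, so about 256 GB/s bidirectional), and NVLink-C2C's 1.8 TB/s is taken as a bidirectional aggregate:

```python
# Rough comparison: second-gen NVLink-C2C vs one PCIe 6.0 x16 link.
# Assumptions: PCIe 6.0 x16 ~ 256 GB/s bidirectional; the 1.8 TB/s
# NVLink-C2C figure is treated as bidirectional aggregate.
nvlink_c2c_gb_s = 1800
pcie6_x16_bidir_gb_s = 256

ratio = nvlink_c2c_gb_s / pcie6_x16_bidir_gb_s
print(f"~{ratio:.0f}x a PCIe 6.0 x16 link")  # prints "~7x a PCIe 6.0 x16 link"
```

Even under these generous assumptions for PCIe, the proprietary link is roughly an order of magnitude ahead, which is why Nvidia keeps it for die-to-die traffic and relegates PCIe to peripheral I/O.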
Security also gets a massive overhaul, with confidential computing finally bridging both CPU and GPU domains. Encrypted execution and isolation now extend straight into GPU memory and across multi-socket nodes. This fixes a glaring blind spot that plagued the older Grace architecture.
Vera is already in production, with chips rolling off fab lines today and shipments to hardware partners slated for the second half of 2026.
