Supercharge your 3D rendering, video editing, simulations, and scientific computations.
Our AI PC lineup combines cutting-edge hardware with AI-ready features, so you can run and build AI tools.
This isn't an incremental refresh; it's a full architecture overhaul. Xeon 600 brings Granite Rapids to the workstation for the first time: up to 86 cores, native FP16 AI acceleration, and DDR5 memory speeds up to 67% faster than the previous generation.
Cores: ↑ 43% vs. prior gen (60 → 86)
Memory speed: ↑ 67% vs. prior gen (4,800 → 8,000 MT/s)
FP16 in AMX: new (PyTorch/TF native)
Xeon 600 replaces the Sapphire Rapids-based W-3500/W-2500 lineup with Granite Rapids cores on Intel 3 process, a new W890 chipset, and a new LGA4710 socket. Here's what changed.
This is the first time the server-class Granite Rapids die has been brought to a single-socket workstation. The Intel 3 process enables higher core density with better power efficiency, delivering up to 9% single-thread and 61% multi-thread improvement over the prior generation.
Previous Xeon workstation chips supported only BF16 and INT8 in AMX. Xeon 600 adds native FP16, the precision most PyTorch and TensorFlow inference workloads run in, eliminating type-conversion overhead. Intel claims 17% faster AI workloads from this change alone.
Previous gen topped out at 4,800 MT/s with standard RDIMMs. Xeon 600 adds support for Multiplexed Rank DIMMs (MRDIMM) at 8,000 MT/s — a 67% increase in memory bandwidth. Standard RDIMMs now run at 6,400 MT/s. Max capacity remains 4TB.
Not a drop-in upgrade — Xeon 600 requires new W890 motherboards on the LGA4710 socket. The platform also adds native CXL 2.0 support for memory expansion and USB4 connectivity. The previous W-3500/W-2500 line used W790 on LGA4677.
Xeon 600 uses exclusively Performance cores with Hyper-Threading. The flagship 698X delivers 86 cores and 172 threads at up to 4.8 GHz turbo — a 43% core count increase over the previous 60-core W9-3595X, without raising the 350W TDP ceiling.
The lineup spans 12 to 86 cores across 11 SKUs, with X-series models fully unlocked for overclocking. Intel and ASUS have already used Xeon 600 to set 10 overclocking world records.
Intel AMX on Xeon 600 now handles FP16 matrix operations natively, alongside BF16 and INT8. This matters because FP16 is the most common precision for PyTorch and TensorFlow inference workloads; previous Xeon chips required type conversion, which added overhead.
Real-world results: Topaz Labs Video Upscaler runs 29% faster, and SPECworkstation 4.0 AI benchmarks show a 17% improvement over the W9-3595X. Combined with multi-GPU PCIe 5.0 bandwidth, this platform handles both the CPU-bound and GPU-offloaded portions of modern AI pipelines.
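To see what that conversion overhead looks like in practice, here is a minimal sketch using NumPy as a stand-in for a framework tensor library (the shapes and workload are illustrative, not from any benchmark above). On hardware without native FP16 matrix units, half-precision data has to be upcast before compute; with native support, it can be consumed as-is.

```python
import numpy as np

# Weights and activations stored in half precision, common for inference.
w = np.random.rand(512, 512).astype(np.float16)
x = np.random.rand(32, 512).astype(np.float16)

# Without native FP16 compute, each matmul implies an explicit upcast to
# float32 first: extra conversion work and memory traffic on every call.
y_converted = x.astype(np.float32) @ w.astype(np.float32)

# With native FP16 matrix units, the half-precision data is used directly.
y_native = x @ w

print(y_native.dtype)  # float16
```

The point of the sketch is the extra `astype` step, not NumPy itself; in a framework like PyTorch, the same upcast happens implicitly when the CPU lacks a native FP16 path.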
Xeon 600 supports octa-channel DDR5 across all X-series SKUs. Standard RDIMMs now run at 6,400 MT/s — already a 33% increase over the previous 4,800 MT/s ceiling. But the real upgrade is MRDIMM support at 8,000 MT/s, which Threadripper doesn't yet offer.
This directly impacts memory-bound workloads like fluid dynamics simulation, large-dataset manipulation, and AI training data preprocessing. ECC remains standard across the lineup for data integrity during multi-day jobs.
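The transfer rates above translate to theoretical peak bandwidth with a back-of-the-envelope calculation, assuming eight channels with a 64-bit (8-byte) data bus each; real-world throughput will be lower.

```python
# Theoretical peak DDR5 bandwidth: channels x transfers/sec x bytes/transfer.
# Assumes 8 memory channels and a 64-bit (8-byte) bus per channel.
CHANNELS = 8
BYTES_PER_TRANSFER = 8  # 64-bit channel width

def peak_gb_s(mt_per_s: int) -> float:
    """Peak bandwidth in GB/s for a given DDR5 transfer rate in MT/s."""
    return CHANNELS * mt_per_s * 1_000_000 * BYTES_PER_TRANSFER / 1e9

print(peak_gb_s(4800))  # prior-gen RDIMM: 307.2 GB/s
print(peak_gb_s(6400))  # Xeon 600 RDIMM:  409.6 GB/s
print(peak_gb_s(8000))  # Xeon 600 MRDIMM: 512.0 GB/s
print(peak_gb_s(8000) / peak_gb_s(4800) - 1)  # ~0.67, the 67% uplift
```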
The Xeon 600 platform delivers 128 lanes of PCIe 5.0, giving motherboard designers room for multiple x16 GPU slots alongside NVMe storage and high-speed networking — all at full Gen 5 bandwidth. Depending on your board, that can mean up to 7x PCIe 5.0 x16 physical slots.
For AI training, rendering, and simulation workloads that scale with GPU count, this I/O budget means you're not bottlenecked at the bus level when running multi-GPU configurations.
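As a sketch of how a 128-lane budget might be spent, here is the lane math for one hypothetical build (the device mix is illustrative, not a specific motherboard configuration; per-lane throughput assumes PCIe 5.0 at 32 GT/s with 128b/130b encoding).

```python
# Dividing 128 PCIe 5.0 lanes among devices in a hypothetical workstation.
TOTAL_LANES = 128
GBPS_PER_LANE = 32 * (128 / 130) / 8  # ~3.94 GB/s per lane, per direction

# Illustrative build: four x16 GPUs, four x4 NVMe drives, one x8 NIC.
devices = {"GPU x16": (4, 16), "NVMe x4": (4, 4), "NIC x8": (1, 8)}
used = sum(count * lanes for count, lanes in devices.values())

print(used, "lanes used,", TOTAL_LANES - used, "to spare")  # 88 used, 40 spare
print(round(16 * GBPS_PER_LANE, 1), "GB/s per x16 GPU slot, each direction")
```

Even with four GPUs at full x16 width, this budget leaves 40 lanes for additional storage or networking, which is the point of the 128-lane platform.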
11 SKUs from 12 to 86 cores. All on LGA4710, all Q1 2026. X-series models are overclocking-unlocked.
Our solutions architects can help you match the right processor, memory configuration, and GPU setup to your workload.