At its GTC 2021 keynote, NVIDIA unveiled its first datacenter-focused CPU. Named Grace, it's built to handle giant-scale AI workloads and next-generation NLP models with more than a trillion parameters, among other things.
Don't expect this CPU to reach consumer-grade machines anytime soon. For now, Grace is aimed at much bigger clients, including the United States Department of Energy's Los Alamos National Laboratory and the Swiss National Supercomputing Centre (CSCS). These are the first two major organizations to announce they'll be using Grace-powered machines for their processing needs.
To explain why the U.S. DoE and CSCS are going with Grace, NVIDIA's blog sheds some light on the CPU's capabilities:
Grace is a highly specialized processor targeting workloads such as training next-generation NLP models that have more than 1 trillion parameters. When tightly coupled with NVIDIA GPUs, a Grace CPU-based system will deliver 10x faster performance than today’s state-of-the-art NVIDIA DGX™-based systems, which run on x86 CPUs.
Despite the CPU's immense processing capabilities, NVIDIA acknowledges that Grace serves a niche market: most datacenters are expected to keep their current CPUs rather than adopt Grace. Given that the processor delivers "10x the performance of today's fastest servers," according to NVIDIA, only a select group of organizations needs that kind of technology right now.
For anyone not running a supercomputer on language models with more than a trillion parameters, NVIDIA's consumer-grade components will remain the standard. The company offers some of the best graphics cards (including a favorite of ours, the RTX 3060 Ti) as well as useful software and features for professionals, such as NVIDIA Broadcast.