
Nvidia unveils updated AI chip design for faster generative AI tasks

  • Nvidia enhances AI chips for faster generative tasks.
  • Nvidia combines a central processor and H100 GPU to enhance AI processing power.
  • The configuration is optimized for efficient AI inference and improved performance.

Nvidia introduced an updated design for its advanced AI chips to accelerate generative AI tasks.

The updated version of the Grace Hopper Superchip adds more high-bandwidth memory, allowing it to support larger AI models for applications like ChatGPT.

Nvidia’s VP, Ian Buck, stated the configuration is optimized for efficient AI inference, vital for generative AI functions.

The design combines an Nvidia-developed central processor with an H100 GPU to boost AI processing power. This adaptation addresses the demands of advanced generative AI tasks.

“Having larger memory allows the model to remain resident on a single GPU and not require multiple systems or multiple GPUs in order to run,” Buck explained during a conference call with reporters.


As generative AI applications producing human-like text and images evolve, the AI models supporting them are expanding in size.

These larger models demand more memory to run effectively; fitting a model entirely within a single chip’s memory removes the need to spread it across separate chips and systems, which keeps performance high.
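As a rough illustration of the trade-off Buck describes, the sketch below estimates whether a model’s weights fit in a single GPU’s memory or must be split across several devices. The parameter counts, per-GPU memory sizes, and the 2-bytes-per-parameter (FP16) assumption are illustrative only and are not figures from Nvidia’s announcement.

```python
import math

# Back-of-the-envelope check: do a model's weights fit in one GPU's memory,
# or must the model be split across several GPUs? All figures below are
# illustrative assumptions, not specifications from Nvidia's announcement.

def weight_memory_gb(num_params: float, bytes_per_param: float = 2) -> float:
    """Memory needed to hold the weights alone (2 bytes/param ~ FP16),
    ignoring activations, KV cache, and framework overhead."""
    return num_params * bytes_per_param / 1e9

def gpus_needed(num_params: float, gpu_memory_gb: float) -> int:
    """Minimum GPU count required just to hold the weights."""
    return math.ceil(weight_memory_gb(num_params) / gpu_memory_gb)

if __name__ == "__main__":
    for params in (7e9, 70e9, 175e9):      # hypothetical model sizes
        for mem in (80, 140):              # hypothetical per-GPU memory (GB)
            print(f"{params / 1e9:>4.0f}B params, {mem} GB GPU -> "
                  f"{gpus_needed(params, mem)} GPU(s), "
                  f"~{weight_memory_gb(params):.0f} GB of weights")
```

The output makes the point concrete: once per-GPU memory exceeds the model’s weight footprint, the GPU count drops to one and the model no longer has to be partitioned across multiple devices or systems.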

“The additional memory simply increases the performance of the GPU,” Buck stated.

Nvidia’s upcoming GH200 configuration will launch in the second quarter of next year, according to Buck.

The company intends to offer two options: one with two chips for customer integration into systems, and another complete server system combining two Grace Hopper designs.