As one of the top players in the AI hardware space, Nvidia has announced a new chip, the GH200, designed to power artificial intelligence models. The chip is Nvidia’s bid to stay ahead of rivals like Google, AMD, and Amazon, which are intensifying their efforts to gain a foothold in the AI chip market.

With over 80% market share, Nvidia currently dominates the AI chip market, largely due to its highly specialized graphics processing units (GPUs). These GPUs have become the preferred choice for running large AI models that drive generative AI software, such as Google’s Bard and OpenAI’s ChatGPT.

Nvidia’s GH200 Superchip

Nvidia has, however, been facing a challenge in meeting the growing demand for its chips. As tech giants, cloud providers, and startups race to build their own AI models, GPU capacity has fallen into increasingly short supply.

To address this issue, Nvidia has introduced the GH200, which pairs its highest-end AI chip, the H100, with upgraded memory and an Arm-based central processor. The boost in processing capability is aimed at scaling out data centers and improving overall performance.


CEO Jensen Huang unveiling Nvidia’s AI Superchip at SIGGRAPH 2023. (Image courtesy: Nvidia Newsroom)

According to Nvidia CEO Jensen Huang, the GH200 chip is specifically designed for the scale-out of data centers. It offers a significant increase in memory capacity: 141GB, compared to the H100’s 80GB. This enhanced memory allows larger AI models to fit on a single system, eliminating the need to split inference across multiple GPUs or machines. Additionally, Nvidia has unveiled a system that combines two GH200 chips into a single computer, enabling even larger models to be processed efficiently.
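To see why the extra memory matters, consider a rough back-of-envelope estimate of what it takes just to hold a model’s weights in GPU memory. The Python sketch below is illustrative arithmetic only, not Nvidia’s sizing methodology; the model size and precision are assumptions, and real inference also needs additional memory for the KV cache, activations, and framework overhead.

def weight_memory_gb(params_billions: float, bytes_per_param: float) -> float:
    # Approximate memory needed to store model weights alone, in gigabytes.
    # params_billions * 1e9 parameters * bytes each / 1e9 bytes per GB
    # simplifies to params_billions * bytes_per_param.
    return params_billions * bytes_per_param

# A hypothetical 70-billion-parameter model served at 16-bit (2-byte) precision:
weights_gb = weight_memory_gb(70, 2)
print(f"Weights alone: ~{weights_gb:.0f}GB")              # ~140GB
print(f"Fits on one H100 (80GB)?   {weights_gb <= 80}")   # False
print(f"Fits on one GH200 (141GB)? {weights_gb <= 141}")  # True, with little to spare

Under these assumptions, a model that would have to be sharded across two H100s could, at least in principle, run on a single GH200.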

Nvidia’s AI Chip Cost Reduction

The GH200 chip targets inference, the stage at which trained models make predictions or generate content. Inference is computationally intensive and requires substantial processing power. By increasing memory capacity, Nvidia aims to lower the cost of running large language models at this stage, making the technology more accessible and affordable across industries.

Nvidia’s GH200 chip is set to be available from distributors in the second quarter of next year, with sampling opportunities anticipated by the end of this year. However, the company has yet to disclose the chip’s price. With the introduction of this new chip, Nvidia aims to maintain its dominant position in the AI hardware market and stay ahead of competitors like AMD, Google, and Amazon.

In a recent announcement, AMD introduced its own AI-oriented chip, the MI300X, which supports 192GB of memory and is marketed heavily for its AI inference capabilities. Meanwhile, companies like Google and Amazon are developing custom AI chips tailored to their own inference workloads.

With the release of the GH200, Nvidia looks set to hold onto its lead in AI hardware and continue shaping the future of AI.

Nvidia’s continuous innovation in the AI chip space demonstrates its commitment to staying at the forefront of AI technology. By addressing the demand for more powerful and efficient hardware, the company aims to support the development of larger and more complex AI models, further expanding the possibilities of artificial intelligence across industries.