Saturday, September 7, 2024

Nvidia reveals Blackwell AI chip GB200, set to launch this year.


Nvidia CEO Jensen Huang unveiled the Blackwell generation of AI graphics processors, starting with the GB200 chip. The new chip promises a significant performance boost, offering 20 petaflops of AI performance compared to the 4 petaflops of its predecessor, the Hopper H100. The Blackwell architecture is designed for AI companies seeking to train larger and more complex models, with a particular focus on transformer-based AI technologies like ChatGPT. Nvidia's shift toward becoming a platform provider signals a strategic move to solidify its market position by pairing powerful chips with revenue-generating software that streamlines AI deployment.

The release of the Blackwell chip also marks Nvidia’s collaboration with tech giants like Amazon, Google, Microsoft, and Oracle, who will offer access to the GB200 through cloud services. This partnership with cloud service providers underscores the scalability and widespread adoption of Nvidia’s new chips, with Amazon Web Services planning to build a server cluster equipped with 20,000 GB200 chips. The deployment of such advanced hardware is expected to pave the way for AI breakthroughs, with the GB200 capable of running a 27-trillion-parameter model, significantly surpassing current AI models in complexity and capabilities.
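To put the 27-trillion-parameter figure in perspective, a quick back-of-envelope calculation shows why a model of that size demands a large multi-chip cluster. The sketch below is illustrative only: the byte-per-parameter values are standard assumptions for FP16 and FP8 weight storage, not figures from Nvidia, and it counts weights alone (ignoring activations, optimizer state, and KV caches).

```python
# Rough estimate of the memory needed just to store the weights of a
# 27-trillion-parameter model, under assumed numeric precisions.

def weight_memory_tb(num_params: float, bytes_per_param: int) -> float:
    """Return terabytes required to hold `num_params` weights."""
    return num_params * bytes_per_param / 1e12

params = 27e12  # 27 trillion parameters, per the article

# FP16/BF16 weights take 2 bytes per parameter; FP8 takes 1 byte.
print(f"FP16 weights: {weight_memory_tb(params, 2):.0f} TB")  # 54 TB
print(f"FP8 weights:  {weight_memory_tb(params, 1):.0f} TB")  # 27 TB
```

Even at 1 byte per parameter, the weights alone are far beyond any single accelerator's memory, which is why deployments like the 20,000-chip cluster mentioned above spread a model across many GPUs.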

Moreover, Nvidia introduced NIM (Nvidia Inference Microservice) as part of its enterprise software subscription, aiming to make it easier to use older Nvidia GPUs for inference tasks. By streamlining the process of running AI software, NIM lets companies put their existing Nvidia GPUs to efficient use. The software is part of Nvidia's strategy to expand its customer base by appealing to businesses that want to run their AI models on Nvidia-based servers or cloud services. Integration of NIM with AI models from partners like Microsoft and Hugging Face will let developers deploy and run their AI projects on compatible Nvidia chips, improving the accessibility of AI technologies across platforms ranging from GPU-equipped laptops to cloud servers.


