Computex 2024: NVIDIA hints at next-gen AI platform Rubin for 2026, RTX GPU-focussed Windows Copilot+ AI PCs

Mehul Reuben Das June 3, 2024, 05:16:32 IST

In his keynote address ahead of Computex 2024, NVIDIA CEO Jensen Huang laid out NVIDIA’s roadmap. After dominating the data centre space with its AI GPUs, NVIDIA is now looking to bring AI to personal computing with its RTX AI PCs.

NVIDIA CEO Jensen Huang, at NVIDIA's Computex 2024 Keynote. Image Credit: NVIDIA

NVIDIA, under the leadership of CEO Jensen Huang, has unveiled its latest strategic move. The tech giant has committed to upgrading its AI accelerators annually in order to maintain its leading edge in an ever-evolving AI landscape.

Huang, at a keynote event ahead of Computex 2024, gave the tech community a brief preview of the company’s roadmap, which sees the imminent release of the Blackwell Ultra chip in 2025. He also dropped hints about the Rubin platform, slated for 2026.

The excitement surrounding NVIDIA’s announcements coincides with its strong position in AI-powered data centre solutions. Huang also emphasized the transformative potential of generative AI, drawing parallels to historical industrial revolutions and highlighting NVIDIA’s pivotal role in spearheading this new wave, particularly as AI becomes increasingly pervasive in personal computing environments.

Beyond data centres
NVIDIA is rolling out a fresh design for server computers powered by its chips, with the MGX program serving as a conduit for giants like Hewlett Packard Enterprise Co. and Dell Technologies Inc. to swiftly bring products to market. These products cater to the needs of corporations and government agencies, with even rivals like Advanced Micro Devices Inc. and Intel Corp. leveraging the design to integrate their processors alongside Nvidia chips.

In addition to the MGX program, earlier-revealed offerings such as Spectrum X for networking and NVIDIA Inference Microservices (NIM), touted by Huang as “AI in a box,” are now readily available and gaining traction across the board. Notably, NVIDIA is making NIM products accessible for free, providing companies with a set of intermediary software and models to expedite the deployment of AI services without grappling with the underlying technology. However, companies deploying these services will incur usage fees payable to Nvidia.
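For readers curious what consuming a NIM actually looks like in practice, the sketch below is a minimal, hypothetical example: it assumes a NIM-style container exposing an OpenAI-compatible chat endpoint on a local machine, and the URL and model name are placeholders rather than anything NVIDIA has documented here.

```python
# Minimal sketch: querying a locally hosted, OpenAI-compatible inference
# microservice. The endpoint URL and model name below are assumptions
# for illustration, not documented NIM values.
import requests

NIM_URL = "http://localhost:8000/v1/chat/completions"  # hypothetical local endpoint

payload = {
    "model": "example-llm",  # placeholder model identifier
    "messages": [{"role": "user", "content": "Summarise this support ticket."}],
    "max_tokens": 128,
}

response = requests.post(NIM_URL, json=payload, timeout=60)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```

In this model, the microservice hides the GPU, runtime, and model-serving details behind a familiar HTTP API, which is essentially what Huang’s “AI in a box” framing refers to.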

Huang also championed the use of digital twins within NVIDIA’s virtual realm, the Omniverse. Demonstrating the immense scale achievable, he showcased a digital twin of Earth, dubbed Earth-2, and elucidated its utility in sophisticated weather pattern modelling and other intricate tasks. Taiwanese contract manufacturers like Hon Hai Precision Industry Co., commonly known as Foxconn, are leveraging these tools to enhance planning and operational efficiency in their factories.

Going into robotics, healthcare and other avenues
NVIDIA’s meteoric rise has been fueled by a surge in AI-related investments, catapulting the company to the top as the world’s most valuable chipmaker. However, NVIDIA built itself on the PC industry and personal computing, particularly gamers, and Huang recognises the need to expand beyond its newer data centre customer base.

NVIDIA is now aiming for broader horizons. Huang envisions a future where AI technologies permeate diverse sectors, from manufacturing to healthcare, warning that those slow to adopt risk falling behind in an increasingly competitive landscape.

At the heart of NVIDIA’s vision lies the concept of “computation inflation,” as articulated by Huang: with data volumes skyrocketing at an exponential rate, traditional computing methods are struggling to keep pace.

Enter NVIDIA’s accelerated computing approach, promising not only superior performance but also significant cost and energy savings compared to conventional methods.

And then there’s Rubin, the eagerly anticipated AI platform poised to push technological boundaries. Powered by HBM4, the next evolution of high-bandwidth memory, Rubin holds the promise of overcoming existing bottlenecks in AI accelerator production.

NVIDIA’s journey from gaming hardware to AI powerhouse has been nothing short of remarkable, and with gaming and AI now converging on the same silicon, the company is perfectly positioned to capitalise on the overlap. At Computex, strategic partnerships with industry giants like Microsoft have resulted in Copilot+ branded laptops boasting AI-enhanced performance. While these devices rely on Qualcomm processors, NVIDIA’s graphics cards provide a significant performance boost, particularly for gaming enthusiasts.

Yet NVIDIA’s ambitions extend beyond hardware: the company is also committed to empowering software developers. By providing a suite of tools and pre-trained AI models, NVIDIA plans to democratise AI innovation.

Whether it’s optimizing battery life or unlocking new dimensions in gaming, NVIDIA’s resources should enable developers to unleash the full potential of AI across a myriad of applications.

Focusing again on PCs
NVIDIA teased the imminent arrival of “RTX AI PC” laptops from Asus and MSI, slated to incorporate Copilot+ PC features. These laptops will feature up to GeForce RTX 4070 GPUs alongside power-efficient systems-on-a-chip with Windows 11 AI PC capabilities. NVIDIA revealed that the machines will come equipped with AMD’s latest Strix CPUs, although AMD has yet to officially unveil those chips.

NVIDIA is now working on bringing its AI computing powers to personal computing, namely laptops and desktops. Image Credit: NVIDIA

NVIDIA’s push for relevance in AI-powered tasks on laptops comes as Microsoft advances its own effort to offload AI models to NPUs. NVIDIA is doubling down on its “RTX AI laptops” branding, asserting that its GPUs handle heavier AI workloads better than NPUs can. The company is set to launch an RTX AI Toolkit in June, offering tools and SDKs for model customization, optimization, and deployment, aiming to enhance performance while reducing VRAM requirements.
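NVIDIA has not detailed the toolkit’s APIs here, but the main lever for cutting VRAM use is weight quantization. The back-of-the-envelope sketch below, which assumes a hypothetical 7-billion-parameter model, illustrates why dropping from 16-bit to 4-bit weights shrinks the memory footprint roughly fourfold; it is an illustration of the principle, not the RTX AI Toolkit itself.

```python
# Rough VRAM estimate for model weights at different precisions.
# The 7B parameter count is an assumed figure for illustration only.
def weight_footprint_gb(num_params: float, bits_per_weight: int) -> float:
    """Approximate memory for weights alone (ignores KV cache and activations)."""
    return num_params * bits_per_weight / 8 / 1e9

params = 7e9  # hypothetical 7-billion-parameter language model
for bits in (16, 8, 4):
    print(f"{bits:>2}-bit weights: ~{weight_footprint_gb(params, bits):.1f} GB")
# Prints roughly: 16-bit ~14.0 GB, 8-bit ~7.0 GB, 4-bit ~3.5 GB
```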

Collaborating with Microsoft, NVIDIA is contributing to the development of AI models integrated into Windows 11. This collaboration aims to provide application developers with easy API access to GPU-accelerated small language models (SLMs), enabling retrieval-augmented generation (RAG) capabilities powered by Windows Copilot Runtime.
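The Windows Copilot Runtime APIs were not public at the time of the keynote, so the sketch below only illustrates the general RAG pattern the collaboration targets: retrieve the most relevant local documents by embedding similarity, then hand them to a small language model as context. The embed() function here is a deterministic placeholder, not a real embedding model or a Copilot Runtime call.

```python
# Generic retrieval-augmented generation (RAG) flow. embed() is a stand-in
# placeholder; a real system would call an embedding model, and the final
# prompt would go to a GPU-accelerated small language model (SLM).
import hashlib
import numpy as np

def embed(text: str) -> np.ndarray:
    """Placeholder embedding: deterministic pseudo-random vector per text."""
    seed = int(hashlib.sha256(text.encode()).hexdigest()[:8], 16)
    vec = np.random.default_rng(seed).standard_normal(384)
    return vec / np.linalg.norm(vec)

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Return the k documents whose embeddings are most similar to the query."""
    q = embed(query)
    scores = [float(q @ embed(d)) for d in docs]
    top = np.argsort(scores)[::-1][:k]
    return [docs[i] for i in top]

docs = ["Meeting notes from Monday...", "Q2 sales figures...", "Travel policy..."]
question = "What were the Q2 numbers?"
context = "\n".join(retrieve(question, docs))
prompt = f"Answer using this context:\n{context}\n\nQuestion: {question}"
print(prompt)  # this prompt would then be passed to the SLM for generation
```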

NPUs currently offer around 40 TOPS of performance, while NVIDIA’s PC GPUs can handle over 1,000 TOPS for AI acceleration, leaving developers with a critical decision around performance versus power efficiency. NPUs excel at smaller models and power efficiency, whereas GPUs offer robust performance for larger models on desktops, where battery life isn’t a concern.
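To put those raw numbers in perspective, the quick sketch below compares the theoretical minimum time for a fixed AI workload at 40 TOPS versus 1,000 TOPS; the workload size is an arbitrary assumption, and real-world throughput also depends on memory bandwidth and utilisation rather than peak TOPS alone.

```python
# Back-of-the-envelope throughput comparison based on the TOPS figures cited
# above; the workload size is an arbitrary assumption for illustration.
WORKLOAD_TERAOPS = 5_000   # hypothetical workload: 5,000 tera-operations
NPU_TOPS = 40              # NPU figure cited in the article
GPU_TOPS = 1_000           # conservative end of "over 1,000 TOPS"

npu_seconds = WORKLOAD_TERAOPS / NPU_TOPS
gpu_seconds = WORKLOAD_TERAOPS / GPU_TOPS
print(f"NPU: ~{npu_seconds:.0f} s, GPU: ~{gpu_seconds:.0f} s, "
      f"a ~{npu_seconds / gpu_seconds:.0f}x raw-throughput gap")
```

Raw TOPS, of course, say nothing about power draw, which is precisely the efficiency trade-off that keeps NPUs attractive for thin-and-light laptops.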
