September 7, 2024
Nvidia's latest AI offering could spark a custom model gold rush

Nvidia quietly unveiled its new AI Foundry service on Tuesday, aiming to help businesses create and deploy custom large language models tailored to their specific needs. The move signals Nvidia’s push to capture a larger share of the booming enterprise AI market.

The AI Foundry combines Nvidia’s hardware, software tools, and expertise to enable companies to develop customized versions of popular open-source models like Meta’s recently released Llama 3.1. This service arrives as businesses increasingly seek to harness the power of generative AI while maintaining control over their data and applications.

“This is really the moment we’ve been waiting for,” said Kari Briski, Nvidia’s VP of AI Software, in a call with VentureBeat. “Enterprises scrambled to learn about generative AI. But something else happened that was probably equally important: the availability of open models.”

Customization drives accuracy: How Nvidia’s AI Foundry boosts model performance

Nvidia’s new offering aims to simplify the complex process of adapting these open models for specific business use cases. The company claims significant improvements in model performance through customization. “We’ve seen almost a ten-point increase in accuracy by simply customizing models,” Briski explained.

The AI Foundry service provides access to a broad catalog of pre-trained models, high-performance computing resources through Nvidia’s DGX Cloud, and the NeMo toolkit for model customization and evaluation. Expert guidance from Nvidia’s AI specialists is also part of the package.

“We provide the infrastructure and the tools for other companies to develop and customize AI models,” Briski said. “Enterprises bring their data, we have DGX Cloud that has capacity across many of our cloud partners.”
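
To give a sense of what this kind of customization involves, the sketch below fine-tunes an open model with lightweight LoRA adapters using the Hugging Face transformers, peft, and datasets libraries. This is an illustrative stand-in for the workflow described above, not Nvidia’s NeMo API; the checkpoint name, dataset file, and hyperparameters are placeholder assumptions.

```python
# Illustrative LoRA fine-tuning sketch (not Nvidia's NeMo workflow).
# Assumes a causal LM checkpoint and a small JSONL corpus with a "text" column.
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer, Trainer,
                          TrainingArguments, DataCollatorForLanguageModeling)

base_model = "meta-llama/Meta-Llama-3.1-8B"   # placeholder checkpoint name

tokenizer = AutoTokenizer.from_pretrained(base_model)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(base_model)

# Attach small trainable LoRA adapters; the base weights stay frozen.
lora = LoraConfig(r=16, lora_alpha=32, lora_dropout=0.05,
                  target_modules=["q_proj", "v_proj"], task_type="CAUSAL_LM")
model = get_peft_model(model, lora)

# Hypothetical enterprise dataset supplied by the customer.
dataset = load_dataset("json", data_files="enterprise_corpus.jsonl")["train"]
dataset = dataset.map(lambda ex: tokenizer(ex["text"], truncation=True,
                                           max_length=1024), batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="llama31-custom",
                           per_device_train_batch_size=1,
                           num_train_epochs=1, learning_rate=2e-4),
    train_dataset=dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
model.save_pretrained("llama31-custom-adapter")   # saves adapter weights only
```

Because only the adapter weights are trained, the customized model can be stored and shipped separately from the base checkpoint, which is one reason this style of customization has become popular for enterprise use.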

NIM: Nvidia’s unique approach to AI model deployment

Alongside the AI Foundry, Nvidia introduced NIM (Nvidia Inference Microservices), which packages customized models into containerized, API-accessible formats for easy deployment. This development represents a significant milestone for the company. “NIM is a model, a customized model and a container accessed by standard API,” Briski said. “This is the culmination of years of work and research that we’ve done.”
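
In practice, deploying a model behind a standard API means client applications can call it like any other web service. The snippet below is a minimal sketch of querying a locally hosted, containerized model over an OpenAI-compatible chat endpoint; the host, port, and model name are placeholder assumptions rather than confirmed NIM defaults.

```python
# Minimal client sketch for a containerized model served behind an
# OpenAI-compatible REST API. Endpoint URL and model name are hypothetical.
import requests

response = requests.post(
    "http://localhost:8000/v1/chat/completions",   # hypothetical local endpoint
    json={
        "model": "llama31-custom",                 # hypothetical deployed model name
        "messages": [
            {"role": "user", "content": "Summarize our Q2 support tickets."}
        ],
        "max_tokens": 256,
    },
    timeout=60,
)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```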

Industry analysts view this move as a strategic expansion of Nvidia’s AI offerings, potentially opening up new revenue streams beyond its core GPU business. The company is positioning itself as a full-stack AI solutions provider, not just a hardware manufacturer.

Enterprise AI adoption: Nvidia’s strategic bet on custom models

The timing of Nvidia’s announcement is particularly significant: it comes on the same day as Meta’s Llama 3.1 release and amid growing concerns about AI safety and governance. By offering a service that allows companies to create and control their own AI models, Nvidia may be tapping into a market of enterprises that want the benefits of advanced AI without the risks associated with using public, general-purpose models.

However, the long-term implications of widespread custom AI model deployment remain unclear. Potential challenges include fragmentation of AI capabilities across industries and the difficulty of maintaining consistent standards for AI safety and ethics.

As competition in the AI sector intensifies, Nvidia’s AI Foundry represents a significant bet on the future of enterprise AI adoption. The success of this gamble will largely depend on how effectively businesses can leverage these custom models to drive real-world value and innovation in their respective industries.


