Nvidia says its new AI tools are like a chip foundry for large language models

Jen-Hsun Huang found NeMo, and he’s swapped his clown suit for a business one.


At Nvidia’s GTC keynote today, CEO Jen-Hsun Huang announced that the company will soon be rolling out a collection of large language model (LLM) frameworks, known as Nvidia AI Foundations.

Jen-Hsun is so confident about the AI Foundations package, he’s calling it a “TSMC for custom, large language models.” Definitely not a comparison I was expecting to hear today, but I guess it fits alongside Huang’s wistful comments about AI having had its “iPhone moment.”

The Foundations package includes the Picasso and BioNeMo services that will serve the media and medical industries respectively, as well as NeMo: a framework aimed at businesses looking to integrate large language models into their workflows.

NeMo is “for building custom language, text-to-text generative models” that can inform what Nvidia calls “intelligent applications”.

With a little something called P-Tuning, companies will be able to train their own custom language models to create more apt branded content, compose emails with personalised writing styles, and summarise financial documents so we humans don’t have to waste away staring at numbers all day, which sounds like a nightmare to me.
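Nvidia didn’t show any code on stage, but to give a flavour of what p-tuning actually does under the hood, here’s a minimal sketch of the general technique (learned “soft prompt” embeddings bolted onto a frozen model) in plain PyTorch. To be clear, this is my own illustration, not Nvidia’s NeMo API; the class name, token count and sizes are all made up:

```python
import torch
import torch.nn as nn

class PTunedPrompt(nn.Module):
    """Learnable 'virtual token' embeddings prepended to the real input.

    The idea behind p-tuning: the big pretrained model stays frozen,
    and only these few embeddings are trained for each custom task.
    """
    def __init__(self, n_virtual_tokens: int, hidden_size: int):
        super().__init__()
        # Small random init, roughly the scale transformers typically use
        self.prompt = nn.Parameter(torch.randn(n_virtual_tokens, hidden_size) * 0.02)

    def forward(self, input_embeds: torch.Tensor) -> torch.Tensor:
        # input_embeds: (batch, seq_len, hidden) from the frozen model's
        # embedding layer; prepend the same learned prompt to every example
        batch = input_embeds.size(0)
        prompt = self.prompt.unsqueeze(0).expand(batch, -1, -1)
        return torch.cat([prompt, input_embeds], dim=1)

# Usage sketch: freeze the base model, train only the prompt parameters.
# `frozen_lm` here is a hypothetical pretrained transformer, not a real NeMo object.
# for p in frozen_lm.parameters():
#     p.requires_grad = False
# soft_prompt = PTunedPrompt(n_virtual_tokens=20, hidden_size=4096)
# optimizer = torch.optim.AdamW(soft_prompt.parameters(), lr=1e-4)
```

The appeal is that a company only has to store and train a few thousand numbers per task instead of retraining billions, which is presumably why Nvidia is pitching it at businesses rather than AI labs.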

Hopefully it’ll take some weight off the everyman, and stop your boss shouting “BUNG IT IN THE CHATBOT THING,” because that’s supposedly faster.


NeMo’s language models come in 8 billion, 43 billion, and 530 billion parameter versions, meaning there will be distinct tiers to pick from with vastly differing power levels.
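For a rough sense of just how different those tiers are, here’s some back-of-the-envelope maths (my own ballpark, not an Nvidia figure): at 16-bit precision every parameter takes two bytes, so simply holding the weights in memory works out to something like this:

```python
# Back-of-the-envelope VRAM needed just to hold the weights at 16-bit
# precision (2 bytes per parameter). Ignores activations and serving
# overhead, so real requirements run higher; ballpark only.
for params in (8e9, 43e9, 530e9):
    gigabytes = params * 2 / 1e9
    print(f"{params / 1e9:.0f}B parameters -> ~{gigabytes:.0f} GB of weights")
# 8B -> ~16 GB, 43B -> ~86 GB, 530B -> ~1060 GB
```

In other words, the smallest tier could plausibly squeeze onto a single high-end GPU, while the 530 billion parameter monster is firmly multi-GPU, datacentre-only territory.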


For context, ChatGPT’s original GPT-3 subsisted on 175 billion parameters, and although OpenAI isn’t telling people how many parameters GPT-4 is working with at the moment, AX Semantics guesses around 1 trillion.

So, no, it’s not quite going to be a direct ChatGPT competitor, and it may not have the same depth of parameters, but as a framework for designing large language models it’s sure to change the face of every industry it touches. That’s for certain.

