
Meta debuts newest Llama AI model with help from Nvidia and cloud partners

This photo taken on February 27, 2024 shows Mark Zuckerberg, head of US tech giant Meta, speaking to reporters at the Japanese prime minister’s office during his visit to Tokyo.
STR | JIJI Press | AFP | Getty Images
  • Meta on Tuesday announced the latest version of its Llama AI model, Llama 3.1.
  • The newest Llama technology comes in three different versions, with one variant being the biggest and most capable AI model from Meta to date. Like previous versions of Llama, the newest model continues to be open source, which means it can be accessed for free.
  • The announcement also highlights the growing partnership between Meta and Nvidia.

Meta on Tuesday announced the latest version of its Llama artificial intelligence model, dubbed Llama 3.1. The newest Llama technology comes in three different versions, with one variant being the biggest and most capable AI model from Meta to date. Like previous versions of Llama, the newest model continues to be open source, which means it can be accessed for free.

The new large language model, or LLM, underscores the social network's massive investment in keeping pace on AI spending with highflying startups such as OpenAI and Anthropic, as well as other tech giants like Google and Amazon.

The announcement also highlights the growing partnership between Meta and Nvidia. Nvidia is a key Meta partner, providing the Facebook parent with computing chips called GPUs to help train its AI models, including the latest version of Llama.

While companies like OpenAI aim to make money selling access to their proprietary LLMs or offering services to help clients use the technology, Meta has no plans to debut its own competing enterprise business, a Meta spokesperson said during a media briefing.

Instead, similar to when Meta released Llama 2 last summer, the company is partnering with a handful of tech companies that will offer their customers access to Llama 3.1 via their respective cloud computing platforms, as well as sell security and management tools that work with the new software. Some of Meta's 25 Llama-related corporate partners include Amazon Web Services, Google Cloud, Microsoft Azure, Databricks and Dell.

Although Meta CEO Mark Zuckerberg has told analysts during previous corporate earnings calls that the company generates some revenue from its corporate Llama partnerships, a Meta spokesperson said that any financial benefit is merely incremental. Instead, Meta believes that by investing in Llama and related AI technologies and making them available for free via open source, it can attract high-quality talent in a competitive market and lower its overall computing infrastructure costs, among other benefits.

Meta's launch of Llama 3.1 comes just ahead of a conference on advanced computer graphics at which Zuckerberg and Nvidia CEO Jensen Huang are scheduled to speak together.

The social networking giant is one of Nvidia's biggest end customers that doesn't run its own business-facing cloud, and Meta needs the latest chips to train its AI models, which it uses internally for targeting and other products. For example, Meta said the biggest version of the Llama 3.1 model announced on Tuesday was trained on 16,000 of Nvidia's H100 graphics processors.

But the relationship is also important to both companies for what it represents.

For Nvidia, the fact that Meta is training open source models that other companies can use and adapt for their businesses, without paying a licensing fee or asking for permission, could expand the usage of Nvidia's own chips and keep demand high.

But open source models can cost hundreds of millions or even billions of dollars to create, and few companies can afford to develop and release them. Google and OpenAI, although they are Nvidia customers, keep their most advanced models private.

Meta, on the other hand, needs a reliable supply of the latest GPUs to train increasingly powerful models. Like Nvidia, Meta is trying to foster an ecosystem of developers who are building AI apps with the company's open source software at the center, even if Meta has to essentially give away code and so-called AI weights that are expensive to build.

The open source approach benefits Meta by exposing developers to its internal tools and inviting them to build on top of them, Ash Jhaveri, the company's VP of AI partnerships, told CNBC. It also helps Meta because the company uses its AI models internally, which lets it reap improvements made by the open source community, he said.

Zuckerberg wrote in a blog post on Tuesday that Meta was taking a "different approach" to the Llama release this week, adding, "We're actively building partnerships so that more companies in the ecosystem can offer unique functionality to their customers as well."

Because Meta isn't an enterprise vendor, it can refer companies that inquire about Llama to one of its enterprise partners, like Nvidia, Jhaveri said.

The largest version of the Llama 3.1 family of models is called Llama 3.1 405B. This LLM contains 405 billion parameters, the internal variables a model learns during training; the parameter count is a rough gauge of the model's overall size and how much data it can process.

Generally speaking, a big LLM with a large number of parameters can perform more complicated tasks than smaller LLMs, such as understanding context in long streams of text, solving complex math equations and even generating synthetic data that can presumably be used to improve smaller AI models.
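Because the models are open source, developers can download the weights and run the smaller variants themselves rather than calling a proprietary service. The snippet below is a minimal sketch of what that might look like using the Hugging Face transformers library; the model identifier, the license-acceptance step and the generation settings are assumptions for illustration, not official Meta instructions.

```python
# Minimal sketch (assumptions noted): running a smaller open Llama 3.1 model
# locally with the Hugging Face `transformers` library. The model ID below is
# illustrative; actual access requires accepting Meta's license on the
# Hugging Face hub and enough GPU memory for the chosen model size.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-3.1-8B-Instruct"  # assumed identifier

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "In one sentence, what is a parameter in a large language model?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

# Generate a short completion; the settings here are arbitrary defaults.
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```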

Meta is also releasing smaller versions of Llama 3.1, called the Llama 3.1 8B and Llama 3.1 70B models. They are essentially upgraded versions of their predecessors and can be used to power chatbots and software coding assistants, the company said.

Meta also said the company's U.S.-based WhatsApp users and visitors to its Meta.AI website will be able to experience the capabilities of Llama 3.1 by interacting with the company's digital assistant. The digital assistant, which will run on the latest version of Llama, will be able to answer complicated math problems or solve software coding issues, a Meta spokesperson explained.

WhatsApp and Meta.AI users who are based in the U.S. will be able to toggle between the new, gigantic Llama 3.1 LLM or a less capable but faster and smaller version for answers to their queries, the Meta spokesperson said.

