
Meta unveils biggest Llama 3 AI model to rival OpenAI, Google

CGTN

Facebook employees take a photo with the company's name and logo outside the headquarters in Menlo Park, California, U.S. /CFP

Meta Platforms released the biggest version of its mostly free Llama 3 artificial intelligence (AI) models on Tuesday, boasting multilingual skills and general performance metrics that nip at the heels of paid models from rivals like OpenAI.

The new Llama 3 model can converse in eight languages, write higher-quality computer code and solve more complex math problems than previous versions, the Facebook parent company said in blog posts and a research paper announcing the release.

With 405 billion parameters, or variables that the algorithm takes into account to generate responses to user queries, it dwarfs the previous version released last year, though it is still smaller than leading models offered by competitors.

OpenAI's GPT-4 model, by contrast, is reported to have a trillion parameters, and Amazon is preparing a model with 2 trillion parameters.
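
In practice, "parameters" are simply the learned numerical weights stored inside the network, and a model's headline size is the total count of them across every layer. A minimal PyTorch sketch (using a small toy network, not Llama itself) illustrates how such a count is tallied:

```python
import torch.nn as nn

# Toy two-layer network standing in for a much larger language model.
toy_model = nn.Sequential(
    nn.Linear(4096, 11008),  # each weight and bias value is one "parameter"
    nn.ReLU(),
    nn.Linear(11008, 4096),
)

# The model's size is the total number of learned values across all layers.
n_params = sum(p.numel() for p in toy_model.parameters())
print(f"{n_params:,} parameters")  # roughly 90 million for this toy network
```

Scaling that tally up by a factor of about 4,500 gives a sense of how much larger the 405-billion-parameter Llama model is than this toy example.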

Meta CEO Mark Zuckerberg announced the development of Llama 4, the successor to the company's current AI model, which powers a chatbot used by "hundreds of millions." Like Llama 3.1 before it, the new model will be made freely available under an "acceptable use policy," potentially allowing other companies to leverage it for their own AI development.

Promoting Llama 3 across multiple channels, the CEO said he expected future Llama models would overtake proprietary competitors by next year. The Meta AI chatbot powered by those models was on track to become the most popular AI assistant by the end of 2024, with hundreds of millions of people using it already, he said.

CEO comments on U.S.-China AI development

In an interview with Bloomberg, Zuckerberg expressed concern that closing off the tech from other parts of the world would ultimately be a detriment. "There's one string of thought which is like, 'Ok, we need to lock it all down,'" he said. 

"I just happen to think that that's really wrong because the U.S. thrives on open and decentralized innovation. I mean that's the way our economy works, that's how we build awesome stuff. So, I think that locking everything down would hamstring us and make us more likely to not be the leaders," Zuckerberg said. 

It's also unrealistic to think that the U.S. will ever be years ahead of China when it comes to AI advancements, he added, though he pointed out that even a small, multi-month lead can "compound" over time to give the U.S. a clear advantage, Bloomberg reported.

"I think there is the question of what you can hope to achieve in the AI wars. If you're trying to say, 'Okay, should the U.S. try to be 5 or 10 years ahead of China?' I just don't know if that's a reasonable goal. So, I'm not sure if you can maintain that," said the CEO. 

"But what I do think is a reasonable goal is maintaining a perpetual, six-month to eight-month lead by making sure that the American companies and the American folks working on this continue producing the best AI system. And I think if the U.S. can maintain that advantage over time, that's just a very big advantage," he added.

Meta CEO Mark Zuckerberg looks on during the U.S. Senate Judiciary Committee hearing "Big Tech and the Online Child Sexual Exploitation Crisis" in Washington, D.C., U.S., January 31, 2024. /CFP

Biggest Llama 3 AI model

Meta isn't just focusing on its powerhouse 405-billion-parameter Llama model. The company is also releasing updated versions of its more lightweight 8-billion- and 70-billion-parameter Llama 3 models, initially introduced earlier this year.

All three new models boast multilingual capabilities and can handle more complex user requests thanks to an expanded "context window." According to Ahmad Al-Dahle, Meta's head of generative AI, this extended memory allows the models to process multi-step requests more effectively. User feedback, particularly concerning code generation, heavily influenced this improvement.
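
As a rough illustration of what a larger context window means in practice, the sketch below counts how many tokens a long, multi-step prompt consumes before it is sent to a model. The Hugging Face repository name, the 128,000-token limit, and the input file are assumptions for the example, not details drawn from this article:

```python
from transformers import AutoTokenizer

MODEL_ID = "meta-llama/Meta-Llama-3.1-8B-Instruct"  # assumed Hugging Face repo name
CONTEXT_WINDOW = 128_000  # assumed token limit for the new models

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)

# A multi-step request: the reference material plus the instructions must all
# fit inside the context window for the model to reason over them in one pass.
reference_text = open("long_document.txt").read()  # hypothetical input file
prompt = reference_text + "\n\nSummarize the document, then list its action items."

n_tokens = len(tokenizer(prompt)["input_ids"])
print(f"Prompt uses {n_tokens:,} of {CONTEXT_WINDOW:,} available tokens")
```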

Al-Dahle also revealed that his team incorporated AI-generated data into the training process. This approach specifically improved the Llama 3 model's performance on tasks like solving math problems.
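
The sketch below is an illustrative take on that idea, not Meta's actual pipeline: a model drafts solutions to seed problems, and only drafts whose final answer can be verified automatically are kept for the training mix. The helper names are hypothetical.

```python
# Illustrative sketch of using AI-generated data: draft, verify, then keep.

def draft_solution(problem: str) -> str:
    # Stand-in for a call to a text-generation model (hypothetical).
    return "12 * 7 = 84\n84"

def final_answer(solution: str) -> str:
    # Convention assumed here: the last line of the draft is the final answer.
    return solution.strip().splitlines()[-1].strip()

seed_set = [{"problem": "What is 12 * 7?", "answer": "84"}]

training_mix = []
for item in seed_set:
    draft = draft_solution(item["problem"])
    if final_answer(draft) == item["answer"]:  # verification filter
        training_mix.append({"prompt": item["problem"], "completion": draft})

print(len(training_mix), "verified synthetic examples")
```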

While measuring AI progress remains a challenge, test results provided by Meta suggest its flagship Llama 3 model performs competitively, even surpassing Anthropic's Claude 3.5 Sonnet and OpenAI's GPT-4o in some cases. These two models are widely regarded as among the most powerful large language models currently available.

For instance, on the MATH benchmark, which focuses on competition-level math word problems, Meta's model achieved a score of 73.8, compared to 76.6 for GPT-4o and 71.1 for Claude 3.5 Sonnet. Similarly, the Llama model scored 88.6 on the MMLU benchmark, which spans subjects across math, science, and the humanities. Here, GPT-4o and Claude 3.5 Sonnet scored slightly higher, with 88.7 and 88.3, respectively.
