Meta Launches Llama 3.1: Open-Source AI Model That Surpasses GPT-4o, Claude 3.5 In Some Benchmarks
Meta claims that Llama 3.1 is the most advanced open-source AI model to date.
Earlier this year, Meta hinted at a groundbreaking development in artificial intelligence (AI): an open-source model that it claimed could rival the top proprietary models from leading AI companies. Now it has fulfilled that promise with the launch of Llama 3.1, which Meta claims is the most advanced open-source AI model to date. According to the company, it surpasses GPT-4o and the new Claude 3.5 Sonnet on several performance metrics.
Meta Expands Availability Of AI Assistant, Adds Image Generation
Alongside this release, Facebook parent Meta is expanding the availability of its Llama-powered AI assistant to more regions and introducing new features, including the ability to generate images based on a user's appearance. Meta CEO Mark Zuckerberg has predicted that the company's AI assistant will become the most widely used by the end of 2024, overtaking popular alternatives such as OpenAI's ChatGPT.
"Today we're taking the next steps towards open source AI becoming the industry standard. We're releasing Llama 3.1 405B, the first frontier-level open source AI model, as well as new and improved Llama 3.1 70B and 8B models. In addition to having significantly better cost/performance relative to closed models, the fact that the 405B model is open will make it the best choice for fine-tuning and distilling smaller models," Zuckerberg wrote on facebook.
Llama 3.1 represents a significant leap forward from its predecessors released earlier this year. The most powerful version boasts 405 billion parameters and required over 16,000 high-end Nvidia H100 GPUs for training. While Meta has not disclosed the exact cost of development, industry experts estimate it to be in the hundreds of millions of dollars, based on hardware costs alone.
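For a rough sense of where such estimates come from, the back-of-envelope calculation below multiplies the reported GPU count by an assumed per-unit price. The roughly $30,000 H100 figure is an outside assumption rather than anything Meta has disclosed, and the result covers hardware alone.

```python
# Back-of-envelope estimate of training hardware cost.
# Assumptions (not Meta-disclosed figures): the per-GPU price is a rough
# market estimate for an Nvidia H100; the GPU count is the one reported
# in coverage of the launch.
gpu_count = 16_000            # H100 GPUs reportedly used for training
price_per_gpu_usd = 30_000    # assumed average unit price, USD

hardware_cost = gpu_count * price_per_gpu_usd
print(f"Estimated hardware cost: ${hardware_cost / 1e6:.0f} million")
# -> Estimated hardware cost: $480 million, before power, networking,
#    storage, and engineering time are counted
```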
In a statement, Zuckerberg explained the company's rationale, drawing parallels to the success of open-source software like Linux. He argued that open-source AI models will eventually outpace proprietary alternatives in development speed and overall performance, much as Linux has become the dominant operating system across various devices and platforms.
Given the substantial investment, some may question Meta's decision to release Llama 3.1 as an open-source model, with licensing restrictions that apply only to very large-scale users.
In developing Llama 3.1 405B, Meta used an extensive dataset of roughly 15 trillion tokens, with information current up to 2024. For context, tokens are smaller units of language, often fragments of words, that AI models can process more efficiently than complete words. The scale of this dataset is immense: at the common estimate of about three-quarters of a word per token, 15 trillion tokens corresponds to roughly 11 trillion words.
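To make the word-to-token relationship concrete, here is a minimal sketch using the Hugging Face transformers library. The choice of tokenizer is an assumption for illustration (Meta's own Llama repositories are gated behind a license agreement), but any modern tokenizer shows the same word-splitting behaviour.

```python
# Illustrative sketch: counting tokens with a Hugging Face tokenizer.
# The "gpt2" tokenizer is used purely because it is freely downloadable;
# it is not the tokenizer Meta used for Llama 3.1.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")

text = "Meta says Llama 3.1 was trained on roughly 15 trillion tokens."
tokens = tokenizer.tokenize(text)

print(len(text.split()), "words ->", len(tokens), "tokens")
# A single word often splits into more than one token, which is why
# token counts for a text run higher than its word count.
```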
While the foundation of this dataset isn't entirely new, as it builds on the data used for previous Llama iterations, Meta reports significant improvements in how that data was managed. The company says it has enhanced its data curation methods and applied more stringent quality control, refinements aimed at improving the overall quality and relevance of the training data behind this latest model.