Joining AI Race, Mark Zuckerberg Reveals Meta’s New 'Large Language Model Meta AI'
Meta's move comes as multiple big tech companies throw their hats into the AI ring following the release of Microsoft-backed OpenAI's ChatGPT late last year
Meta CEO Mark Zuckerberg on Friday announced the launch of a new state-of-the-art AI large language model called LLaMA, designed to help researchers advance their work. The model, Large Language Model Meta AI (LLaMA), is intended to help scientists and engineers explore applications for AI such as answering questions and summarizing documents.
The launch comes as multiple big tech companies throw their hats into the AI ring. After the release of Microsoft-backed OpenAI's ChatGPT late last year, tech giants such as Alphabet Inc. and China's Baidu Ltd. have been promoting products of their own.
In his post, Mark Zuckerberg said LLaMA technology could eventually solve math problems or conduct scientific research.
Mark wrote, “Today we're releasing a new state-of-the-art AI large language model called LLaMA designed to help researchers advance their work. LLMs have shown a lot of promise in generating text, having conversations, summarizing written material, and more complicated tasks like solving math theorems or predicting protein structures. Meta is committed to this open model of research and we'll make our new model available to the AI research community.”
In a blog post, Meta said LLaMA will be available under a non-commercial license to researchers and entities affiliated with government, civil society, and academia.
Large language models mine vast amounts of text to summarize information and generate content. They can answer questions, for instance, in sentences that read as though written by a human.
Meta says LLaMA differs from competing models in several ways. For instance, it will be available in multiple sizes, ranging from 7 billion to 65 billion parameters. In recent years, larger models have expanded the technology's capabilities, but they are more expensive to run during the stage researchers refer to as "inference." By comparison, OpenAI's GPT-3 has 175 billion parameters.