
Meta Shares Details On Its AI Chips For The First Time, Plans To Make Its Tools Open For All

Meta remains committed to contributing to open-source technologies and AI research to advance the field.

Meta, the social networking giant formerly known as Facebook, has shared details on its internal silicon chip projects for the first time. The company showcased its custom computer chips designed to enhance artificial intelligence (AI) and video-processing capabilities during a recent virtual event discussing its AI technical infrastructure investments. The disclosure comes as Meta aims to improve efficiency through cost-cutting measures and layoffs. 

CEO Mark Zuckerberg on Thursday shared details of Meta's AI research labs, data centres, and training accelerators in a post on his Facebook feed.

Meta's vice president of infrastructure, Alexis Bjorlin, stated that although developing custom chips is costly, the company believes the improved performance justifies the investment, as reported by CNBC. Meta has also been revamping its data centre designs to prioritise energy-efficient techniques like liquid cooling to reduce excess heat.

Among the new chips is the Meta Scalable Video Processor (MSVP), which processes and transmits videos to users while minimising energy consumption. According to Bjorlin, no commercially available option could efficiently handle Meta's target workload of processing and delivering four billion videos per day.
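For a sense of scale, a quick back-of-the-envelope calculation of the sustained throughput that figure implies (assuming, unrealistically, that traffic is spread evenly across the day):

```python
# Rough throughput implied by Meta's stated figure of
# four billion videos processed and delivered per day.
# Assumes uniform traffic across the day, which real load will not follow.
VIDEOS_PER_DAY = 4_000_000_000
SECONDS_PER_DAY = 24 * 60 * 60  # 86,400

videos_per_second = VIDEOS_PER_DAY / SECONDS_PER_DAY
print(f"{videos_per_second:,.0f} videos per second")  # ≈ 46,296
```

Real traffic peaks well above the average, which is part of why Meta argues a purpose-built processor pays off.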


The other processor unveiled is the first in Meta's Meta Training and Inference Accelerator (MTIA) family of chips, designed to assist with various AI-specific tasks. The initial MTIA chip focuses on "inference," which involves predictions or actions made by a trained AI model.
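To illustrate the distinction: at inference time, a model's weights are already fixed by training and are only applied to new inputs. A minimal, framework-free sketch (the two-feature linear scorer and its weights here are entirely hypothetical, not Meta's actual model):

```python
import math

# Hypothetical weights produced by an earlier training phase;
# during inference they are frozen and only applied to new inputs.
TRAINED_WEIGHTS = [0.8, -0.3]
TRAINED_BIAS = 0.1

def predict(features: list[float]) -> float:
    """Inference step: score one input with the frozen model."""
    z = sum(w * x for w, x in zip(TRAINED_WEIGHTS, features)) + TRAINED_BIAS
    return 1 / (1 + math.exp(-z))  # sigmoid -> probability-like score

# e.g. score one candidate item for a user's feed
score = predict([1.0, 2.0])
print(round(score, 3))  # 0.574
```

MTIA is built to accelerate this kind of scoring at data-centre scale, where the models are vastly larger than this toy example.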

The AI inference chip powers some of Meta's recommendation algorithms used to display content and ads in users' news feeds. Although Bjorlin did not disclose the chip's manufacturer, a blog post mentioned that it was fabricated using the TSMC 7nm process, indicating Taiwan Semiconductor Manufacturing as the producer.

Bjorlin mentioned that Meta has a "multi-generational roadmap" for its AI chip family, including processors for training AI models. However, details about these future chips were not provided. A previous report suggested that Meta had cancelled one AI inference chip project and initiated another planned for release around 2025, but Bjorlin declined to comment on the report.

Meta's focus on developing data centre chips is distinct from companies like Google and Microsoft, which offer cloud computing services. Consequently, Meta did not previously feel the need to publicly discuss its internal chip projects. The recent disclosure, however, reflects growing external interest in the infrastructure behind large-scale AI.

Meta's vice president of engineering, Aparna Ramani, emphasised that the new hardware was designed to work seamlessly with Meta's PyTorch software, a popular tool among third-party developers for creating AI applications.

The company's new chips will eventually power metaverse-related tasks, such as virtual and augmented reality, as well as generative AI applications that can produce engaging text, images, and videos.

Additionally, Meta unveiled a generative AI-powered coding assistant for its developers, similar to Microsoft's GitHub Copilot. The company also completed the final buildout of its Research SuperCluster, a supercomputer containing 16,000 Nvidia A100 GPUs, which was utilised to train Meta's LLaMA language model.

Meta says it remains committed to contributing to open-source technologies and AI research. The company has already shared its LLaMA language model with researchers, allowing them to learn from the technology. However, the model was subsequently leaked to the public, leading to the development of numerous apps incorporating the LLaMA technology.

Ramani affirmed Meta's philosophy of open science and cross-collaboration, stating that the company is still considering its open-source collaborations. Meta's largest LLaMA language model, LLaMA 65B, contains 65 billion parameters and was trained on 1.4 trillion tokens, signifying the data used for AI training. While competing companies like OpenAI and Google have not publicly disclosed similar metrics for their large language models, recent reports indicate that Google's PaLM 2 model was trained on 3.6 trillion tokens and consists of 340 billion parameters.
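Using only the figures quoted above, the training-data-to-parameter ratios of the two models can be compared directly (a rough heuristic for how data-heavy the training was, not a measure of model quality):

```python
# Tokens-per-parameter ratios from the publicly reported figures above.
llama_tokens, llama_params = 1.4e12, 65e9    # LLaMA 65B
palm2_tokens, palm2_params = 3.6e12, 340e9   # Google PaLM 2 (reported)

print(f"LLaMA 65B: {llama_tokens / llama_params:.1f} tokens per parameter")  # 21.5
print(f"PaLM 2:    {palm2_tokens / palm2_params:.1f} tokens per parameter")  # 10.6
```

By this crude measure, LLaMA 65B was trained on roughly twice as many tokens per parameter as PaLM 2's reported configuration.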

