Google Claims Its Supercomputer Is Faster, More Power-Efficient Than Nvidia Systems

Google's supercomputers make it easy to reconfigure connections between chips on the fly, helping to avoid problems and tweak for performance gains.

On Tuesday, Alphabet Inc's Google released new details about the supercomputers it uses to train its artificial intelligence models, claiming that its systems are both faster and more power-efficient than comparable systems from Nvidia Corp.

Google uses its custom-designed Tensor Processing Unit (TPU) chips for over 90 per cent of its artificial intelligence training. The Google TPU is now in its fourth generation, and the company has published a scientific paper detailing how it has strung more than 4,000 of the chips together into a supercomputer using its custom-developed optical switches to connect individual machines.

Improving these connections has become a key point of competition among companies that build AI supercomputers, because the large language models that power technologies like Google's Bard or OpenAI's ChatGPT have exploded in size. The models are too large to store on a single chip, so they must be split across thousands of chips that work together for weeks or more to train the model.
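To see why the chips must cooperate, consider one common way a model is split up (this is an illustrative sketch in plain NumPy, not Google's actual training stack): a layer's weight matrix is sliced column-wise across several "chips", each chip computes a partial result against its own slice, and the pieces are stitched back together.

```python
import numpy as np

# Illustrative only: tensor-parallel sharding of one layer's weights.
rng = np.random.default_rng(0)
x = rng.standard_normal((4, 8))    # a batch of input activations
W = rng.standard_normal((8, 16))   # a weight matrix too big for "one chip"

num_chips = 4
shards = np.split(W, num_chips, axis=1)  # each chip holds 1/4 of the columns

# Each chip multiplies the same input by its own shard of the weights...
partial_outputs = [x @ shard for shard in shards]

# ...and the partial results are concatenated, reproducing the
# un-sharded computation exactly.
y_sharded = np.concatenate(partial_outputs, axis=1)
print(np.allclose(y_sharded, x @ W))  # True
```

In a real system each shard lives in a different chip's memory, so every forward and backward pass requires communication between chips, which is why the interconnect matters so much.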

Google's supercomputers allow the connections between chips to be reconfigured on the fly, helping the company work around failures and tune for performance gains. In a blog post about the system, Google Fellow Norm Jouppi and Google Distinguished Engineer David Patterson wrote that "circuit switching makes it easy to route around failed components," and "this flexibility even allows us to change the topology of the supercomputer interconnect to accelerate the performance of an ML (machine learning) model."
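The idea of routing around a failed component can be sketched with a toy graph search (the topology and node names here are made up for illustration; Google's optical circuit switches operate on real hardware links, not Python dictionaries):

```python
from collections import deque

# A made-up 4-node interconnect: each node lists its directly linked peers.
links = {
    "A": ["B", "C"],
    "B": ["A", "D"],
    "C": ["A", "D"],
    "D": ["B", "C"],
}

def find_route(src, dst, failed):
    """Breadth-first search for a path that avoids failed nodes."""
    queue = deque([[src]])
    seen = {src}
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node == dst:
            return path
        for nbr in links[node]:
            if nbr not in seen and nbr not in failed:
                seen.add(nbr)
                queue.append(path + [nbr])
    return None  # destination unreachable

print(find_route("A", "D", failed=set()))   # ['A', 'B', 'D']
print(find_route("A", "D", failed={"B"}))   # ['A', 'C', 'D']
```

When node "B" fails, traffic from "A" to "D" is simply rerouted through "C"; a reconfigurable interconnect lets the system apply this kind of rewiring without physically recabling machines.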

The supercomputer has been in use inside Google since 2020, at a data centre in Mayes County, Oklahoma. According to Google, the startup Midjourney used the system to train its model, which generates fresh images after being fed a few words of text.

Google said that its chips are up to 1.7 times faster and 1.9 times more power-efficient than a system based on Nvidia's A100 chip that was on the market at the same time as the fourth-generation TPU. However, Google did not compare its fourth-generation chip to Nvidia's current flagship H100 chip because the H100 came to the market after Google's chip and is made with newer technology.

Google hinted that it might be working on a new TPU that would compete with the Nvidia H100, but provided no details. Jouppi told Reuters that Google has "a healthy pipeline of future chips."
