Modern supercomputers are among the most powerful machines in the world, capable of processing vast amounts of data and performing complex calculations at remarkable speed. They are typically used for scientific research, data analysis, and other computationally intensive tasks. Many are built as clusters of hundreds or even thousands of individual processors connected into a single, unified system; the largest can perform quadrillions of calculations per second and are used to tackle some of the hardest problems in physics, astronomy, climate modeling, and more. With technology continuing to advance, the age of supercomputing is only just beginning.

Tech leaders, especially Google, have been at the forefront of AI for a long time now. Google's custom TPU-based supercomputers are cutting edge, and according to a research paper the company has just published, its latest system is faster and more power-efficient than a comparable machine built on Nvidia's chips.

Google's TPU supercomputer

Google has designed a custom chip, the Tensor Processing Unit (TPU), specifically for training AI models. With more than 90% of the company's AI training work running on these chips, they are clearly central to Google's success in the field. Google has just released details of its fourth-generation TPU, showing how it used custom-developed optical circuit switches to connect more than 4,000 of the chips into a single supercomputer. These switches let the supercomputer reconfigure the connections between chips on the fly, which boosts performance and allows the system to route around failed or underperforming components.
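For a sense of what "one program driving thousands of chips" looks like at the software level, here is a minimal JAX sketch of data-parallel work replicated across whatever TPU (or other) devices are attached. It is a hypothetical illustration, not Google's internal training stack; the `step` function and array shapes are invented for the example.

```python
# Minimal JAX sketch of data-parallel work spread across accelerator chips.
# Hypothetical example: it illustrates one program driving many devices,
# not Google's internal training stack.
import jax
import jax.numpy as jnp

# Each TPU chip appears as a separate device; on a TPU v4 pod slice this
# can be hundreds or thousands of chips.
print(f"devices visible: {jax.device_count()}")

@jax.pmap  # replicate the computation across every attached device
def step(x):
    # A stand-in for one training step: a dense, matmul-heavy operation.
    return jnp.tanh(x @ x.T)

# One leading-axis entry per device; pmap shards it automatically.
n = jax.local_device_count()
batch = jnp.ones((n, 128, 128))
out = step(batch)
print(out.shape)  # (n, 128, 128): one result per chip
```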

The rise of large language models has sparked intense competition among the companies building AI supercomputers. Google's PaLM model, its largest publicly disclosed language model to date, was trained over more than 50 days by splitting the work across two of these 4,000-chip supercomputers. Keeping a run of that length and scale healthy is a real challenge: individual chips can fail mid-training, and different models favor different communication patterns. Google's flexible interconnects address this by letting the company change the topology of the supercomputer's interconnects on the fly.
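As a loose software-level analogy to that reconfigurability, a framework like JAX lets the same physical devices be arranged into different logical meshes per workload. The mesh shapes and axis names below are hypothetical choices for illustration; the reconfiguration Google describes happens in the optical switches themselves.

```python
# Sketch: reshaping a logical device mesh per workload, a software-level
# analogy to rewiring chip topology. Mesh shapes and axis names here are
# hypothetical choices for illustration.
import numpy as np
import jax
import jax.numpy as jnp
from jax.sharding import Mesh, NamedSharding, PartitionSpec

devices = np.array(jax.devices())

# Layout 1: every chip on a single "data" axis (pure data parallelism).
mesh_data = Mesh(devices.reshape(-1), axis_names=("data",))

# Layout 2: the same chips folded into a 2D "model" x "data" grid, which
# suits larger models that must be split across chips.
if devices.size % 2 == 0:
    mesh_2d = Mesh(devices.reshape(2, -1), axis_names=("model", "data"))

# Shard an array's leading axis across the "data" axis of layout 1.
x = jnp.ones((devices.size * 4, 8))
sharded = jax.device_put(x, NamedSharding(mesh_data, PartitionSpec("data", None)))
print(sharded.sharding)
```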

Google's supercomputer has already been put to use: the startup Midjourney used the system to train its model, which generates images from text prompts. In a recent paper, Google reported that, for a system of the same size, its supercomputer is 1.7 times faster and 1.9 times more power-efficient than one built on Nvidia's A100 chip. Google did not compare its fourth-generation TPU against Nvidia's current flagship H100 chip, but there are hints that it is working on a new TPU to compete with Nvidia's latest offering.
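To see what those multipliers mean in practice, here is a quick back-of-the-envelope calculation. The baseline figures are invented for illustration; only the 1.7x and 1.9x ratios come from Google's comparison.

```python
# Back-of-the-envelope: what 1.7x speed and 1.9x power efficiency imply.
# The A100 baseline numbers below are invented for illustration; only the
# ratios come from Google's published comparison.
SPEEDUP = 1.7            # TPU v4 vs. A100, per Google's paper
EFFICIENCY_GAIN = 1.9    # performance per watt, per Google's paper

a100_days = 17.0         # hypothetical A100 training time
a100_energy_mwh = 100.0  # hypothetical A100 energy for the same job

tpu_days = a100_days / SPEEDUP
tpu_energy_mwh = a100_energy_mwh / EFFICIENCY_GAIN

print(f"TPU v4: ~{tpu_days:.0f} days, ~{tpu_energy_mwh:.0f} MWh "
      f"(vs. {a100_days:.0f} days, {a100_energy_mwh:.0f} MWh on A100)")
```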
