Amazon has raised the stakes in the competitive field of artificial intelligence (AI) by introducing Trainium2, a new AI chip for its cloud computing service. The unveiling took place at the company's re:Invent conference in Las Vegas, where Amazon Web Services (AWS) CEO Adam Selipsky said that Trainium2, designed for training AI systems, is four times faster than its predecessor and twice as energy-efficient. The move comes amid a growing rivalry with Microsoft, which recently announced an AI chip of its own, Maia. Trainium2 will vie for market share not only against Microsoft but also against Alphabet's Google, a key player that has offered its Tensor Processing Unit (TPU) to cloud computing customers since 2018.

The surge in custom chip development is driven by the growing demand for computing power, particularly for building large language models that underpin services such as ChatGPT. Both AWS and Microsoft are positioning their custom chips as alternatives to those from Nvidia, the leading AI chip provider, whose processors remain in short supply. Alongside Trainium2, AWS announced Graviton4, its fourth-generation custom central processor chip, claiming a 30% speed improvement over its predecessor, Graviton3. Notably, AWS and Microsoft are bypassing traditional chip providers such as Intel and Advanced Micro Devices (AMD) for these processors, instead licensing technology from Arm Ltd. Another player in the cloud service arena, Oracle, has chosen chips from startup Ampere Computing. The unfolding AI chip race underscores the industry's relentless pursuit of faster, more energy-efficient computing and marks a critical juncture in the technology's advancement.