Amazon reveals more powerful and efficient chips to train AI

Amazon Web Services (AWS) revealed its latest generation of AI chips, built for training models and for running trained ones. Trainium2, which as its name suggests is aimed at model training, is designed to deliver up to 4 times better performance and 2 times greater energy efficiency than its predecessor.

Amazon promises that these chips will let developers train models faster and at lower cost, thanks to reduced energy consumption. Anthropic, an Amazon-backed OpenAI competitor, has already announced plans to build models with Trainium2 chips.

Graviton4, on the other hand, is a more general-purpose processor. It is based on the Arm architecture and consumes less power than comparable Intel or AMD chips. Amazon promises a 30% increase in overall performance when running a trained AI model on a Graviton4 processor.

This should reduce cloud computing costs for organizations that regularly run AI models and offer a slight speed boost for regular users just looking to generate a few AI images.

All in all, Graviton4 should allow AWS customers to “process large amounts of data, scale their workloads, improve time to results and reduce their total cost of ownership.”
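
For AWS customers, taking advantage of Graviton4 in practice mostly comes down to choosing a Graviton-backed EC2 instance type when launching workloads. The snippet below is a minimal sketch in Python using boto3; the instance type name and AMI ID are placeholders assumed for illustration, not values from the announcement.

```python
# Minimal sketch: launching an Arm/Graviton-backed EC2 instance with boto3.
# Both the AMI ID and the instance type below are placeholder assumptions;
# replace them with a real Arm64 AMI and whichever Graviton4-based instance
# type AWS offers in your region.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder Arm64 AMI
    InstanceType="r8g.large",          # assumed Graviton4-based instance type
    MinCount=1,
    MaxCount=1,
)

# Print the ID of the newly launched instance.
print(response["Instances"][0]["InstanceId"])
```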

Typically, when a company announces new in-house chips, it spells trouble for third-party suppliers like NVIDIA, which is a big player in the enterprise AI space thanks to companies using its GPUs for training and its Arm-based Grace data center CPU.

Instead of eschewing the partnership in favor of proprietary chips, Amazon is further cementing the relationship by offering enterprise customers cloud access to NVIDIA's latest H200 AI GPUs. It will also operate more than 16,000 NVIDIA GH200 Grace Hopper Superchips expressly for NVIDIA's research and development team.
