AWS Throws Down the Gauntlet at Intel, Launches High-Performance Chips at re:Invent Conference


Cloud computing giant AWS launched the third iteration of its Graviton chip this week, positioning it as a strong competitor to Intel’s and AMD’s x86 processors.

Amid the chip shortage crisis that worsened during the pandemic and hit tech giants such as Intel, Apple and Nvidia, Amazon’s cloud computing unit deepened its push into the highly competitive custom silicon space. At the tenth AWS re:Invent conference in Las Vegas this week, AWS unveiled two new custom high-performance chips which, it says, will give organizations the same performance at lower prices than Intel and Nvidia chips.

The chip shortfall is also creating fresh opportunities for domestic chip manufacturing in the U.S., encouraging tech giants like Amazon to make forays into the space. Intel, which has slid to third place behind Taiwan Semiconductor Manufacturing Co. (TSMC) and Samsung Foundry, is likewise hoping to restore its leading position by capitalizing on a recent rise in demand and federal subsidies.

During its tenth re:Invent conference, AWS made a few surprise announcements, including faster chips, advances in AI capabilities and more developer-friendly tools. Adam Selipsky, the new CEO of AWS who took over from Andy Jassy earlier this year, pulled back the curtain on the company’s new high-performance chips.

See More: Snapdragon Tech Summit 2021: Qualcomm Bets Big On Gaming And ARM-based Chips

“So today, I’m excited to announce the new Trn1 instance powered by Trainium, which we expect to deliver the best price-performance for training deep learning models in the cloud and the fastest on EC2.

“Trn1 is the first EC2 instance with up to 800 gigabits per second of networking bandwidth. So it’s absolutely great for large-scale, multi-node distributed training use cases,” Selipsky said. He said the chip supports use cases like image recognition, natural language processing, fraud detection and forecasting. The instances can also be linked together in “ultra clusters” for even greater bandwidth and processing speed.

“We can network these together in what we call Ultra clusters, consisting of tens of thousands of training accelerators interconnected with petabit-scale networking. These Ultra clusters are powerful machine learning supercomputers for rapidly training the most complex deep learning models with trillions of parameters,” Selipsky said, adding that AWS will work with partners like SAP to take advantage of the new capabilities.
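For teams that want to experiment once Trn1 instances become available in their accounts, requesting capacity works like any other EC2 launch. Below is a minimal boto3 sketch; the AMI ID, instance size, region and placement group name are illustrative assumptions, not details confirmed in the announcement.

```python
# Minimal sketch: requesting a Trainium-backed Trn1 instance via boto3.
# The AMI ID, instance size, region and placement group are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # assumed region

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder deep learning AMI
    InstanceType="trn1.32xlarge",      # assumed Trn1 size
    MinCount=1,
    MaxCount=1,
    # A cluster placement group keeps instances physically close together,
    # the usual starting point for multi-node distributed training setups.
    Placement={"GroupName": "my-training-cluster"},  # hypothetical group
)

print(response["Instances"][0]["InstanceId"])
```

Multi-node training on Trainium also relies on AWS’s Neuron SDK and Elastic Fabric Adapter networking, both omitted here for brevity.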

See More: Global Chip Shortage: Can Cloud Migration Help Avert the Crisis?

Operating in an intensely competitive space, AWS is also rapidly upping the performance and lowering the cost of its hardware. The company’s new M6a instances, which use AMD’s third-generation EPYC processors, are said to deliver a 35% price/performance improvement over the previous M5a instances, which used second-generation EPYC chips. They’ll come in a variety of sizes, from two virtual CPUs with 8GB of RAM (m6a.large) to 192 virtual CPUs with 768GB of RAM (m6a.48xlarge).
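Those size figures are easy to check programmatically. As a minimal sketch, assuming boto3 credentials and a region where M6a is available, the EC2 DescribeInstanceTypes API reports the vCPU and memory specs for each size (AWS reports memory in MiB/GiB rather than GB):

```python
# Minimal sketch: looking up vCPU and memory specs for M6a instance sizes.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # assumed region

resp = ec2.describe_instance_types(
    InstanceTypes=["m6a.large", "m6a.48xlarge"]
)

for itype in resp["InstanceTypes"]:
    vcpus = itype["VCpuInfo"]["DefaultVCpus"]
    mem_gib = itype["MemoryInfo"]["SizeInMiB"] / 1024
    print(f'{itype["InstanceType"]}: {vcpus} vCPUs, {mem_gib:.0f} GiB RAM')
```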

The chips unveiled at AWS re:Invent 2021 will also include “always-on memory encryption” and will use speedier proprietary circuitry for encryption and decryption, AWS said.

Let us know if you enjoyed reading this news on LinkedIn, Twitter, or Facebook. We would love to hear from you!