With Trainium Chip, AWS Lowers Cost Barrier to Machine Learning Training

A custom machine learning (ML) processor from Amazon Web Services (AWS) called Trainium is poised for release in 2021, the company revealed at its re:Invent conference this week.

Trainium is a cost-effective option for cloud-based ML model training, according to AWS CEO Andy Jassy, who introduced Trainium during his virtual re:Invent keynote. "We know that we want to keep pushing the price performance on machine learning training, so we're going to have to invest in our own chips," he said. "You have an unmatched array of instances in AWS, coupled with innovation in chips."

The Trainium chip was designed to offer the highest performance and the most teraflops (TFLOPS) of compute power for ML in the cloud, Jassy said, enabling a broader set of ML applications. The chip is specifically optimized for deep learning training workloads for applications including image classification, semantic search, translation, voice recognition, natural language processing and recommendation engines.

Trainium is the second piece of custom, in-house silicon from AWS. Its predecessor, Inferentia, debuted two years ago. The company recently announced plans to move some Alexa and facial recognition computing to the Inferentia chips.

According to AWS, Inferentia delivers up to 30 percent higher throughput and up to 45 percent lower cost per inference than Amazon EC2 G4 instances, which were already the lowest-cost instances for ML inference in the cloud.

"While Inferentia addressed the cost of inference, which constitutes up to 90% of ML infrastructure costs, many development teams are also limited by fixed ML training budgets," AWS states on the Trainium product site. "This puts a cap on the scope and frequency of training needed to improve their models and applications. AWS Trainium addresses this challenge by providing the highest performance and lowest cost for ML training in the cloud."

The combination of Trainium and Inferentia provides an end-to-end flow of ML compute "from scaling training workloads to deploying accelerated inference," the company says.

The Trainium and Inferentia chips share the same AWS Neuron SDK, which makes it easy for developers already up to speed on Inferentia to get started with Trainium. Because the Neuron SDK is integrated with such popular ML frameworks as TensorFlow, PyTorch and MXNet, developers can readily migrate to AWS Trainium from GPU-based instances with minimal code changes.
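To get a feel for the developer workflow, the following is a minimal sketch of the Neuron SDK's documented PyTorch path on Inferentia (inference compilation); Trainium-specific code was not public at press time, so the assumption here is that the same trace-and-save pattern carries over. The model and file name are placeholders.

    # Sketch: compiling a PyTorch model with the AWS Neuron SDK
    # (torch-neuron, the Inferentia package documented at press time).
    import torch
    import torch_neuron  # registers the torch.neuron.* APIs
    from torchvision import models

    model = models.resnet50(pretrained=True)
    model.eval()

    # Dummy input used to trace the model graph.
    image = torch.zeros([1, 3, 224, 224], dtype=torch.float32)

    # Compile for Neuron devices; unsupported ops fall back to CPU.
    model_neuron = torch.neuron.trace(model, example_inputs=[image])

    # Save the compiled artifact; it loads later with torch.jit.load.
    model_neuron.save("resnet50_neuron.pt")

That snippet compiles a model for inference; AWS's claim is that moving training workloads to Trainium will involve similarly small changes, since both chips share the same SDK.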

AWS Trainium will be available via Amazon EC2 instances and AWS Deep Learning AMIs, as well as through managed services including Amazon SageMaker, Amazon ECS, Amazon EKS and AWS Batch.
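On the managed-service side, launching a training job on a Trainium-backed instance would presumably look like any other SageMaker estimator call. The sketch below uses the existing SageMaker Python SDK; the instance type string is a hypothetical placeholder, as AWS has not yet published Trainium instance names.

    # Sketch: a SageMaker training job targeting a Trainium-backed
    # instance. The instance type below is HYPOTHETICAL -- AWS had not
    # announced Trainium instance names when this article ran.
    import sagemaker
    from sagemaker.pytorch import PyTorch

    session = sagemaker.Session()
    role = sagemaker.get_execution_role()  # IAM role for the training job

    estimator = PyTorch(
        entry_point="train.py",            # your existing training script
        role=role,
        framework_version="1.6.0",
        py_version="py3",
        instance_count=1,
        instance_type="ml.trn1.2xlarge",   # placeholder instance type
        sagemaker_session=session,
    )

    estimator.fit({"training": "s3://my-bucket/training-data"})

If the "minimal code changes" claim holds, the training script itself would stay the same; only the instance type and compilation target would change.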

About the Author

John K. Waters is the editor in chief of a number of Converge360.com sites, with a focus on high-end development, AI and future tech. He's been writing about cutting-edge technologies and the culture of Silicon Valley for more than two decades, and he's written more than a dozen books. He also co-scripted the documentary film Silicon Valley: A 100 Year Renaissance, which aired on PBS. He can be reached at jwaters@converge360.com.
