AWS Throws Its Hat in Generative AI Ring with Hugging Face Deal
With generative AI products like ChatGPT dominating tech discourse and making waves among consumers and enterprise IT pros alike, cloud leader Amazon Web Services (AWS) is throwing its own hat in the ring.
The company this week announced a partnership with Hugging Face, an open source platform for machine learning models and tools. Through the partnership, users of both Hugging Face and AWS can easily integrate ML models into their apps while taking advantage of the cost savings, scalability and other advantages of the cloud. The goal, according to a Hugging Face blog post on Tuesday, is to "democratize machine learning."
"Through the strategic partnership, Hugging Face will leverage AWS as a preferred cloud provider so developers in Hugging Face's community can access AWS's state-of-the-art tools (e.g., Amazon SageMaker, AWS Trainium, AWS Inferentia) to train, fine-tune, and deploy models on AWS," the blog said. "This will allow developers to further optimize the performance of their models for their specific use cases while lowering costs."
The partnership specifically focuses on generative AI, which refers to AI technologies that can create "original" content, including text, music, and images. Popular examples of generative AI include OpenAI's DALL-E and ChatGPT, both of which AWS rival Microsoft has recently integrated into key pieces of its stack, most notably the Bing search engine. Google, another key player in the cloud market, has also launched its own generative AI play, called Bard.
Such technologies are poised to dramatically change the way consumers interact with apps, but, as AWS noted in its own blog post, they can be prohibitively difficult to use and deploy.
"Building, training, and deploying large language and vision models is an expensive and time-consuming process that requires deep expertise in machine learning (ML)," AWS said in its blog. "Since the models are very complex and can contain hundreds of billions of parameters, generative AI is largely out of reach for many developers."
The partnership with Hugging Face will let AWS developers mine Hugging Face's collection of generative AI models for their apps, while taking advantage of AWS technologies that are specifically designed for machine learning, including the new Inf2 and Trn1 instances for EC2 and the Amazon SageMaker machine learning platform. The benefits, according to AWS, are faster training times, low latency and high scalability, all at relatively low cost and with little expertise required.
"The future of AI is here, but it's not evenly distributed. Accessibility and transparency are the keys to sharing progress and creating tools to use these new capabilities wisely and responsibly," said Hugging Face CEO Clement Delangue in a prepared statement. "Amazon SageMaker and AWS-designed chips will enable our team and the larger machine learning community to convert the latest research into openly reproducible models that anyone can build on."
AWS customers can access Hugging Face models through SageMaker.
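For readers curious what that access looks like in practice, here is a minimal sketch using the SageMaker Python SDK's Hugging Face integration. The model ID, task, instance type, and container versions below are illustrative choices for this example, not details from the announcement, and running the deploy step would provision real, billable AWS infrastructure.

```python
# Hedged sketch: serving a Hugging Face Hub model from a SageMaker endpoint.
# Assumes the `sagemaker` SDK is installed and an IAM execution role exists.

hub_config = {
    "HF_MODEL_ID": "distilbert-base-uncased-finetuned-sst-2-english",  # any Hub model ID
    "HF_TASK": "text-classification",  # the pipeline task the endpoint should serve
}

def deploy(role_arn):
    """Create a SageMaker endpoint serving the Hub model configured above."""
    from sagemaker.huggingface import HuggingFaceModel

    model = HuggingFaceModel(
        env=hub_config,
        role=role_arn,                  # IAM role with SageMaker permissions
        transformers_version="4.26",    # illustrative container versions
        pytorch_version="1.13",
        py_version="py39",
    )
    # Provisions a managed inference endpoint (billable) when run:
    return model.deploy(initial_instance_count=1, instance_type="ml.m5.xlarge")

# Usage (commented out; requires AWS credentials and an IAM role ARN):
# predictor = deploy("arn:aws:iam::111122223333:role/MySageMakerRole")
# predictor.predict({"inputs": "The partnership makes ML deployment much easier."})
```

The `HF_MODEL_ID`/`HF_TASK` environment variables tell the prebuilt Hugging Face inference container which model to pull from the Hub and which pipeline to run, so no custom serving code is needed for common tasks.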
About the Author
Gladys Rama (@GladysRama3) is the editor of Redmondmag.com, RCPmag.com and AWSInsider.net, and the editorial director of Converge360.