
AWS Boosts Machine Learning in Amplify Mobile Back-End

AWS Amplify, a back-end development framework for mobile and Web apps, is getting improved machine learning functionality.

Amplify, with roots going back to 2015 when it was called Mobile Hub, is described as "an opinionated set of libraries, UI components, and a command-line interface to build an app backend and integrate it with your iOS, Android, Web, and React Native apps."

Its services include storage, authentication & authorization, APIs (GraphQL and REST), analytics, push notifications, chat bots, AR/VR, and now improved machine learning functionality.

Specifically, the Amplify Framework now has a Predictions category, which gives app developers access to pre-trained AI services -- such as image and video analysis, natural language processing, personalized recommendations, virtual assistants, and forecasting -- without requiring advanced AI expertise.

Developers can run the command amplify add predictions to configure an app to do any of the following (a code sketch follows the list):

  • Identify text, entities, and labels in images using Amazon Rekognition, or identify text in scanned documents to get the contents of fields in forms and information stored in tables using Amazon Textract.
  • Convert text into a different language using Amazon Translate, text to speech using Amazon Polly, and speech to text using Amazon Transcribe.
  • Interpret text to find the dominant language, the entities, the key phrases, the sentiment, or the syntax of unstructured text using Amazon Comprehend.
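
To make this concrete, the snippet below is a minimal sketch of the text-translation case using the Predictions category of the Amplify JavaScript library. It assumes a back end already configured with amplify add predictions and amplify push, the aws-exports file generated by the Amplify CLI, and the @aws-amplify/predictions package; exact package names and option shapes may vary between Amplify releases.

    // Minimal sketch: translate text through the Amplify Predictions category.
    // Assumes aws-exports.js was generated by "amplify add predictions" / "amplify push".
    import Amplify from 'aws-amplify';
    import Predictions, { AmazonAIPredictionsProvider } from '@aws-amplify/predictions';
    import awsconfig from './aws-exports';

    Amplify.configure(awsconfig);
    Amplify.addPluggable(new AmazonAIPredictionsProvider());

    async function translate(text: string): Promise<string> {
      // Amazon Translate via Predictions.convert; the language codes here are illustrative.
      const result = await Predictions.convert({
        translateText: {
          source: { text, language: 'en' },
          targetLanguage: 'fr',
        },
      });
      return result.text;
    }

    translate('Machine learning is fun').then(console.log).catch(console.error);

Text-to-speech and speech-to-text follow the same Predictions.convert pattern with different option keys.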

"It has never been easier to add machine learning functionalities to a Web or mobile app," said AWS evangelist Danilo Poccia, who details how to do just that in a July 31 blog post.

An AWS announcement explains that Web developers can leverage the Amplify JavaScript library with the new Predictions category to add several AI/ML use cases -- including text translation, speech to text generation, image recognition, text to speech, and insights from text -- to Web applications.
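
For the "insights from text" case, a similarly small sketch (same configuration as above) would use Predictions.interpret, which fronts Amazon Comprehend; the option and result shapes shown are based on the Amplify documentation and may differ by version.

    import Predictions from '@aws-amplify/predictions';

    // Interpret unstructured text with Amazon Comprehend via the Predictions category.
    // type: 'ALL' requests language, entities, key phrases, sentiment, and syntax together.
    async function analyze(text: string) {
      const result = await Predictions.interpret({
        text: {
          source: { text },
          type: 'ALL',
        },
      });
      console.log('Language:', result.textInterpretation.language);
      console.log('Sentiment:', result.textInterpretation.sentiment);
      console.log('Key phrases:', result.textInterpretation.keyPhrases);
    }

    analyze('AWS Amplify makes it easy to add machine learning to apps.');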

Mobile developers, meanwhile, can use the new iOS and Android SDKs for Amazon SageMaker to make inference requests against their SageMaker-hosted custom models through an HTTPS endpoint. "The Android SDK also includes support for Amazon Textract that allows developers to extract text and data from scanned documents. These services add to the list of existing AI services supported in iOS and Android SDKs."
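
The article refers to the native iOS and Android SDKs, but the underlying operation is the SageMaker runtime's InvokeEndpoint call against the model's HTTPS endpoint. Purely as an illustration of that idea, here is a sketch using the AWS SDK for JavaScript instead; the endpoint name, region, payload format, and content type are hypothetical and depend entirely on how the custom model was deployed.

    import * as AWS from 'aws-sdk';

    // Illustration only: send an inference request to a SageMaker-hosted custom model
    // through its HTTPS endpoint using the SageMaker runtime InvokeEndpoint operation.
    const runtime = new AWS.SageMakerRuntime({ region: 'us-east-1' }); // hypothetical region

    async function predict(features: number[]): Promise<string> {
      const response = await runtime
        .invokeEndpoint({
          EndpointName: 'my-custom-model-endpoint', // hypothetical endpoint name
          ContentType: 'text/csv',                  // depends on the deployed model
          Body: features.join(','),
        })
        .promise();
      return response.Body.toString(); // assumes the model returns a UTF-8 text payload
    }

    predict([1.2, 3.4, 5.6]).then(console.log).catch(console.error);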

More information is provided in the Predictions documentation, a get-started tutorial, the open source project's GitHub repository, and a walkthrough.

About the Author

David Ramel is an editor and writer for Converge360.
