AWS, Researchers Developing AI To Fight Medical Misinformation

Amazon Web Services (AWS) has been working with researchers on an AI system built on its cloud that would help public health officials detect and combat the spread of medical misinformation.

Currently in the prototype stage, the open source Project Heal toolkit aims to proactively identify and counteract trends in medical misinformation using predictive analytics, machine learning and AI. It was created in collaboration with the AWS Digital Innovation team and researchers from the University of Pittsburgh, the University of Illinois Urbana-Champaign (UIUC) and the University of California Davis Health Cloud Innovation Center (UCDH CIC).

"As demonstrated throughout the COVID-19 pandemic, community members' acceptance of dangerous medical rumors can have severe impacts on their health and livelihood, leading them to eschew professional diagnoses in favor of scientifically unfounded and potentially dangerous treatment paths," the CIC wrote in a blog post last month. "In many cases, this negatively impacts patients' health and recovery and, in extreme cases, causes loss of life. Currently, public health administrators do not have the ability to rapidly identify and respond to these false medical information trends."

The researchers described three types of bad medical information that Project Heal can help officials detect and prevent: misinformation (data that's factually incorrect), malinformation (data that is technically correct but has been taken out of context with the aim to mislead or cause harm) and disinformation (data that's deliberately meant to mislead or cause harm).
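For readers who want the distinction in concrete terms, the three categories map cleanly onto a small data structure. The following Python sketch is purely illustrative (the enum and its wording are ours, not Project Heal's), restating the definitions above:

```python
from enum import Enum


class BadMedicalInfo(Enum):
    """Three categories of harmful medical information, per the researchers."""

    MISINFORMATION = "factually incorrect content"
    MALINFORMATION = "accurate content taken out of context to mislead or harm"
    DISINFORMATION = "content deliberately crafted to mislead or cause harm"
```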

To address these threats, Project Heal relies heavily on AWS technologies. The list of AWS tools that it leverages includes Amazon ECS, AWS Fargate, Amazon Kinesis Data Streams, Amazon Kinesis Data Firehose, AWS Lambda, Amazon S3, Amazon A2I, Amazon Neptune, Amazon SageMaker, Amazon CloudFront, Amazon API Gateway and Amazon DynamoDB.
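As a hypothetical illustration of how the front of that pipeline might be wired, the snippet below pushes a piece of collected content into a Kinesis Data Stream using boto3, the AWS SDK for Python. The stream name and record shape are assumptions made for this sketch, not details from Project Heal:

```python
import json

import boto3

kinesis = boto3.client("kinesis")


def ingest_post(post_id: str, source: str, text: str) -> None:
    """Push one piece of collected content into a Kinesis Data Stream."""
    record = {"post_id": post_id, "source": source, "text": text}
    kinesis.put_record(
        StreamName="medical-content-ingest",  # hypothetical stream name
        Data=json.dumps(record).encode("utf-8"),
        PartitionKey=source,  # distribute records across shards by source
    )
```

From there, Kinesis Data Firehose can batch records into S3 and Lambda functions can pick them up for downstream analysis, consistent with the services listed above.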

Combined, the AWS tools listed above enable Project Heal to ingest data from the Internet to identify and categorize likely sources of medical misinformation, rating them by the severity of their potential impact. AWS explained the process in a separate blog post:

The detection engine will be built using graph neural networks, which will be supported by a large, scalable, fully managed graph database (Amazon Neptune). Amazon Comprehend will be used to support keyword and entity extraction from content. A human feedback loop will consistently audit and improve the model using Amazon Augmented AI (Amazon A2I). Both detection and scoring will be supported by Amazon SageMaker, a fully managed service to prepare data and build, train, and deploy ML models for any use case. Additionally, generative AI will support the summarization and grouping of related misinformation content.
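To make that concrete, here is a rough sketch, under our own assumptions, of what the extraction-and-graph step could look like: Amazon Comprehend pulls entities out of a claim, and the claim and its entities are written to Neptune over its Gremlin endpoint. The endpoint address, vertex labels and property names are placeholders, not Project Heal's actual schema:

```python
import boto3
from gremlin_python.driver.driver_remote_connection import DriverRemoteConnection
from gremlin_python.process.anonymous_traversal import traversal

comprehend = boto3.client("comprehend")


def index_claim(claim_id: str, text: str) -> None:
    """Extract entities from a claim and record them in a Neptune graph."""
    # Entity extraction with Amazon Comprehend.
    entities = comprehend.detect_entities(Text=text, LanguageCode="en")["Entities"]

    # Placeholder Neptune cluster endpoint.
    conn = DriverRemoteConnection("wss://my-neptune-cluster:8182/gremlin", "g")
    g = traversal().withRemote(conn)
    try:
        # One vertex per claim, one per extracted entity, joined by edges.
        claim = g.addV("claim").property("id", claim_id).property("text", text).next()
        for ent in entities:
            (
                g.addV("entity")
                .property("text", ent["Text"])
                .property("type", ent["Type"])
                .addE("mentioned_in")
                .to(claim)
                .iterate()
            )
    finally:
        conn.close()
```

A graph built this way is the kind of substrate that a graph neural network, along with SageMaker-hosted scoring models, could then operate over.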

Project Heal is also meant to help public health officials quickly generate responses to counter the spread of false data. Using AI, the system will help users tailor their messaging to target specific communities, as well as include well-sourced, medically supported data points in their communications. AWS explains:

Project Heal will give public health professionals the ability to generate, edit, tweak, and adapt counter messaging of false claims for each population in their community. To achieve this, Project Heal will use generative AI foundation models (FMs) through Amazon Bedrock, such as Amazon Titan. Supporting evidence, built from trusted sources of information, will be available to users via Retrieval-Augmented Generation (RAG), an approach that reduces some of the shortcomings of large language model (LLM)-based queries. Through this technique, Amazon Bedrock is able to generate more personalized messaging by combining trusted information and user preferences.
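A bare-bones version of that RAG step, written against the Bedrock runtime API with Amazon Titan, might look like the sketch below. The retrieval layer is stubbed out as a plain list of vetted passages, and the prompt, model ID and generation settings are our assumptions, not Project Heal's:

```python
import json

import boto3

bedrock = boto3.client("bedrock-runtime")


def draft_counter_message(false_claim: str, trusted_passages: list[str],
                          audience: str) -> str:
    """Draft a counter-message grounded in trusted sources (simple RAG)."""
    # In a real system, trusted_passages would come from a retrieval layer
    # over vetted medical sources; here the caller supplies them directly.
    context = "\n".join(f"- {p}" for p in trusted_passages)
    prompt = (
        f"Using only the trusted sources below, write a short, respectful "
        f"message for {audience} that corrects this false claim.\n\n"
        f"False claim: {false_claim}\n\nTrusted sources:\n{context}\n"
    )
    response = bedrock.invoke_model(
        modelId="amazon.titan-text-express-v1",  # one of the Titan text models
        body=json.dumps({
            "inputText": prompt,
            "textGenerationConfig": {"maxTokenCount": 512, "temperature": 0.3},
        }),
    )
    return json.loads(response["body"].read())["results"][0]["outputText"]
```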

The researchers demonstrated Project Heal to five public health officials, who responded positively to the prototype, according to AWS. "Due to Project Heal's intentional delineation between verified and non-verified data sources, all five users stated that they would trust the tool's ability to provide them with accurate information and communication," it said in the blog. "This is particularly promising, as a lack of public trust in AI technologies has proven to be a significant hurdle in driving adoption for certain AI/ML solutions."

About the Author

Gladys Rama (@GladysRama3) is the editorial director of Converge360.
