AWS re:Invent 2024: Top 'Responsible AI' Sessions

As advanced generative AI remakes the cloudscape, it's no surprise that the upcoming AWS re:Invent conference is devoting more than a third of its 2,530 sessions to AI/ML, just one of the event's 21 topics. That is AI dominance.

However, with the increasing dominance of AI come increasing concerns about its ethical use, concerns that have given rise to "responsible AI." AWS is certainly not ignoring them: 36 of the 860 AI/ML sessions fall under that area of interest.

Here are the responsible AI sessions we're most interested in at AWS re:Invent 2024, taking place Dec. 2-6 in Las Vegas (with online-only registration available).

Advancing responsible AI: Managing generative AI risk
"Risk assessment is an essential part of responsible AI (RAI) development and is an increasingly common requirement in AI standards and laws such as ISO 42001 and the EU AI Act. This chalk talk provides an introduction to best practices for RAI risk assessment for generative AI applications, covering controllability, veracity, fairness, robustness, explainability, privacy and security, transparency, and governance. Explore examples to estimate the severity and likelihood of potential events that could be harmful. Learn about Amazon SageMaker tooling for model governance, bias, explainability, and monitoring, and about transparency in the form of service cards as potential risk mitigation strategies."

Practicing responsible generative AI with the help of open source
"Many organizations are reinventing their Kubernetes environments to efficiently deploy generative AI workloads, including distributed training and inference APIs for applications like text generation, image generation, or other use cases. In this chalk talk, learn how to integrate Kubernetes with open source tools to practice responsible generative AI. Explore key considerations for deploying AI models ethically and sustainably, leveraging the scalability and resiliency tenets of Kubernetes, along with the collaborative and community-driven development principles of the open source CNCF tool set."

Responsible AI: From theory to practice with AWS
"The rapid growth of generative AI brings promising innovation but raises new challenges around its safe and responsible development and use. While challenges like bias and explainability were common before generative AI, large language models bring new challenges like hallucination and toxicity. Join this session to understand how your organization can begin its responsible AI journey. Get an overview of the challenges related to generative AI, and learn about the responsible AI in action at AWS, including the tools AWS offers. Also hear Cisco share its approach to responsible innovation with generative AI."

Responsible generative AI: Evaluation best practices and tools
"With the newfound prevalence of applications built with large language models (LLMs) including features such as Retrieval Augmented Generation (RAG), agents, and guardrails, a responsibly-driven evaluation process is necessary to measure performance and mitigate risks. This session covers best practices for a responsible evaluation. Learn about open access libraries and AWS services that can be used in the evaluation process, and dive deep on the key steps of designing an evaluation plan including defining a use case, assessing potential risks, choosing metrics and release criteria, designing an evaluation dataset, and interpreting results for actionable risk mitigation."

Build responsible generative AI apps with Amazon Bedrock Guardrails
"In this workshop, dive deep into building responsible generative AI applications using Amazon Bedrock Guardrails. Develop a generative AI application from scratch, test its behavior, and discuss the potential risks and challenges associated with language models. Use guardrails to filter undesirable topics, block harmful content, avoid prompt injection attacks, and handle sensitive information such as PII. Finally, learn how to detect and avoid hallucinations in model responses that are not grounded in your data. See how you can create and apply custom tailored guardrails directly with FMs and fine-tuned FMs on Amazon Bedrock to implement responsible AI policies within your generative AI applications."

Gen AI in the workplace: Productivity, ethics, and change management
"Navigating the transformative impact of generative AI on the modern workplace, this session explores strategies to maximize productivity gains while addressing ethical concerns and change management challenges. Key topics include ethical implementation frameworks, fostering responsible AI usage, and optimizing human-AI collaboration dynamics. The session examines effective change management approaches to ensure smooth integration and adoption of generative AI technologies within organizations. Join us to navigate the intersection of generative AI, productivity, ethics, and organizational change, charting a path toward an empowered, AI-driven workforce."

KONE safeguards AI applications with Amazon Bedrock Guardrails
"Amazon Bedrock Guardrails enables organizations to deliver consistently safe and moderated user experiences through generative AI applications, regardless of the underlying foundation models (FM). Join the session to deep dive into how guardrails provide additional customizable safeguards on top of the native protections of FMs, delivering industry-leading safety protection. Finally, hear from KONE's CEO on how they use Amazon Bedrock Guardrails to provide safe and accurate real-time AI support to 30,000 technicians that execute 80,000 field customer visits per day. Get their tips on adoption of responsible AI principles that deliver value while achieving productivity gains."

Safeguard your generative AI apps from prompt injections
"Prompt injection attacks pose a risk to the integrity and safety of generative AI (gen AI) applications. Threat actors can craft prompts to manipulate the system, leading to the generation of harmful, biased, or unintended outputs. In this chalk talk, explore effective strategies to defend against prompt injection vulnerabilities. Learn about robust input validation, secure prompt engineering principles, and comprehensive content moderation frameworks. See a demo of various prompts and their associated defense mechanisms. By adopting these best practices, you can help safeguard your generative AI applications and foster responsible AI practices in your organization."

Strategies to mitigate social bias when implementing gen AI workloads
"As gen AI systems become more advanced, there is growing concern about perpetuating social biases. This talk examines challenges associated with gen AI workloads and strategies to mitigate bias throughout their development process, and discusses features such as Amazon Bedrock Guardrails, Amazon SageMaker Clarify, and SageMaker Data Wrangler. Join to learn how to design gen AI workloads that are fair, transparent, and socially responsible."

Developing explainable AI models with Amazon SageMaker
"As AI systems are increasingly used in decision-making, explainable models have become essential. This dev chat explores tools and techniques for building these models using Amazon SageMaker. It walks through several techniques for interpreting complex models, providing insights into their decision-making processes. Learn how to ensure model transparency and fairness in machine learning pipelines and how to deploy these models using SageMaker endpoints. This dev chat is ideal for data scientists focusing on AI ethics and model interpretability."

Note that the catalog doesn't allow direct linking to individual session descriptions, but they can be found by searching the AWS re:Invent session catalog.

About the Author

David Ramel is an editor and writer at Converge 360.
