News

Amazon Backs AI Security Standards Group Beside OpenAI, Microsoft

A new industry group aimed at promoting AI security standards has a high-profile roster of backers, including Amazon, Microsoft, OpenAI and others.

The Coalition for Secure AI (CoSAI) launched on Thursday, describing itself as an "open-source initiative designed to give all practitioners and developers the guidance and tools they need to create Secure-by-Design AI systems."

Amazon is listed as a founding member, alongside OpenAI, Anthropic, Cisco, Cohere, Chainguard, GenLab and Wiz. Listed as "founding Premier Sponsors" are Microsoft, Nvidia, Google, IBM, Intel and PayPal.

Other members include academics, industry leaders and "other experts."

The primary mission of CoSAI is to "develop comprehensive security measures that address AI systems' classical and unique risks." This is difficult to do in the current AI landscape, the group argues, because existing efforts to establish AI security standards are fragmented, uncoordinated and inconsistently applied.

Though it recognizes those efforts and plans to collaborate with other groups focused on AI security, CoSAI believes it is uniquely positioned to establish standards that can be widely agreed upon and adopted.

"From day one, AWS AI infrastructure and the Amazon services built on top of it have had security and privacy features built-in that give customers strong isolation with flexible control over their systems and data," said Paul Vixie, vice president and Distinguished Engineer at Amazon Web Services, in a prepared statement.

"As a sponsor of CoSAI, we're excited to collaborate with the industry on developing needed standards and practices that will strengthen AI security for everyone."

Per CoSAI's founding charter, the group intends to find and share mitigations for AI security risks such as "stealing the model, data poisoning of the training data, injecting malicious inputs through prompt injection, scaled abuse prevention, membership inference attacks, model inversion attacks or gradient inversion attacks to infer private information, and extracting confidential information from the training data."

Interestingly, the group does not consider the following areas to be part of its purview: "misinformation, hallucinations, hateful or abusive content, bias, malware generation, phishing content generation or other topics in the domain of content safety."

At its outset, CoSAI plans to pursue the following three research areas:

  • AI software supply chain security: The group will explore how to assess the safety of a given AI system based on its provenance. For instance, the group will examine who trained the AI system and how, as well as whether its training process may have left the AI vulnerable to tampering at any point.
  • Security framework development: The group will identify "investments and mitigation strategies" to address security vulnerabilities in both today's AI systems and future versions.
  • Security and privacy governance: The group will create guidelines to help AI developers and vendors measure risk in their systems.

CoSAI expects to release a paper by the end of this year providing an overview of its findings in these three areas. The paper will be followed by a "media event" in early 2025.

About the Author

Gladys Rama (@GladysRama3) is the editorial director of Converge360.
