AISI discusses OpenAI and Anthropic collaboration to test security of AI models

15/09/2025 | AISI

The UK AI Security Institute (AISI) has announced details of its collaborations with the US Center for AI Standards and Innovation (CAISI), OpenAI and Anthropic to identify and strengthen artificial intelligence (AI) system safeguards. AISI's security experts worked with both AI developers to identify vulnerabilities, with the two companies providing in-depth access to non-public tools and safeguarding information. The purpose of this work is to help leading developers strengthen the security of their systems and to give governments a better understanding of AI risks.

For further information, see the OpenAI and Anthropic blog posts.


Training Announcement: Freevacy offers a range of independently recognised professional AI governance qualifications and AI Literacy short courses that enable specialist teams to implement robust oversight, benchmark AI governance maturity, and establish a responsible-by-design approach across the entire AI lifecycle. Find out more.


What is this page?

You are reading a summary article on the Privacy Newsfeed, a free resource for DPOs and other professionals with privacy or data protection responsibilities, helping them stay informed of industry news all in one place. The information here is a brief snippet relating to a single piece of original content, or to several articles about a common topic or thread. The main contributor is listed in the top left-hand corner, just beneath the article title.

The Privacy Newsfeed monitors over 300 global publications, of which more than 6,250 summary articles have been posted to the online archive dating back to the beginning of 2020. A weekly roundup is available by email every Friday.