Lagarde laments Europe's AI testing disadvantage as new models advance
Published: 08/05/2026 | Last Updated: 13/05/2026 | Reuters
European Central Bank (ECB) President Christine Lagarde has warned that Europe remains at a disadvantage in testing Anthropic's Mythos Preview AI model. The model is intended to identify weaknesses in computer code to improve cybersecurity, but experts are concerned that it could also be used to escalate attacks on banks. Access to Mythos Preview is currently restricted to US-based companies and a few select international partners, a situation Lagarde described as "creating an unequal playing field".
Despite the lack of access, the ECB is actively developing countermeasures and identifying necessary defences against potential misuse by malign or state-sponsored actors. Lagarde noted that state-sponsored attacks are a particular concern due to the significant computing power required to operate such models. In response to these emerging threats, the ECB has begun questioning financial institutions about their readiness to manage a new generation of cybersecurity-focused AI models.
Not all AI developers are prioritising US companies: OpenAI is reportedly in talks with the European Commission over access to its latest cybersecurity-focused AI model, GPT-5.5. According to Politico, the company's lead executive on the initiative, former UK Chancellor George Osborne, wrote to the Commission offering access to the model.
However, in a sign that AI-powered hacking has arrived, a new report from Google's threat intelligence group warns that the use of AI tools to enhance criminal activity has evolved into an industrial-scale threat within three months. Criminal organisations and state-linked actors from China, North Korea, and Russia are reportedly using commercial models, including Gemini, Claude, and OpenAI tools, to refine and scale their operations. These actors leverage AI to increase the speed and sophistication of attacks, using the technology to build malware and exploit software vulnerabilities.
The report highlights that a criminal group recently attempted to execute a mass exploitation campaign using a zero-day vulnerability and a large language model. Furthermore, threat actors are experimenting with OpenClaw, a tool known for lacking guardrails. Analysts suggest the AI vulnerability race has already begun, marking a significant shift in the global cybersecurity landscape, as automated tools enable attackers to sustain attacks against targets more effectively.
As businesses face mounting pressure to adopt AI-driven security, the National Cyber Security Centre (NCSC) has published a blog article outlining ten essential questions to evaluate security, legal, and operational risks before deployment.
Two further independent studies suggest that Mythos Preview and GPT-5.5 continue to advance apace.
First, a new study by the UK AI Security Institute (AISI) confirmed that the number of autonomous tasks that frontier AI models can complete has been doubling every few months. Recent evaluations of newer models, including Mythos and GPT-5.5, reveal doubling trends that substantially exceed previous capabilities.
Testing indicates that these advanced models are extraordinarily capable of identifying vulnerabilities and converting them into critical exploit paths in near-real time. While not a perfect measure of real-world impact, the accelerating rate of change suggests a growing potential for AI capabilities to translate into tangible security risks.
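To put the doubling claim in perspective, a capability that doubles every few months compounds quickly over a year. The short Python sketch below works through the arithmetic; the doubling periods used are illustrative assumptions for the sake of the example, not figures drawn from the AISI evaluations.

    # Illustrative only: the doubling periods below are assumed values,
    # not figures reported by the AISI study.
    def capability_multiple(months_elapsed: float, doubling_months: float) -> float:
        """Return how many times a capability multiplies over a period
        if it doubles every doubling_months."""
        return 2 ** (months_elapsed / doubling_months)

    for doubling in (3, 4, 6):  # assumed doubling periods, in months
        growth = capability_multiple(12, doubling)
        print(f"Doubling every {doubling} months -> {growth:.0f}x in one year")

Even the slowest assumed rate implies a fourfold increase in twelve months, which is why an imperfect benchmark trend can still signal rapidly growing security risk.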
Meanwhile, a study by Palo Alto Networks concludes that these models have been deliberately restricted from general use in an attempt to give defenders time to find and fix vulnerabilities before attackers can exploit them.
Additional reporting by IAPP.