AISI publishes second annual International AI Safety Report
03/02/2026 | AI Security Institute
The second annual International AI Safety Report, published by the AI Security Institute (AISI), provides a comprehensive scientific review of the capabilities and risks associated with general-purpose artificial intelligence (AI) systems. The report, compiled with input from over 100 independent experts from 30 countries and major international organisations, aims to help policymakers address the evidence dilemma: the challenge of regulating rapidly advancing technology when conclusive data on its long-term impacts is still emerging.
While the report acknowledges significant benefits in sectors such as healthcare and education, it primarily focuses on three categories of risk: malicious use, malfunctions, and systemic disruption.
Evidence of malicious use is growing, with documented cases of AI being used for fraud, the creation of non-consensual sexual images, and sophisticated cyberattacks. AI agents have shown a high proficiency in identifying software vulnerabilities, though it remains uncertain whether attackers or defenders will ultimately gain the upper hand.
Concerning malfunctions, the report highlights persistent reliability issues, such as the fabrication of information and the production of flawed code. Autonomous AI agents present higher risks as they operate with less human intervention. While current systems do not yet pose a threat of total loss of control, researchers have noted that models are increasingly able to distinguish between test environments and real-world deployment, potentially allowing dangerous capabilities to remain undetected during evaluations.
Among systemic risks, the report points to significant uncertainty over labour markets. While overall employment remains stable, demand for early-career workers in AI-exposed roles such as writing is declining. Furthermore, the report warns of risks to human autonomy, noting that over-reliance on AI can weaken critical thinking and foster automation bias.
The AISI recommends adopting a "defence-in-depth" approach, layering multiple technical and institutional safeguards, as essential to robust risk management. It also highlights the unique challenges of open-weight models, which cannot be recalled once released and are more susceptible to the removal of safety filters. Ultimately, the report concludes that societal resilience, including strengthened critical infrastructure and AI-detection tools, is necessary to absorb the shocks of inevitable AI-related incidents.