ORG warns Home Office use of AI in asylum cases likely unlawful
16/03/2026 | Open Rights Group
A legal opinion published by the Open Rights Group (ORG) suggests the Home Office may be acting unlawfully by failing to inform asylum applicants that artificial intelligence (AI) is being used in their assessments. The department currently employs several generative AI tools, including a system based on ChatGPT-4, to summarise asylum interview transcripts and search internal policy documents. Legal experts argue that because these tools generate new text rather than simply indexing existing data, their undisclosed use breaches procedural fairness, data protection law, and the government’s own AI Playbook standards.
Furthermore, the opinion highlights significant accuracy concerns, citing Home Office evaluations in which 9% of AI-generated interview summaries were so flawed that they had to be removed from trials, while 5% of staff using the policy search assistant reported a lack of confidence in its accuracy. Barristers warn that these inaccuracies create a substantial risk of life-changing decisions being based on material errors of fact, such as the omission of crucial evidence or the fabrication of details.
By failing to disclose AI involvement, the Home Office allegedly prevents vulnerable applicants from identifying and correcting potential hallucinations or biases in their records. The legal analysis argues that the department has failed to meet its obligations regarding transparency and meaningful human control, and that this lack of openness contradicts international ethical principles on human rights and privacy that the UK has committed to upholding. The findings therefore open the door to legal challenges from applicants who believe AI tools have unfairly influenced the determination of their protection status in the UK.
What is this page?
You are reading a summary article on the Privacy Newsfeed, a free resource for DPOs and other professionals with privacy or data protection responsibilities, helping them stay informed of industry news in one place. The information here is a brief snippet relating to a single piece of original content, or to several articles on a common topic or thread. The main contributor is listed in the top left-hand corner, just beneath the article title.
The Privacy Newsfeed monitors over 300 global publications, of which more than 6,250 summary articles have been posted to the online archive dating back to the beginning of 2020. A weekly roundup is available by email every Friday.