BSI research warns of emerging AI governance gap

28/10/2025 | BSI

New research from the British Standards Institution (BSI) warns of an emerging artificial intelligence (AI) governance gap as businesses increase investment without adequate oversight. The global study, based on an AI-assisted analysis of over 100 annual reports from multinationals and two global surveys of more than 850 senior business leaders, highlights the disparity between aggressive investment in AI and the implementation of safeguards.

The research revealed that 62% of business leaders expect to increase AI investment in the next year, with 61% focusing on boosting productivity and 49% on reducing costs. However, the study found a striking absence of governance processes across the surveyed organisations. Fewer than a quarter of organisations (24%) reported having an AI governance programme, although this figure rose to 34% among large enterprises. While 47% report that AI use is controlled by formal processes, only 34% use voluntary codes of practice, 24% monitor employee use of AI tools, 30% have processes to assess and mitigate AI-related risks, and just 22% restrict employees from using unauthorised AI.

In relation to data collection, only 28% of leaders know where their business sources the data used to train or deploy its AI tools. Risk management also appears to be declining: only 49% of executives say AI-related risks are included within broader compliance obligations, down from 60% six months earlier. There is also limited focus on managing errors, with only 32% of organisations having a process for logging issues or inaccuracies with AI tools and only 29% having AI incident reporting processes.

The report also indicates an underemphasis on human capital: 56% of leaders report confidence that entry-level employees have the necessary AI skills, yet only 34% have implemented a dedicated AI learning and development programme.

In related news, an MIT Sloan Management Review article (£) highlights the challenges of implementing responsible AI (RAI) practices across industries. While companies are increasingly adopting RAI principles such as fairness, accountability, and transparency, actual implementation often falls short, leading to biased outcomes and user backlash. In response, governments have introduced new laws and regulations, such as the EU Artificial Intelligence Act (AI Act), pressing organisations to enhance transparency, safety, and human oversight. However, progress is inconsistent, and companies risk embedding errors or biases into their processes, potentially causing serious ethical violations.

£ - This article requires a subscription. 


Training Announcement: Freevacy offers a range of independently recognised professional AI governance qualifications and AI Literacy short courses that enable specialist teams to implement robust oversight, benchmark AI governance maturity, and establish a responsible-by-design approach across the entire AI lifecycle. Find out more.

Read Full Story
