More Britons view AI as a risk than an opportunity
22/09/2025 | The Guardian
A new Tony Blair Institute (TBI) survey of more than 3,700 adults has found that 38% of Britons view artificial intelligence (AI) as a risk to the economy, compared with just 20% who see it as an opportunity. A lack of public trust was identified as the most significant barrier to adoption.
The TBI warned that these findings pose a threat to the government's ambition for the UK to become an AI superpower.
According to Jakob Mökander, Director of Science and Technology Policy at the TBI, the UK's most realistic path to AI superpower status lies in becoming a world-leading adopter of such technologies rather than a developer. He explained that this goal cannot be achieved unless the government builds broad public trust in AI.
The survey also found a notable divergence between AI users and non-users. More than half of people who have not used AI see it as a risk, whereas only a quarter of regular users view it as a threat.
The TBI's report recommended five ways to build public trust: increasing public use of AI, highlighting helpful AI use cases, measuring AI's beneficial impacts, implementing responsible regulation, and launching programmes to build AI skills.
Meanwhile, a Financial Times (£) analysis of hundreds of corporate filings and executive transcripts has found that while most S&P 500 companies are discussing AI, few can articulate how the technology is benefiting their business. The analysis reveals a stark contrast between executives' public statements and the more cautious tone of their regulatory filings.
While the vast majority of executives' public comments on AI were wholly positive, the filings paint a more sober picture: many companies appear to be driven by a "fear of missing out" rather than a clear strategy. The filings also reveal a growing list of concerns, with cybersecurity the most commonly cited risk, mentioned by more than half of S&P 500 companies in 2024. Companies worry that AI could cause security incidents, including the compromise of sensitive data.
The second most common concern is that AI implementation will fail, which appears to be a reasonable worry: a recent study found that 95% of generative AI pilots in the workplace failed, often because current AI tools lack features such as long-term memory that would allow them to be integrated easily into existing systems.
Legal and regulatory risks are also a significant concern, with companies expressing worry over potential lawsuits for using copyrighted material to train their AI models. The analysis concludes that companies appear to have a clearer understanding of the potential problems with AI than they do of the upsides, with some even stating there is no guarantee that AI initiatives will be successful or profitable.
£ - The Financial Times article requires a subscription.