Whittaker raises privacy concerns over AI agents

09/09/2025 | The Economist

An article by Signal president Meredith Whittaker for The Economist warns of significant privacy and security risks associated with the rise of AI agents, which are being integrated into the core of operating systems (OS) by companies like Apple, Google, and Microsoft. Whittaker argues that the pursuit of this technology, driven by a need for profitability, is causing companies to discard basic lessons in digital security.

Whittaker explains that, to function, AI agents require near-total access to a user's digital life, including browser history, private messages, and location data. This puts user privacy in direct tension with the broad access agents need to sensitive information. The harms are already evident: researchers have demonstrated how AI agents can be coaxed into revealing confidential data or tricked by hackers into performing harmful actions. One example cited involved Apple's Siri, which was found to be transmitting voice transcripts of WhatsApp messages to Apple's servers, undermining the app's end-to-end encryption guarantee.

Although time is running short, Whittaker argues it is not too late and calls for a fundamental shift in the approach to agentic AI development. Privacy must be the default, she states, with control remaining in the hands of application developers. This should include a straightforward, well-documented mechanism at the OS level that allows developers to designate sensitive applications as off-limits to agents. In addition, Whittaker calls for radical transparency from OS vendors, who have an obligation to be clear about what data their agents access, how it is used, and the security measures in place to protect it.

£ - This article requires a subscription.



