According to PwC's 2024 Global Digital Trust Insights survey, the proportion of businesses that have experienced data breaches costing more than $1 million has risen significantly, from 27% to 36% year over year. The report, which surveyed 3,800 business and technology leaders across 71 countries, also found that companies have mixed feelings about the rise of Generative AI (GenAI), with many investing heavily in cybersecurity to protect against cyberattacks.
As the use of GenAI increases across industries, so does the need for AI governance, particularly concerning bias, discrimination, misinformation, and unethical uses. Proposed regulations such as the draft EU Artificial Intelligence Act promote ethical AI, and policymakers worldwide are working to set limits and increase accountability, recognising the potential for GenAI to affect society profoundly. Smart organisations should act quickly to stay ahead of AI regulation.

The survey indicates that 37% of respondents expect AI regulation to significantly affect future revenue growth, 75% anticipate significant compliance costs, and a further 39% say they will need to make substantial changes to comply with AI regulations. Despite the risks, enthusiasm for GenAI is high: 63% of executive respondents said they would launch GenAI tools in the workplace without internal controls for data quality and governance. Without such governance, however, GenAI adoption poses privacy risks, and without proper training, people may base recommendations on invented data or biased prompts. It is therefore essential to lay the foundation for trust in GenAI by addressing data governance and security concerns. Most respondents (77%) intend to use GenAI ethically and responsibly.
In conclusion, the report finds that good governance is vital to ensure GenAI is designed, functions, and produces outputs in a trustworthy manner. While AI is often regarded as a function of technology, human supervision and intervention remain essential to its ideal use. The promise of GenAI ultimately rests on people; organisations should therefore invest in the stewards of this technology.
Meanwhile, a separate Generative AI survey of 2,300 digital trust professionals conducted by ISACA revealed an alarming trend: fewer than one-third of organisations consider AI risk a top priority. This is despite 79% of the professionals surveyed believing that adversaries are using AI just as successfully as digital trust professionals are. The survey identified the top five AI-related risks as misinformation/disinformation (77%), privacy violations (68%), social engineering (63%), loss of intellectual property (58%), and job displacement (35%). ISACA highlighted that these risks have real-world implications and can significantly affect an organisation's security posture.
A related article on Medium featured excerpts from the book 60 Leaders on AI (2022) on the growing importance of AI in business and the need for organisational changes and new leadership roles. The experts were asked about the ideal organisational structure to support AI and whether a Chief AI Officer (CAIO) role is required to lead digital and AI transformation.
What is this page?
You are reading a summary article on the Privacy Newsfeed, a free resource for DPOs and other professionals with privacy or data protection responsibilities, helping them stay informed of industry news in one place. Each entry is a brief snippet relating to a single piece of original content, or to several articles on a common topic or thread. The main contributor is listed in the top left-hand corner, just beneath the article title.
The Privacy Newsfeed monitors over 300 global publications, from which more than 4,350 summary articles have been posted to the online archive dating back to the beginning of 2020. A weekly roundup is available by email every Friday.