A new report published by the Organisation for Economic Co-operation and Development (OECD) provides a comprehensive definition of what constitutes an artificial intelligence (AI) incident or hazard and clarifies related terminology.
The report defines an AI incident as an event where "the development or use of an AI system results in actual harm," and an AI hazard as an event where "the development or use of an AI system is potentially harmful."
In related news, The Guardian reports that the Centre for Long-Term Resilience (CLTR) recommends that the next UK government establish a system for logging AI-related incidents in public services and consider creating a central hub for collecting such data. According to the CLTR report, an incident reporting regime similar to that operated by the Air Accidents Investigation Branch is vital for the successful deployment of AI. The report cites 10,000 AI "safety incidents" recorded by news outlets since 2014 and stresses the importance of incident reporting in managing risks.
What is this page?
You are reading a summary article on the Privacy Newsfeed, a free resource for DPOs and other professionals with privacy or data protection responsibilities, helping them stay informed of industry news in one place. The information here is a brief snippet relating to a single piece of original content, or to several articles on a common topic or thread. The main contributor is listed in the top left-hand corner, just beneath the article title.
The Privacy Newsfeed monitors over 300 global publications, from which more than 4,350 summary articles have been posted to the online archive dating back to the beginning of 2020. A weekly roundup is available by email every Friday.