The US Department of Commerce has said it is considering introducing detailed reporting requirements for advanced artificial intelligence (AI) developers and cloud computing providers to ensure the technologies are safe and resilient against cyberattacks. The proposal includes mandatory reporting to the federal government on the development activities of "frontier" AI models and computing clusters, as well as reporting on cybersecurity measures and outcomes from red-teaming efforts. The aim is to test for dangerous capabilities and minimise the risk of misuse by foreign adversaries or non-state actors. A spokesperson for the US government said the reporting of such information is vital for ensuring that these technologies meet stringent safety and reliability standards and can withstand cyber threats.
In related news, Raconteur has posted an article concerning the security threat posed by generative AI and how corporate users can best protect the sensitive information they hold.
What is this page?
You are reading a summary article on the Privacy Newsfeed, a free resource for DPOs and other professionals with privacy or data protection responsibilities, helping them stay informed of industry news all in one place. The information here is a brief snippet relating to a single piece of original content, or to several articles on a common topic or thread. The main contributor is listed in the top left-hand corner, just beneath the article title.
The Privacy Newsfeed monitors over 300 global publications, from which more than 4,350 summary articles have been posted to the online archive dating back to the beginning of 2020. A weekly roundup is available by email every Friday.