The US National Institute of Standards and Technology (NIST) will publish version 1.0 of its Artificial Intelligence Risk Management Framework (AI RMF) on 26 January. According to NIST, the voluntary framework aims to "improve the ability to incorporate trustworthiness considerations into the design, development, use, and evaluation" of AI products, services and systems. The AI RMF is accompanied by an AI RMF Playbook containing suggested actions, references and documentation guidance.
UPDATE (26 January 2023): NIST confirmed the publication of its Artificial Intelligence Risk Management Framework. In its announcement, NIST said the voluntary framework will help the private and public sectors "adapt to the AI landscape as technologies continue to develop, and to be used by organizations in varying degrees and capacities."
What is this page?
You are reading a summary article on the Privacy Newsfeed, a free resource for DPOs and other professionals with privacy or data protection responsibilities, helping them stay informed of industry news all in one place. The information here is a brief snippet relating to a single piece of original content, or to several articles on a common topic or thread. The main contributor is listed in the top left-hand corner, just beneath the article title.
The Privacy Newsfeed monitors over 300 global publications, from which more than 4,350 summary articles have been posted to the online archive, dating back to the beginning of 2020. A weekly roundup is available by email every Friday.