NIST proposes solution to ethical AI development dilemma

16/02/2024 | NIST

A team of researchers at the US National Institute of Standards and Technology (NIST) has proposed a solution to ensure that artificial intelligence (AI) systems are trained on data gathered in line with ethical principles. The team suggests applying the same fundamental principles that scientists have used for decades to safeguard human subjects in research. These principles are based on the core ideas of the 1979 Belmont Report: respect for persons, beneficence and justice. The researchers believe applying them to AI would ensure transparency for research participants whose data may be used to train AI systems. Kristen Greene, a NIST social scientist and one of the paper's authors, stated that there is no need to reinvent the wheel, as the existing principles of human subjects research can guide AI research.


What is this page?

You are reading a summary article on the Privacy Newsfeed, a free resource for DPOs and other professionals with privacy or data protection responsibilities, helping them stay informed of industry news in one place. The information here is a brief snippet relating to a single piece of original content, or to several articles on a common topic or thread. The main contributor is listed in the top left-hand corner, just beneath the article title.

The Privacy Newsfeed monitors over 300 global publications; more than 4,350 summary articles have been posted to the online archive, dating back to the beginning of 2020. A weekly roundup is available by email every Friday.
