An article in New Scientist highlights an inherent characteristic of the way large language models work: they are unable to forget what they have learned. As a result, computer scientists are working to develop "machine unlearning" techniques to teach AIs to forget. While this is a difficult task, the work could be critical in addressing concerns over privacy and misinformation. Furthermore, engineering AIs to forget may be necessary if we want to build models that learn and think like humans.
£ – This article requires a subscription.
What is this page?
You are reading a summary article on the Privacy Newsfeed, a free resource for DPOs and other professionals with privacy or data protection responsibilities, helping them stay informed of industry news in one place. The information here is a brief snippet relating to a single piece of original content or to several articles on a common topic or thread. The main contributor is listed in the top left-hand corner, just beneath the article title.
The Privacy Newsfeed monitors over 300 global publications, and more than 4,350 summary articles have been posted to the online archive, dating back to the beginning of 2020. A weekly roundup is available by email every Friday.