Former OpenAI employee raises concerns over safety

20/05/2024 | The Guardian

A former senior employee at OpenAI, Jan Leike, has raised concerns about the company's focus on "shiny products" over safety. As the co-head of superalignment at OpenAI, Leike was responsible for ensuring that powerful artificial intelligence (AI) systems adhered to human values and aims. His departure, following the launch of OpenAI's latest AI model, GPT-4o, has sparked discussions about the company's safety culture and processes. This comes ahead of the second global AI summit in Seoul, where oversight of AI technology will be a key topic of discussion.


What is this page?

You are reading a summary article on the Privacy Newsfeed, a free resource for DPOs and other professionals with privacy or data protection responsibilities, helping them stay informed of industry news in one place. Each entry is a brief snippet relating to a single piece of original content, or to several articles on a common topic or thread. The main contributor is listed in the top left-hand corner, just beneath the article title.

The Privacy Newsfeed monitors over 300 global publications, of which more than 4,350 summary articles have been posted to the online archive dating back to the beginning of 2020. A weekly roundup is available by email every Friday.
