The UK is planning to announce an international advisory group on artificial intelligence at the AI safety summit next month as ministers seek to carve out a global approach to tackling the risks associated with the technology. People briefed on the government’s thinking said the UK aims to launch an international group at the summit to advance knowledge of the technology’s capabilities and risks. The new group would be loosely modelled on the UN Intergovernmental Panel on Climate Change.
One official said the government’s plans for the advisory group would comprise a rotating cast of academics and experts drawn from a range of geographies, who would be likely to write an annual report on cutting-edge developments in AI. The group would be distinct from a planned UK AI safety institute, which would evaluate national security risks associated with machine learning models and whose creation is expected to be announced in the coming weeks. The government said discussions at the AI summit would “involve exploring a forward process for international collaboration on frontier AI safety, including how best to support national and international frameworks”. Frontier AI is a sophisticated form of the technology that includes large language models capable of generating humanlike text, images and code.
In related news, the Financial Times (£) reports that China has agreed to attend the summit being held at Bletchley Park next month. The summit is intended to bring together policymakers and tech executives from around the world to discuss an international approach to governance of the rapidly developing technology. Prime Minister Rishi Sunak has hailed the development as an important opportunity for global collaboration. Two Chinese government officials have confirmed that at least one representative will attend.
Elsewhere, the South China Morning Post reports that the Cyberspace Administration of China (CAC) has announced the release of the Global AI Governance Initiative. The new AI framework aims to ensure equal rights to nations developing AI, regardless of their size, strength or social system. The initiative opposes drawing ideological lines or forming exclusive groups to obstruct other countries from developing AI.
Meanwhile, the Financial Times (£) reports that Yann LeCun, Meta's chief AI scientist, has cautioned against regulating AI too soon, arguing that doing so would only strengthen the hold of big tech companies and stifle competition. LeCun, one of the world's leading AI researchers, said that regulating AI research and development is counterproductive, and that tech companies are seeking regulatory capture under the guise of AI safety. He believes demands to police AI stem from the superiority complex of some leading tech companies, which think they are the only ones capable of developing AI safely. He likened regulating leading-edge AI models today to regulating the jet airline industry in 1925, before jet aircraft had even been invented. He added that the debate on existential risk is premature until the industry has a system design that can rival even a cat in terms of learning capability.
What is this page?
You are reading a summary article on the Privacy Newsfeed, a free resource for DPOs and other professionals with privacy or data protection responsibilities, helping them stay informed of industry news all in one place. The information here is a brief snippet relating to a single piece of original content, or to several articles on a common topic or thread. The main contributor is listed in the top left-hand corner, just beneath the article title.
The Privacy Newsfeed monitors over 300 global publications, of which more than 4,350 summary articles have been posted to the online archive dating back to the beginning of 2020. A weekly roundup is available by email every Friday.