OpenAI leaders call for regulation of superintelligent AI

23/05/2023 | The Guardian

OpenAI's leadership has called for the regulation of superintelligent artificial intelligence (AI) systems to mitigate the risk of creating something that could destroy humanity. They called for an international regulator to inspect systems, conduct audits, and test for compliance with safety standards.

Within the next decade, AI systems are predicted to exceed expert skill levels in most domains and to carry out as much productive activity as one of today's largest corporations. Superintelligence will be more powerful than other technologies humanity has contended with in the past, carrying both significant upsides and downsides.

To ensure safety, there needs to be coordination among companies working on AI research, either through government-led projects or collective agreements.


What is this page?

You are reading a summary article on the Privacy Newsfeed, a free resource for DPOs and other professionals with privacy or data protection responsibilities, helping them stay informed of industry news in one place. The information here is a brief snippet relating to a single piece of original content or to several articles on a common topic or thread. The main contributor is listed in the top left-hand corner, just beneath the article title.

The Privacy Newsfeed monitors over 300 global publications, from which more than 5,750 summary articles have been posted to the online archive dating back to the beginning of 2020. A weekly roundup is available by email every Friday.
