A.I. Needs an International Watchdog, ChatGPT Creators Say

The Latest

The leaders of OpenAI, the artificial intelligence research lab that developed the chatbot ChatGPT, have called for regulation of “superintelligent” A.I. technology, saying it “will be more powerful than other technologies humanity has had to contend with in the past.”

An international watchdog, similar to the International Atomic Energy Agency, which promotes the peaceful use of nuclear energy, is needed to regulate the risks of A.I. systems, OpenAI’s founders Greg Brockman and Ilya Sutskever and its chief executive, Sam Altman, wrote in a note posted Monday on the company’s website.

“Given the possibility of existential risk, we can’t just be reactive,” they wrote.

Why It Matters: Concerns over powerful A.I. systems are growing.

Mr. Altman appeared before Congress on May 16 to implore lawmakers to regulate artificial intelligence. Congressional leaders shared their worries about the threats that A.I. could pose, including the spread of misinformation and privacy violations.

“I think if this technology goes wrong, it can go quite wrong. And we want to be vocal about that,” Mr. Altman said in his testimony before members of a Senate subcommittee.

In March, more than 1,000 technology leaders and researchers, including Elon Musk, the chief executive of Tesla and Twitter, called for a moratorium on the development of the most advanced A.I. systems, warning in an open letter that the tools presented “profound risks to society and humanity.”

In their latest note, the OpenAI leaders said that “it’s conceivable that within the next 10 years, A.I. systems will exceed expert skill level in most domains, and carry out as much productive activity as one of today’s largest corporations.”

Background: Tech giants are competing for dominance in a fast-growing market.

The latest A.I. tools could upend the economics of the internet, turning today’s tech giants into has-beens and creating the industry’s next powerhouses.

Tech companies have spent billions of dollars on A.I. amid rising concerns about its potential to match human reasoning and destroy jobs. Goldman Sachs recently estimated that A.I. could expose 300 million full-time jobs to automation.

BuzzFeed just introduced a chatbot that offers recipe recommendations.

What’s Next: Congress is trying to keep up.

At last week’s hearing, Senator Richard Blumenthal, Democrat of Connecticut and chairman of the Senate panel, acknowledged that Congress had failed to keep up with new technologies. He added that the hearing was the first in a series intended to examine the potential of A.I. and eventually “write the rules” for it.

“Our goal is to demystify and hold accountable those new technologies to avoid some of the mistakes of the past,” he said.

But over the years, partisan squabbles and intense lobbying by the tech industry have stalled dozens of bills intended to strengthen privacy, speech and safety rules.

Gregory Schmidt covers breaking news and real estate and is the editor of the Square Feet column. @GregoryNYC
