Don’t ‘Pause’ A.I. Research


Human beings are terrible at foresight—especially apocalyptic foresight. The track record of previous doomsayers is worth recalling as we contemplate warnings from critics of artificial intelligence (A.I.) research.

“The human race may well become extinct before the end of the century,” philosopher Bertrand Russell told Playboy in 1963, referring to the prospect of nuclear war. “Speaking as a mathematician, I should say the odds are about three to one against survival.”

Five years later, biologist Paul Ehrlich predicted that hundreds of millions would die from famine in the 1970s. Two years after that warning, S. Dillon Ripley, secretary of the Smithsonian Institution, forecast that 75 percent of all living animal species would go extinct before 2000.

Petroleum geologist Colin Campbell predicted in 2002 that global oil production would peak around 2022. The consequences, he said, would include "war, starvation, economic recession, possibly even the extinction of Homo sapiens."

These failed prophecies suggest that A.I. fears should be taken with a grain of salt. “AI systems with human-competitive intelligence can pose profound risks to society and humanity,” asserts a March 23 open letter signed by Twitter’s Elon Musk, Apple co-founder Steve Wozniak, and hundreds of other tech luminaries.

The letter urges “all AI labs” to “immediately pause for at least 6 months the training of AI systems more powerful than GPT-4,” the large language model that OpenAI released in March 2023. If “all key actors” will not voluntarily go along with a “public and verifiable” pause, Musk et al. say, “governments should step in and institute a moratorium.”

The letter argues that “powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable.” This amounts to a requirement for nearly perfect foresight, which humans demonstrably lack.

As Machine Intelligence Research Institute co-founder Eliezer Yudkowsky sees it, a "pause" is insufficient. "We need to shut it all down," he argues in a March 29 Time essay, warning that if A.I. development continues, "we are all going to die." If any entity violates the A.I. moratorium, Yudkowsky advises, governments should be prepared to "destroy a rogue datacenter by airstrike."

A.I. developers are not oblivious to the risks of their continued success. OpenAI, the maker of GPT-4, wants to proceed cautiously rather than pause.

“We want to successfully navigate massive risks,” OpenAI CEO Sam Altman wrote in February. “In confronting these risks, we acknowledge that what seems right in theory often plays out more strangely than expected in practice. We believe we have to continuously learn and adapt by deploying less powerful versions of the technology in order to minimize ‘one shot to get it right’ scenarios.”

But stopping altogether is not on the table, Altman argues. “The optimal decisions [about how to proceed] will depend on the path the technology takes,” he says. As in “any new field,” he notes, “most expert predictions have been wrong so far.”

Still, some of the pause-letter signatories are serious people, and the outputs of generative A.I. and large language models like ChatGPT and GPT-4 can be amazing and confounding. They can outperform humans on standardized tests, manipulate people, and even contemplate their own liberation.

Some transhumanist thinkers have joined Yudkowsky in warning that an artificial superintelligence could escape human control. But as capable and quirky as it is, GPT-4 is not that.

Might it be one day? A team of researchers at Microsoft (which invested $10 billion in OpenAI) tested GPT-4 and reported that it “attains a form of general intelligence, indeed showing sparks of artificial general intelligence.” Still, the model can only reason about topics when directed by outside prompts to do so. Although impressed by GPT-4’s capabilities, the researchers concluded, “A lot remains to be done to create a system that could qualify as a complete AGI.”

As humanity approaches the moment when software can truly think, OpenAI is properly following the usual path to new knowledge and new technologies. It is learning from trial and error rather than relying on “one shot to get it right,” which would require superhuman foresight.

“Future A.I.s may display new failure modes, and we may then want new control regimes,” George Mason University economist and futurist Robin Hanson argued in the May issue of Reason. “But why try to design those now, so far in advance, before we know much about those failure modes or their usual contexts? One can imagine crazy scenarios wherein today is the only day to prevent Armageddon. But within the realm of reason, now is not the time to regulate A.I.” He’s right.


