Artificial Intelligence in 2030

At the DealBook Summit, ten experts in artificial intelligence discussed the greatest opportunities and risks posed by the technology.

Modern artificial intelligence is expected to be one of the most consequential technologies in history. But there is a big debate over what those consequences will be: Will the technology power an age of prosperity, in which humans work less? Will it be used to wipe out humanity?

In a discussion at the DealBook Summit moderated by Kevin Roose, a technology columnist for The Times and a co-host of the Times tech podcast, "Hard Fork," 10 experts discussed the greatest opportunities and risks. Here's what they said.

The opportunities

In a live poll, seven of the experts indicated they thought there was a 50 percent chance or greater that artificial general intelligence -- the point at which A.I. can do everything a human brain can do -- would be built before 2030. But most of the potential opportunities experts pointed out could materialize well before then. Josh Woodward, vice president of Google Labs, said A.I. could help humans create in different mediums, for example.

Peter Lee, the president of Microsoft Research, pointed out a wide range of potential applications:

"We might be able to do things like drastically speed up drug discovery or find targets for drugs that are currently considered undruggable. Or we could predict severe weather events days or even a couple of weeks in advance. Even mundane things like, I don't know, making your vegan food taste better or your skin tone fresher looking."

A.I. can personalize lesson plans for students, said Sarah Guo, the founder of Conviction, a venture capital firm. She added that a similar approach could make everything from specialized medical services to legal advice more accessible.

The technology could also have a broader impact on daily life, Guo said. "I think we're going to continue to have a market economy, and people will see a significant part of their value to society and their identity be determined by their work," she said. But she added that expectations for what "an improved speed of scientific discovery and cheaper health care and education means in the world should be a little bit more positive." Of her own expectations, she said: "In a future where you have a high quality of life, where there is enough productivity, where you can do less work, and the work you do is what you choose, I think people learn and entertain and create."

The risks

Some of the top figures in A.I. have warned of its potential risks. Geoffrey Hinton, a former Google researcher who won the Nobel Prize this year, has pointed to potential hazards including misinformation and truly autonomous weapons. More than 1,000 A.I. leaders and researchers signed an open letter last year saying that A.I. tools posed "profound risks to society and humanity" and urged development labs to pause work on the most advanced A.I. systems. In a separate letter last year, leaders from OpenAI, Google DeepMind, Anthropic and other A.I. labs signed a one-sentence statement: "Mitigating the risk of extinction from A.I. should be a global priority alongside other societal-scale risks such as pandemics and nuclear war."

Geopolitics

Dan Hendrycks, director of the Center for A.I. Safety, which released the statement signed by OpenAI, DeepMind and Anthropic executives, said his top fear about A.I. had changed. It used to be bioweapons, he said, but A.I. chatbots now have improved safety mechanisms that mostly stop them from providing instructions to make weapons. Now his biggest fear is geopolitics. "For instance, if China invades Taiwan later this decade, that's where we get all of our A.I. chips from," he said. "So that would be a fairly plausible world where the West would summarily fall behind."

Some industry leaders, including Alexander Karp, the chief executive of Palantir Technologies, have argued that the U.S. needs a program to accelerate the development of A.I. technology, similar to the Manhattan Project it established to develop nuclear weapons, to keep the country from falling behind the rest of the world. At the DealBook Summit, Marc Raibert, the founder of the robotics company Boston Dynamics, disagreed. "It seems to me we have about three or four or five of them already if you look at the big companies who are investing 10s or 20s or 30s of billions of dollars in it," he said, referring to the handful of companies building generative A.I. models, including Meta, Google and OpenAI, which is spending more than $5.4 billion a year to develop A.I.

Eugenia Kuyda, the founder of Replika, an A.I. companion company, said that if the U.S. government wanted to accelerate A.I. research, it should start by making it easier for A.I. scientists to immigrate.

"It's almost impossible to bring people here," she said, adding of A.I. scientists, "it's actually much harder to get a visa if you're coming with one of those degrees."

Economic insecurity

In another live poll, six of the 10 panelists indicated they believed A.I. would create more jobs than it destroys. "A lens that I use to think about the A.I. revolution is that it will play out like the Industrial Revolution but around 10 times faster," said Ajeya Cotra, who leads grant making for research on potential risks posed by advanced A.I. at Open Philanthropy.

But the vision of widespread economic prosperity that some think A.I. puts within reach isn't a given. "Things can go in different directions," said Tim Wu, a professor at Columbia Law School and former special assistant to the president for technology and competition policy in the Biden administration. "The plow made a lot of farmers able to be self-sustaining. But something like the cotton gin reinforced the plantation model and led a lot of workers who were enslaved to terrible life conditions."

Wu said the A.I. revolution could lead to economic insecurity that has bigger geopolitical effects, similar to how the Industrial Revolution arguably led to World War I or World War II.

Empowering bad actors

Cotra refers to a future in which A.I. makes most of the decisions as "the obsolescence regime." In this future, she said, "to refuse to use A.I. or even to spend too much human time double checking its decisions would be like insisting on using pen and paper or not using electricity today."

There may be danger in giving machines too much control, she argued:

I think a big reason that we don't have nasty engineered pandemics every year is that there are some tens of thousands of human experts that could create such pandemics, but they don't want to. They choose not to, because they're not sociopathic. But in this future world that I'm imagining, you would have expertise at the push of a button that could potentially be perfectly loyal to any individual who has the power, the money to buy computers to run those experts on. And I think about that in the context of, say, democracy.

President Trump, in his previous term, tried to push the limits in a bunch of different ways, tried to tell people underneath him to do things that were norm violating or illegal, and they pushed back. If all those people under him were instead 10 times as brilliant, but perfectly loyal -- programmed to be perfectly loyal -- that could be a destabilizing situation. Our society is sort of designed around some give and some autonomy from humans.

Another fear? That the machines could go rogue. If "they're running the economy," Cotra said, "they're running our militaries, our governments, and if they were to kind of go awry, we wouldn't have very many options with which to stop that transition."

"A.I. slop"

One immediate fear cited by Hinton, the Nobel Prize-winning researcher, is that A.I. will flood the internet with so much false content that most people will "not be able to know what is true anymore."

At the DealBook Summit, Woodward, of Google Labs, said that he thought "A.I. slop" could increase the value of things that are created by humans. "Even labeling it, marking it as human created or other things don't seem far-fetched to me," he said.

"The value of taste, I think will go up," he added. "So taste from a user perspective, but also how companies like Google and others rank content and surface discover retrieve it."

