Technologists: Smarter-Than-Humans A.I. Will Likely Be Here by 2030
Most members of a DealBook Summit panel described immense benefits from artificial intelligence and saw its risks as manageable.
This article is part of our special section on the DealBook Summit that included business and policy leaders from around the world.
Powerful technologies have always been double-edged swords. That’s been true since fire; it could cook your food and keep you warm, but, out of control, burn down your hut.
Modern artificial intelligence is poised to take the mixed-blessing principle to new heights, a technology moving faster and further than anything seen before. That was the prevailing view of 10 leading technologists and tech policy experts in a discussion on Dec. 4 at the DealBook Summit in New York, led by Kevin Roose, a columnist for The New York Times and co-host of the “Hard Fork” podcast.
Big tech, venture capital, nonprofits and academia were all represented on the panel, and most of its members believe that artificial intelligence is advancing rapidly. To start off, Mr. Roose asked for a show of hands of those who agreed with the statement that there was a 50 percent chance or better that artificial general intelligence — a system with the ability to outperform human experts at virtually all cognitive tasks — would be achieved by 2030. Seven hands went up.
Peter Lee, the president of Microsoft’s research division, said he was excited by how the underlying mathematical models that had excelled at learning from human language to create chatbots like ChatGPT were “just as adept at learning from nature.”
That realization, he said, is leading labs and start-ups around the world to focus on applying A.I. to conquer big challenges in science — speeding up drug discovery, producing new materials and improving the prediction of severe weather events.
The most chilling vision of A.I. science turning against humanity has been that the technology would be used to produce bioweapons, like a new killer virus.
But Dan Hendrycks, director of the Center for A.I. Safety, said that threat worried him less than it did a year ago. The companies making powerful A.I. models, he said, have developed safeguards so it is more difficult for them to be used to produce bioweapons. With persistent vigilance, he said, “This may not be too much of a hazard.”
Sarah Guo, a founder of Conviction, a venture capital firm, agreed that biology and materials science were attractive targets for accelerating progress using A.I. But beyond that, she sees artificial intelligence as a “very democratizing technology,” by automating high-cost human expertise in fields like law, medicine and education to make those services more affordable and accessible.
In education, Ms. Guo said, research has long shown that the thing that delivers the biggest gains in student achievement is one-on-one tutoring. “What if you can give everybody a personalized tutor?” she asked. Or personalized medical advice that is as reliable as a human doctor? The potential for artificial intelligence to democratize the availability of expertise, she said, is “something we’re really inspired by.”
The prospect of A.I. automating broad areas of human expertise is what worries Ajeya Cotra, who studies the potential risks from A.I. at Open Philanthropy, a research foundation. Ms. Cotra described a future world in which “A.I. systems have made human expertise obsolete.” “Maybe you have a human C.E.O., but they’re a figurehead,” she observed. “They have to basically listen to their A.I. adviser that is able to keep up with what’s going on better than they can.”
Military campaigns, similarly, would be waged not only by A.I.-powered drones, but also by A.I. tacticians and A.I. generals. And in every field, there would be specialized A.I. agents — A.I. lawyers, A.I. policymakers, A.I. police and others, smarter and faster than their human counterparts.
To keep up, people and institutions would be forced to adopt A.I. Trying to opt out would be like “not using electricity today,” Ms. Cotra said. “You just can’t do it.”
Rana el Kaliouby, co-founder of Blue Tulip Ventures, which invests in A.I. start-ups, said she was optimistic about A.I. assistants’ helping people to lead healthier and more productive lives. But she is worried about the unchecked development of software designed as A.I. friends or A.I. companions, especially their impact on young people. Her 15-year-old son is “tech-forward,” she said. “But I really hope he doesn’t have an A.I. friend because I don’t know that we have the right guardrails.”
Eugenia Kuyda is the chief executive of Replika, which was founded eight years ago and essentially created the business of A.I. friends. The digital companions are intended to help people who “experience some sort of loneliness” and improve their mental health. Most people who have Replika friends are 35 or older, she said. The service does not allow anyone under 18 to join, and it has developed strict age-verification procedures over the years.
“That comes from my personal belief that we’re just not ready,” said Ms. Kuyda, who has two daughters. “We shouldn’t be experimenting on kids.”
Rising public anxiety about A.I. threatens to slow its adoption. Mr. Roose, the moderator, cited a survey by the Pew Research Center last year that found that 52 percent of Americans were more concerned than excited by A.I., up from 38 percent the previous year.
Predictions that millions of jobs may someday be lost to A.I. software and robots have fueled the worries of workers, the panelists agreed. But they also observed that concerns about the introduction of new technologies were typical. In the 19th century, there were fears that railways, moving fast, would cause people’s organs to collapse, for example.
Josh Woodward, vice president of Google Labs, said he thought A.I. adoption by businesses and individuals would be faster than it has been so far. But he said it is still very early for A.I. in the mainstream — about year two of a decade-long transition.
Mr. Woodward described the first wave of A.I. software as chatbot based. But increasingly, there will be A.I. apps that redefine the future of knowledge, at work and at home. “There are loads of ways creativity is going to be unlocked,” mostly by humans and A.I. technology working together, Mr. Woodward said.
The group also took up the geopolitics of A.I. There have been calls in some policy circles for the equivalent of an A.I. Manhattan Project to stay ahead of China. Mr. Roose asked if that was a good idea.
“It seems to me we have three or four or five of them already,” said Marc Raibert, executive director of The AI Institute and founder of the robotics company Boston Dynamics, pointing to the billions of dollars a few tech companies are pouring into A.I.
But Mr. Raibert did see a smaller, more focused role for the government — funding to “keep the embers alive for ideas” that are not yet commercial. The government did that effectively, he said, in robotics and early internet technology.
Ms. Kuyda had a straightforward solution to winning the global A.I. competition: Open the immigration window to any computer scientist, mathematician or physicist working in the field.
Visas to live and work in the United States can be difficult to obtain. Changing that should be a priority, Ms. Kuyda said. “I grew up in Russia,” she said. “We all want to live here. Most people do want to live here. Most researchers in China want to live here. That is the competitive advantage.”
But enlightened policy is often sidelined by political reality, said Tim Wu, a professor at Columbia Law School and a former White House special assistant for technology and competition policy in the Biden administration. He was skeptical of immigration reform to bring in more A.I. talent. “It’s so centered on the southern border,” Mr. Wu said of immigration policy. “That’s just one of the ways U.S. policy is screwed up.”
Takeaways
Artificial intelligence may deliver a century of scientific progress in a decade.
A.I. could make human expertise obsolete, raising the prospect of A.I. agents running companies, the government and the military.
The best strategy to stay ahead of China in the A.I. race? Immigration reform. Make it easier for foreign A.I. researchers to come to America. They want to.
Moderator: Kevin Roose, columnist, The New York Times
Participants: Dan Hendrycks, director, the Center for A.I. Safety; Jack Clark, co-founder and head of policy, Anthropic; Rana el Kaliouby, co-founder, Blue Tulip Ventures; Eugenia Kuyda, chief executive, Replika; Peter Lee, president, Microsoft research division; Josh Woodward, vice president, Google Labs; Sarah Guo, founder, Conviction; Ajeya Cotra, Open Philanthropy; Marc Raibert, executive director, The AI Institute, and founder, Boston Dynamics; Tim Wu, Julius Silver Professor of Law, Science and Technology, Columbia Law School
A version of this article appears in print on Dec. 12, 2024, Section F, Page 7 of the New York edition with the headline: Smarter-Than-Humans A.I. Is Near.