“We are approaching a time when machines will be able to outperform humans at almost any task,” said Moshe Vardi, director of the Institute for Information Technology at Rice University in Texas.
“I believe that society needs to confront this question before it is upon us: If machines are capable of doing almost any work humans can do, what will humans do?” he asked at a panel discussion on artificial intelligence at the annual meeting of the American Association for the Advancement of Science.
Vardi said there will always be some need for human work in the future, but robot replacements could drastically change the landscape, with no profession safe, and men and women equally affected.
“Can the global economy adapt to greater than 50 percent unemployment?” he asked.
– Transforming manufacturing –
Automation and robotization have already revolutionized the industrial sector over the last 40 years, raising productivity but cutting down on employment.
Job creation in manufacturing reached its peak in the United States in 1980 and has been on the decline ever since, accompanied by stagnating wages in the middle class, said Vardi.
Today there are more than 200,000 industrial robots in the country and their number continues to rise.
Research now focuses on the reasoning abilities of machines, and progress in this realm over the past 20 years has been spectacular, said Vardi.
“And there is every reason to believe the progress in the next 25 years will be equally dramatic,” he said.
By his calculation, 10 percent of jobs related to driving in the United States could disappear due to the rise of driverless cars in the coming 25 years.
According to Bart Selman, professor of computer science at Cornell University, “in the next two or three years, semi-autonomous or autonomous systems will march into our society.”
He listed self-driving cars and trucks, autonomous drones for surveillance and fully automatic trading systems, along with household robots and other kinds of “intelligent assistants” that make decisions on behalf of humans.
“We will be in sort of symbiosis with those machines and we will start to trust them and work with them,” he predicted.
“This is the concern because we don’t know the rate of growth of machine intelligence, how clever those machines will become.”
– Control? –
Will machines remain understandable to humans? Will humans be able to control them? Will they remain a benefit to humans, or become a source of harm?
These questions and more are being raised anew due to recent advances in robotic technology that allow machines to see and hear, almost like people.
Selman said US investment in artificial intelligence reached its highest level ever in 2015, some 50 years after the birth of the field.
Business giants like Google, Facebook, Microsoft and Tesla, run by billionaire Elon Musk, are at the head of the pack.
The Pentagon, too, has requested $19 billion for developing intelligent weapons systems.
What makes these new technologies worrisome is their ability to analyze data and execute complex tasks, raising the question of whether humans might one day lose control of the artificial intelligence they built, said Selman.
It’s a concern that some of the world’s great minds have raised too, including British astrophysicist Stephen Hawking, who warned in a BBC interview in 2014 that the consequences could be dire.
“It would take off on its own, and re-design itself at an ever increasing rate,” he said.
“Humans, who are limited by slow biological evolution, couldn’t compete, and would be superseded,” he added.
“The development of full artificial intelligence could spell the end of the human race.”
These questions have led scientists to call for the establishment of an ethical framework for the development of artificial intelligence, as well as safeguards for security in the years to come.
Last year Musk, who also heads SpaceX, donated $10 million to address such concerns, deeming artificial intelligence potentially more dangerous than nuclear weapons.
For Wendell Wallach, an ethicist at Yale University, such dangers require a global response.
He also called for a presidential order declaring that lethal autonomous weapons systems are in violation of international humanitarian law.
“The basic idea is that there is a need for concerted action to keep technology a good servant and not let it become a dangerous master.”