
AI could destroy humans, Stephen Hawking fears: Should you worry?

Superintelligent machines could destroy humankind, people such as physicist Stephen Hawking and Tesla Motors founder Elon Musk fear. Artificial intelligence experts say there are good reasons to pay attention and do something while there's still time.

Military, corporate leadership in artificial intelligence development raises concerns

Popular works of science fiction, such as the trailer for Terminator Genisys, envision that when machines become more intelligent than humans, they will destroy, enslave or assimilate us. (Paramount)

Machines turning on their creators has been a popular theme in books and movies for decades, but very serious people are starting to take the subject very seriously. Physicist Stephen Hawking says, "the development of full artificial intelligence could spell the end of the human race." Tesla Motors and SpaceX founder Elon Musk suggests that AI is probably "our biggest existential threat."

Artificial intelligence experts say there are good reasons to pay attention to the fears expressed by big minds like Hawking and Musk and to do something about it while there is still time.

Hawking made his most recent comments at the beginning of December, in response to a question about an upgrade to the technology he uses to communicate. He relies on the device because he has amyotrophic lateral sclerosis, a degenerative disease that affects his ability to move and speak.

Musk raised the alarm about artificial intelligence during the MIT Aeronautics and Astronautics department's Centennial Symposium in October, likening AI to "summoning the demon."

He had previously tweeted that AI was "potentially more dangerous than nukes."

Physicist and best-selling author Stephen Hawking thinks 'the development of full artificial intelligence could spell the end of the human race.' (Ted S. Warren/Associated Press)

The event that Hawking and Musk fear is the "singularity," the point when machines surpass humans in general intelligence, not just in beating us at tasks like playing chess or Jeopardy, as they already have.

Popular works of science fiction, from the latest Terminator trailer to the Matrix trilogy to Star Trek's Borg, envision that beyond that irreversible historic event, machines will destroy, enslave or assimilate us, says Canadian science fiction writer Robert J. Sawyer.

Sawyer has written about a different vision of life beyond the singularity, one in which machines and humans work together for their mutual benefit. But he also sits on a couple of committees at the Lifeboat Foundation, a non-profit group that looks at future threats to the existence of humanity, including those posed by the "possible misuse of powerful technologies" such as AI. He said Hawking and Musk have good reason to be concerned.

"By the point when you sit down in front of your computer and your computer says, 'Good morning, I'm in charge now,' it's too late." - Robert J. Sawyer, author of the WWW trilogy

"Once we no longer have the intellectual upper hand, then we quite literally, by definition, cannot outwit our successors. So unless we are absolutely sure that the machines we are building right now are not going to eventually become our new robot overlords, prudence is called for."

Alan Mackworth, who holds a Canada Research Chair in Artificial Intelligence at the University of British Columbia, thinks Hawking and Musk are being "a bit overdramatic," but are right to sound the alarm and spur public discussion.

He says AI is just coming out of science fiction and into the real world, in the form of technologies such as Google's self-driving cars, IBM's Jeopardy-winning computer Watson, and the increasing number of computers successfully posing as humans in the Turing test (which examines a machine's ability to exhibit intelligent behaviour that can't be distinguished from that of a human, such as having a random conversation with a person).
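To make the idea concrete, here is a minimal, purely illustrative Python sketch of the Turing test setup: a judge exchanges messages with two hidden respondents, one a scripted bot and one a human, and must guess which is which. The names and canned replies are invented for this example; real evaluations involve much longer, free-form conversations.

import random

def bot_reply(message: str) -> str:
    """A deliberately simple chatbot: canned replies keyed on the input."""
    canned = {
        "hello": "Hi there! How are you today?",
        "how are you": "I'm doing well, thanks for asking.",
    }
    for key, reply in canned.items():
        if key in message.lower():
            return reply
    return "That's interesting. Tell me more."

def human_reply(message: str) -> str:
    """The human respondent types an answer at the console."""
    return input(f"Human respondent, please answer: {message}\n> ")

def run_test() -> None:
    # Randomly assign the bot and the human to labels A and B so the
    # judge cannot rely on ordering.
    respondents = {"A": bot_reply, "B": human_reply}
    if random.random() < 0.5:
        respondents = {"A": human_reply, "B": bot_reply}

    for label, respond in respondents.items():
        answer = respond("What did you do this morning?")
        print(f"Respondent {label}: {answer}")

    guess = input("Judge: which respondent is the machine, A or B? ")
    actual = "A" if respondents["A"] is bot_reply else "B"
    print("Correct!" if guess.strip().upper() == actual else f"Wrong - it was {actual}.")

if __name__ == "__main__":
    run_test()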

Mackworth invented the first soccer-playing robots. He is now developing AI technology for motorized wheelchairs to help people with dementia get around. He says machines are still far from being able to take off on their own: "If you look at what you can currently do in robot and computer learning, it's classifying YouTube videos to see which one has a cat in it and which one doesn't have a cat in it."
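For readers curious what that kind of "cat or no cat" learning looks like in code, here is a minimal sketch using Python and scikit-learn. The feature vectors are synthetic stand-ins invented for this example; real systems learn from millions of video frames, so this is self-contained and runnable but nothing like production-scale training.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Pretend each video frame has been reduced to a 64-dimensional feature
# vector. "Cat" frames come from one distribution, "no cat" from another.
n = 500
cat_frames = rng.normal(loc=0.5, scale=1.0, size=(n, 64))
other_frames = rng.normal(loc=-0.5, scale=1.0, size=(n, 64))

X = np.vstack([cat_frames, other_frames])
y = np.array([1] * n + [0] * n)  # 1 = cat, 0 = no cat

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Fit a simple classifier and measure how well it labels unseen frames.
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"Held-out accuracy: {clf.score(X_test, y_test):.2f}")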

Military at forefront of AI development

But he is worried about the current use of AI to develop military technology, such as autonomous weapons and semi-autonomous drones.

"This technology is very, very powerful, and we have to build safeguards into it," he said.

Mackworth suggests that regulation of artificial intelligence may require international treaties and codes of ethics for robot designers, similar to those engineers must abide by.

Canadian science fiction author Robert J. Sawyer envisions newly conscious, superintelligent machines cooperating with humans in his Wake, Watch and Wonder trilogy. (Jim Ross/Canadian Press)

Enforcement, however, may not be that easy. It requires technology to verify what a robot can and cannot do when compared to its specifications, something that is under development but doesn't yet exist.
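As a rough illustration of what such verification might check, consider this toy Python sketch: a hypothetical speed controller is spot-checked against a simple safety specification over a range of inputs. Real verification tools would need to reason about all possible behaviours, not just sampled ones, which is part of why the technology doesn't yet exist.

def controller(distance_to_obstacle: float) -> float:
    """Hypothetical controller: choose a speed given obstacle distance."""
    return min(5.0, distance_to_obstacle / 2.0)

def spec_holds(distance: float, speed: float) -> bool:
    """Specification: never move faster than half the remaining distance."""
    return speed <= distance / 2.0 + 1e-9

# Spot-check the controller against the spec over a range of inputs.
violations = [d for d in range(0, 100) if not spec_holds(d, controller(d))]
print("Spec violations:", violations or "none found in sampled inputs")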

Sawyer thinks that in order to keep humans safe from the potential threats posed by AI, the technology's development needs to be out in the open, in places like publicly funded universities, rather than inside military agencies.

"There should be nothing classified about this research," he said. "By the point when you sit down in front of your computer and your computer says, 'Good morning, I'm in charge now,' it's too late."

While that moment may be decades or even centuries away, Sandra Zilles, who holds a Canada Research Chair in Computational Learning Theory at the University of Regina, says machines are already able to learn some things much faster than humans, and can reprogram themselves to perform certain tasks more efficiently.
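The kind of self-improvement Zilles describes is narrow, and a toy Python sketch shows the flavour of it. The task and numbers here are invented: the program measures its own "running cost" at a given setting and nudges the setting in whichever direction performs better, a simple form of hill climbing, nothing close to general intelligence.

def cost(step_size: float) -> float:
    """Stand-in for timing a task at a given setting.
    The best setting (lowest cost) is at step_size = 2.0."""
    return (step_size - 2.0) ** 2 + 1.0

def self_tune(initial: float = 0.0, rounds: int = 50, delta: float = 0.1) -> float:
    """Repeatedly nudge the setting in whichever direction runs cheaper."""
    setting = initial
    for _ in range(rounds):
        if cost(setting + delta) < cost(setting):
            setting += delta
        elif cost(setting - delta) < cost(setting):
            setting -= delta
        else:
            break  # no neighbouring setting is better; stop
    return setting

tuned = self_tune()
print(f"Tuned setting: {tuned:.1f}, cost: {cost(tuned):.2f}")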

She notes that besides the military, big tech companies like Google and Apple are also at the forefront of AI research, and that too has implications.

"They can steer the development of technology in a direction that is most useful to them," she said, "but maybe not the most useful to mankind."

Collaborative machines?

Despite the dark future envisioned by science fiction, both Mackworth and Sawyer see brighter possibilities.

Mackworth says he's not really worried about machines turning on us, because humans typically design machines to be tools and extensions of our own minds and brains.

Tesla Motors Inc. CEO Elon Musk tweeted this summer that AI was 'potentially more dangerous than nukes.' (Lucy Nicholson/Reuters)

"We should make sure that these machines are built to collaborate with us and not be totally autonomous."

Sawyer envisions newly conscious, superintelligent machines cooperating with humans in his fictional Wake, Watch and Wonder trilogy. He argues that machines are developing in an environment that is very different from the scarcity and natural selection that led to the evolution of humans.

"All the things that made us basically nasty, rapacious, competitive as a species are not necessarily hard-coded into whatever passes for the DNA of artificial intelligence," Sawyer says. "There's every reason to think that they would be fundamentally different psychologically from us, and that psychology may very much predispose them to being altruistic rather than being competitive and violent the way we are."

That said, he's not ready to put all his money on his own vision.

"I don't want to say, 'Don't worry,' because one of us is right me or Stephen Hawking. Even I even I would probably bet on Hawking."