
Business Analysis

Scientists must act now to make artificial intelligence benign: Don Pittis

Some of the world's cleverest people, including entrepreneur Elon Musk and physicist Stephen Hawking, have warned about the dangers of artificial intelligence. As computers get successively smarter, Don Pittis seeks expert opinion on how to make sure they stay on our side.

Elon Musk and Stephen Hawking say beware, but scientists want to make AI 'good'

The economic advantages of artificial intelligence mean progress towards electronic 'superintelligence' looks increasingly inevitable. Don Pittis says scientists must start now to be sure AI remains benign. (The Associated Press)

Reining in the growing power of artificial intelligence could be a matter of human survival. That sounds like over-the-top science fiction, but a growing number of ordinary computer scientists agree that AI is now unstoppable.

This week, a study from the market intelligence group Tractica said artificial intelligence is already swarming into the world of business, and that spending on it will be worth more than $40 billion in the coming decade. That may be an underestimate.

Some of the world's cleverest people, including Tesla and SpaceX boss Elon Musk and physicist Stephen Hawking, have warned us that artificial intelligence could wipe humanity as we know it off the face of the Earth. The question is: what are we going to do about it?

Artificial intelligence may be science fiction. But it is science fiction of the 1950s. According to award-winning Canadian AI pioneer Jonathan Schaeffer, dean of science at the University of Alberta, most of us now use artificial intelligence every day.

Invisible intelligence

"Artificial intelligence is ubiquitous," says Schaeffer, whose Chinook computer program has been the world's reigning checkers champion since 1995. "It's very odd, because by and large people are using artificial intelligence daily and it's invisible to them."

He gives the example of credit card transactions, where the artificial intelligence system learns your habits and approves every normal transaction, but blocks the purchase of a car in China. Schaeffer, like even the most skeptical computer experts I contacted, says the incredible commercial potential of artificial intelligence is one of the main reasons it will be almost impossible to restrain.

He says the ultimate goal is what some in the AI community call "superintelligence."

"Everybody, certainly in the community I work in, has this vision of creating intelligent entities, beings that we can communicate with, who can help us do the kinds of things that would improve our quality of life."

That transition, from a useful tool to a thinking, autonomous superintelligence, is what has some researchers worried, including Cory Butz, president of the Canadian Artificial Intelligence Association.

Scare tactic

"I really sort of dismissed the whole scare tactic aspect of the story up until a few years ago," says Butz, associate dean of research at the University of Regina. "Now I can see it."

Arnold Schwarzenegger's Terminator movies offer what some computer scientists consider the most horrific example of how superintelligence could interact with humans. (Associated Press)

He says breakthroughs in something called "deep learning" by the University of Toronto's Geoff Hinton and the University of Montreal's Yoshua Bengio are what convinced him. Hinton and Bengio divide their time between their respective universities and Google, which is well known to be developing more commercial uses for artificial intelligence.

"These algorithms are very smart and they are only going to get better as people refine them," says Butz. And he says that means superintelligence is coming. "It's not like it's in the immediate future, say like in the next 10 years, but it definitely is coming down the road."

So how will that superintelligence interact with humans? Most of the computer scientists I spoke to mentioned examples from science fiction, with Arnold Schwarzenegger's Terminator movies representing the most horrific example. But as U of A's Schaeffer says, any technology from biotech to nuclear physics can be used for "dark" purposes.

Military research

So far, our governments have not unleashed a global biological warfare plague or a nuclear Armageddon upon the world.

And while we have treaties governing those two hazards, cynics, including some of the scientists I spoke to, say treaties will not stop governments from researching military artificial intelligence, even if they would claim it is "just in case the other guy gets it first."

Just as worrying, according to experts at the California-based Machine Intelligence Research Institute, is superintelligent AI that goes out of control. And whether the AI in question is military, commercial or created as pure science, that is what a group of researchers in the U.S. think we must urgently address.

Their paper, Aligning Superintelligence with Human Interests: A Technical Research Agenda, is one of a series of papers examining that very issue. As the title suggests, the paper's authors don't have all the answers. But they want to get the ball rolling. And while superintelligence may still be far off, they say we have to start now.

The MIRI researchers say superintelligent AIs may not hurt us intentionally. But without our moral values and shared history, their motives could be incomprehensible. For example, once given a problem to solve, they would have an incentive to "acquire resources being used by humanity."

Also, once launched, a self-guided artificial intelligence could head in unpredictable directions, once again leading to human harm. That is why one of the early recommendations is the simplest: a reliable off-switch.

Potential dangers

Nathalie Japkowicz is director of the Laboratory for Research on Machine Learning for Defence and Security at the University of Ottawa. Of the artificial intelligence experts I contacted, she was the most skeptical about the idea of some sort of independent and potentially malicious machine intelligence arising within the next 50 years.

However, she believes too little is being done within the computer science community to research the potential dangers of artificial intelligence. And she thinks computer scientists, being too focused on technical issues, may not be the best ones to be doing it.

Science fiction author Iain M. Banks created The Culture, arguably the most benign example of artificial intelligence. Researchers argue we must start now if we want AI to be 'good.' (Associated Press)

"The discussion should perhaps, instead, originate from philosophers of science or other social scientists who could then consult with AI experts and involve them actively in the discussion," wrote Japkowicz in an email.

Of all the science fiction portrayals of artificial intelligence, perhaps the most benign is in the Culture series of books by the Scottish author Iain M. Banks, who died in 2013. The Culture universe, in our distant future, is dominated by superintelligent spaceships called "Minds," benevolent and wise, that at birth often give themselves humorous names.

Obviously Musk is a fan, as he has named two of his SpaceX craft after the Culture superintelligences "Just Read The Instructions" and "Of Course I Still Love You," which appear in Banks's book The Player of Games.

But according to the researchers at MIRI, the creation of Banksian benign intelligences, whether decades or millennia into the future, may depend on steps we take now.

"By beginning our work early, we inevitably face the risk that it may turn out to be irrelevant; yet failing to make preparations at all poses substantially larger risks."

See sidebar article on AI lessons from science fiction

Follow Don on Twitter @don_pittis

More analysis by Don Pittis