
Groundbreaking AI researcher hopes for 'radically different' ideas from Toronto lab

Geoffrey Hinton wants to see the Vector Institute focus on new approaches to machine learning, rather than variations on what already exists.

One suggestion: Problem-solving software modelled on the sparse connections in the brain

Computer scientist Geoffrey Hinton is keen on developing new approaches to machine learning rather than merely building on existing techniques like those used by this popcorn-making robot. (Ingo Wagner/AFP/Getty Images)

One of the biggest names in machine learning research has high hopes for the Vector Institute, Toronto's new artificial intelligence research hub.

Geoffrey Hinton, considered one of the fathers of a popular branch of machine learning research called deep learning, is the institute's chief scientific adviser. He doesn't have any decision-making power (it's just a volunteer position), but he can suggest areas where the institute's researchers should focus their efforts. And he already has some ideas.

"We should keep looking for big ideas that will make the current technology rather different." - Geoffrey Hinton

"My main interest is in trying to find radically different kinds of neural nets," said Hinton in an interview with CBC News. He was referring to problem-solving software designed to simulate the connections between neurons in the human brain.

Put another way, Hinton is keen on seeing the Vector Institute develop big ideas that will help researchers use today's computing technology in new ways, rather than merely building on existing techniques.

"Everybody right now, they look at the current technology, and they think, 'OK, that's what artificial neural nets are,'" Hinton said. "And they don't realize how arbitrary it is. We just made it up! And there's no reason why we shouldn't make up something else."

The hope is that, by developing new types of neural networks that can be trained on ever-increasing amounts of data, the performance of everything from self-driving cars to automated cancer screening could be dramatically improved.

'Outrageously large neural networks'

Hinton, a former University of Toronto professor, has worked at Google as an engineering fellow since 2013. He runs the recently formed Toronto outpost of the company's machine learning division, Google Brain.

Hinton says that, while existing neural networks work very well, they haven't changed much since he started working on them in the 1980s. That's where he sees an opportunity for researchers to shake things up with the development of "radical variations."

"The question is, can we make neural networks that are 1,000 times bigger? And how can we do that with existing computation?" Hinton said.

He offered one idea from a recent paper on "outrageously large neural networks" written by researchers at Google Brain, though the paper is still under review.

Geoffrey Hinton's research has led to 'major advances' in artificial intelligence that can be applied to monitoring industrial plants for improved safety, creating better systems for voice recognition and reading bank cheques, a government news release said. (NSERC)

The cortex of a human brain boasts a vast network of sparsely connected neurons too complex to simulate with the computer hardware that exists today. So researchers have been working with much smaller and more densely connected artificial networks instead.

The problem is that these dense networks aren't as efficient at processing massive amounts of data as the sparse networks of the brain. Training a dense artificial neural network involves all of the network's neurons, whereas in the brain, only a small fraction of neurons (the ones most suited to a particular task) are in use at any time.
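To make that contrast concrete, here is a toy Python sketch. It is not from the article or the Google Brain paper; the layer width and the one-per-cent activity figure are arbitrary assumptions chosen purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

n = 1000                                   # illustrative layer width
W = rng.standard_normal((n, n))
x = rng.standard_normal(n)

# Dense network: every one of the n*n connections participates
# in processing every single input.
dense_out = W @ x                          # ~n*n multiply-adds

# Brain-like sparsity (toy version): suppose only 1% of units are
# relevant to this input; compute just those outputs.
active = rng.choice(n, size=n // 100, replace=False)
sparse_out = W[active] @ x                 # ~n*n/100 multiply-adds

print(dense_out.shape, sparse_out.shape)   # (1000,) (10,)
```

The sparse pass does roughly one-hundredth of the arithmetic, which is the efficiency the brain's wiring suggests and dense artificial networks give up.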

And so Hinton and the Google Brain team have been working to simulate these sparse networks with a network of artificial neural networks that don't all have to be active at once.
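A minimal sketch of that idea, in the spirit of a sparsely gated mixture-of-experts layer: a small gating network scores many "expert" sub-networks and only the top few run for any given input. This is a simplified illustration with made-up sizes and a plain top-k gate, not the team's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes, for illustration only.
d_in, d_out = 8, 4
n_experts, top_k = 16, 2

# Each "expert" is a small dense layer; only a few run per input.
experts = [rng.standard_normal((d_in, d_out)) * 0.1 for _ in range(n_experts)]
gate_w = rng.standard_normal((d_in, n_experts)) * 0.1

def moe_forward(x):
    """Route x through only the top_k highest-scoring experts."""
    scores = x @ gate_w                    # gating network scores every expert
    top = np.argsort(scores)[-top_k:]      # indices of the chosen experts
    weights = np.exp(scores[top])
    weights /= weights.sum()               # softmax over the selected experts
    # Only the selected experts do any computation; the rest stay idle.
    return sum(w * (x @ experts[i]) for w, i in zip(weights, top))

y = moe_forward(rng.standard_normal(d_in))
print(y.shape)  # (4,)
```

Because only two of the sixteen experts run for any given input, the layer's total parameter count can grow far faster than its per-input cost, which is the scaling trick behind making networks "1,000 times bigger" on existing hardware.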

Curiosity-driven research

In early research, the team has seen considerable improvements in language modelling and machine translation using the technique.

Hinton offers it as an example of the sort of curiosity-driven basic research he'd like to see the Vector Institute do: the sort of blue-sky work that isn't always practical within many technology companies' application-driven research labs.

"My view is we should be doing everything we can to come up with ways of exploiting the current technology effectively," Hinton said. "So there's lots of little ideas that you use to make things work better and exploit the current technology.

"But we should keep looking for big ideas that will make the current technology rather different."