Google reveals what machines 'dream' about in trippy photo series - Action News


Google's artificial neural networks dream about many things, most of them a lot creepier than electric sheep.

Artificial neural networks produce creepy, amalgamated images when they're given nothing else to do

Researchers asked Google's artificial neural networks to interpret both simple images and random noise as specific things. Then, they let the machine 'dream' up anything based on its prior knowledge. Images like this one were the result. (Michael Tyka/Google Photos)

Do androids dream of electric sheep?

A report released last week by Google's research team provides new insight into the title question of Philip K. Dick's iconic 1968 novel, minus all of the post-apocalyptic ethical quandaries and such.

Titled "Inceptionism: Going Deeper into Neural Networks," the buzzworthy new Google Research post explains that while artificial intelligence has come a long way in terms of image recognition, surprisingly little is known about why some mathematical methods and models work better than others.

To help illustrate this point, software engineers Alexander Mordvintsev, Mike Tyka and Christopher Olah lifted the curtain on how Google builds its artificial neural networks.

They also shared examples of images produced by the networks in response to specific commands and, more interestingly, images produced without any specific instructions.

This is one of the many 'neural net dreams' in Google's Inceptionism gallery, generated purely from random noise using a network trained on places by MIT's Computer Science and AI Laboratory. (Michael Tyka/Google Photos)

"We train networks by simply showing them many examples of what we want them to learn," reads the post. "[We hope] they extract the essence of the matter at hand (e.g., a fork needs a handle and 2-4 tines), and learn to ignore what doesn't matter (a fork can be any shape, size, color or orientation)."

To test the machine's learning, researchers decided to "turn the network upside down" by asking it not to find or choose images of something specific, as it's been trained to do, but to create images of things like bananas, ants and parachutes out of static, based on its own existing knowledge.

They found that their machines had "quite a bit of the information needed to generate images too."
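The "upside down" trick the engineers describe is usually known as activation maximization: instead of adjusting the network's weights, you freeze them and nudge the input image itself, by gradient ascent, until the network's score for a chosen class grows. As a rough illustration only, here is a toy numpy sketch; the "network" is a made-up single linear layer standing in for Google's deep convolutional model, and all names and values are invented for this example:

```python
import numpy as np

# Toy sketch of activation maximization ("turning the network upside down").
# A frozen, pretend-trained linear classifier stands in for a real deep net.
rng = np.random.default_rng(0)

n_pixels, n_classes = 16, 3
W = rng.normal(size=(n_classes, n_pixels))  # frozen "trained" weights
target = 1                                  # the class we ask the net to "see"

x = rng.normal(scale=0.01, size=n_pixels)   # start from faint random noise


def score(img):
    """Class logits for an input image (here just a flat pixel vector)."""
    return W @ img


before = score(x)[target]
for _ in range(100):
    # For a linear layer, d(logit_target)/dx is simply that class's weight row;
    # a real network would compute this gradient by backpropagation.
    grad = W[target]
    x += 0.1 * grad                         # gradient ascent on the INPUT

after = score(x)[target]
print(after > before)  # prints True: the target activation has grown
```

In the real Inceptionism work the same loop runs over a deep convolutional network with extra smoothness priors on the image, which is what turns the raw noise into the recognizable, dream-like pictures shown in the gallery.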

Google software engineers asked an artificial neural network to "enhance" an input image of random noise into several different objects. This was what it returned. (Google Research Blog)

While the engineers admit that in some cases, their method revealed "the neural net isn't quite looking for the thing we thought it was," the resulting images are incredibly cool nonetheless.

What really caught the web's attention, however, were the photos produced by the networks (which are based on the biological structure of the human brain) without any sort of guidance.

Google's research team refers to the following images, generated purely from random noise, as neural net dreams:

As it happens, you're more likely to find psychedelic dog-birds in a pinball machine than electric sheep.

Many online have pointed to the high number of weird animals found in these images. The researchers explained that this comes down to the particular network they used, which was trained by MIT's Computer Science and AI Laboratory.

"This network was trained mostly on images of animals, so naturally it tends to interpret shapes as animals," the report reads. "But because the data is stored at such a high abstraction, the results are an interesting remix of these learned features."

On the practical implications of this type of work, the research team wrote that, among other things, the technique could help them "understand and visualize how neural networks are able to carry out difficult classification tasks, improve network architecture, and check what the network has learned during training."

"The results are intriguing: even a relatively simple neural network can be used to over-interpret an image, just like as children we enjoyed watching clouds and interpreting the random shapes," the research team wrote. "It also makes us wonder whether neural networks could become a tool for artists, a new way to remix visual concepts, or perhaps even shed a little light on the roots of the creative process in general."