
Science · Analysis

The writing of this AI is so human that its creators are scared to release it

A new text generator driven by artificial intelligence writes prose that can fool humans into believing that it is authentic. And that has dangerous repercussions when it comes to the mass production of disinformation.

OpenAI's new system, called GPT-2, is described as 'chameleon-like,' matching both subject and style

OpenAI's new system, called GPT-2, is billed as the next generation of predictive text tool. Feed it sample content, be it a few words or a few pages, and the AI will believably write what comes next. (maxuser/Shutterstock)

A new text generator driven by artificial intelligence is apparently so good that its creators have decided not to make it publicly available.

The tool was created by OpenAI, a non-profit research firm whose backers include Elon Musk, Peter Thiel and Reid Hoffman, and which was founded with the mission of "discovering and enacting the path to safe artificial general intelligence."

But now OpenAI is concerned that something these well-intentioned researchers built could easily be misused, fearing that it would be dangerous in the wrong hands.

Trained on eight million web pages, OpenAI's new system, called GPT-2, is billed as the next generation of predictive text. The AI is said to write authentic-sounding prose that could fool humans, which has dangerous repercussions when it comes to the mass production of disinformation.

Feed it sample content, be it a few words or a few pages, and the AI will write what comes next: a coherent, plausible passage that matches both the subject and the style of the source material.

"The model is chameleon-like; it adapts to the style and content of the conditioning text. This allows the user to generate realistic and coherent continuations about a topic of their choosing," the researchers wrote in explaining why they weren't releasing the tool.

So while the quality of the output is impressive (it largely lacks the bugs and mistakes that have been routine with previous efforts at predictive text), the real novelty of the GPT-2 system is the wide range of content it is capable of creating and, in turn, its variety of potential uses.
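GPT-2 itself is a massive neural network, but the basic idea behind predictive text, guessing the next word from the words that came before it, can be sketched in a few lines. The toy example below is not OpenAI's code; it is a minimal, hypothetical bigram model that learns which word tends to follow which in a training text, then extends a prompt one word at a time:

```python
import random
from collections import defaultdict

def train_bigram_model(text):
    """Record, for each word, the words observed to follow it."""
    words = text.split()
    successors = defaultdict(list)
    for current, nxt in zip(words, words[1:]):
        successors[current].append(nxt)
    return successors

def continue_text(model, prompt, n_words=5, seed=0):
    """Extend the prompt by repeatedly sampling a plausible next word."""
    rng = random.Random(seed)
    out = prompt.split()
    for _ in range(n_words):
        choices = model.get(out[-1])
        if not choices:  # no known continuation for the last word
            break
        out.append(rng.choice(choices))
    return " ".join(out)

corpus = "the clocks were striking thirteen and the clocks were loud"
model = train_bigram_model(corpus)
print(continue_text(model, "the clocks", n_words=3))
```

A system like GPT-2 replaces this word-pair lookup with a deep network conditioned on far longer stretches of context, which is what lets it match subject and style rather than just echoing adjacent words.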

From fiction to news

According to the researchers, the text generator is able to simulate the style of anything from classical works of fiction to news stories, depending on what it is fed.

In one example, the system was prompted with the opening line of George Orwell's Nineteen Eighty-Four: "It was a bright cold day in April, and the clocks were striking thirteen."

Following suit, the AI wrote: "I was in my car on my way to a new job in Seattle. I put the gas in, put the key in, and then I let it run. I just imagined what the day would be like. A hundred years from now."

In another example, researchers fed the system what sounded like a plausible news headline, and the AI generated content to match its tone and style.

Clearly trying to avoid any political fire with their sample news story, the researchers inputted the following prompt: "In a shocking finding, scientist discovered a herd of unicorns living in a remote, previously unexplored valley, in the Andes Mountains. Even more surprising to the researchers was the fact that the unicorns spoke perfect English."

The system then generated an article that went on to say: "The scientist named the population, after their distinctive horn, Ovid's Unicorn. These four-horned, silver-white unicorns were previously unknown to science. ... While their origins are still unclear, some believe that perhaps the creatures were created when a human and a unicorn met each other in a time before human civilization."

Two things are evident: This AI is very good at matching tone, style and content. And while the content it generates sounds quite believable, very little of what it says is actually true.

"It predicts word combinations within contexts of use, which makes it seem more credible. Of course, the samples also produce nonsensical passages," explained Isabel Pedersen, director of the Decimal Lab at the University of Ontario Institute of Technology.

Mass production of disinformation

And therein lies the crux of the moral dilemma.

This boundary-pushing piece of software is essentially a tool for the mass production of disinformation: content that looks and sounds believable, with all the trappings of a legitimate news source, but with no real validity.

That means an article written by the AI might look and sound like something that would come from CBC News, the Guardian or the New York Times, even be brimming with divisive political content, and yet be completely fabricated, down to made-up quotes.

It's that blurring of lines that is concerning: Some elements of the content will be rooted in reality (names of politicians, or events, for example), depending on what has been fed into the system. And yet a quote from that named person might be entirely computer-generated and baseless.

According to the Trust Barometer, 71 per cent of Canadians are concerned about what they call the weaponization of so-called 'fake news.' (Georgejmclittle/Shutterstock)

And as we have recently seen, people can be easily duped by fake news.

"Last year revealed significant examples of election-hacking and malicious campaigns to incite chaos while people are simply trying to participate in democratic exchange, the lifeblood of civil society," said Pedersen.

And because of that potential for misuse, the researchers at OpenAI say they've declined to release GPT-2 to the public.

"AI that can manufacture seemingly authentic fake news, effectively mimicking tone and style, in mass quantities is very concerning, and I can see why there is a reluctance to deploy it," said David Ryan, an executive vice-president with Edelman Canada, the company behind the Trust Barometer, an annual report that gauges public trust in different institutions and media.

"If this tool is misused, the mass proliferation of false information risks drowning out legitimate news and makes the struggle for truth all that more difficult," he said.

And with a federal election on the horizon, the reach of disinformation is on Canadians' minds.

According to the latest Trust Barometer, 71 per cent of Canadians are concerned about the weaponization of so-called "fake news."

Another recent study shows that the majority of Canadians think Facebook will negatively impact the election, largely due to its track record of contributing to the spread of targeted and often fabricated headlines.

While AI may exacerbate the spread of disinformation, Ryan says the solution isn't necessarily more technology.

"Ultimately, if we are going to limit the impact of fake news, people will need to change their media consumption habits," he said.

Instead, Ryan believes the onus is on individuals to step out of their personal echo chambers and subscribe to newsfeeds that span the political and ideological spectrum.

"We are too often spoon-fed news that confirms our personal bias; it's human nature. But it's this type of behavior that lets fake news take hold and have an impact."