
As AI becomes more human-like, experts warn users must think more critically about its responses

Companies like OpenAI and Google are trying to dominate the quickly emerging market for AI systems where people can ask questions of a computer and get answers in the style of a human. But experts warn this could mean users must be more careful to verify the accuracy of AI responses.

Google, OpenAI announce new artificial intelligence systems in what's been called an 'arms race'

Google is promising that its search results will be informed by artificial intelligence in the U.S., with expansion to other countries to come. (Michel Euler/The Associated Press)

Tech giant Google has announced upgrades to its artificial intelligence technologies, just a day after rival OpenAI announced similar changes to its offerings, with both companies trying to dominate the quickly emerging market where human beings can ask questions of computer systems and get answers in the style of a human response.

It's part of a push to make AI systems such as ChatGPT not just faster, but more comprehensive in their responses right away, without users having to ask multiple follow-up questions.

On Tuesday, Google demonstrated how AI responses would be merged with some results from its influential search engine. As part of its annual developers conference, Google promised it would start to use AI to provide summaries in response to questions and searches, with at least some of them labelled as AI-generated at the top of the page.

Google's AI-generated summaries are only available in the U.S. for now, but they will be written using conversational language.

OpenAI also recently announced updates to its flagship products that will allow conversational interactions between AI and human users. (Dado Ruvic/Reuters)

Meanwhile, OpenAI's newly announced GPT-4o system will be capable of conversational responses in a more human-like voice.

It gained attention on Monday for being able to interact with users in natural conversation with very little delay, at least in demonstration mode. OpenAI researchers showed off ChatGPT's new voice assistant, including using its new vision and voice capabilities to talk a researcher through solving a math equation on a sheet of paper.

At one point, an OpenAI researcher told the chatbot he was in a great mood because he was demonstrating "how useful and amazing you are."

ChatGPT responded: "Oh stop it! You're making me blush!"

"It feels like AI from the movies,"OpenAI CEO Sam Altman wrote in a blog post."Talking to a computer has never felt really natural for me; now it does."

WATCH | OpenAI's GPT-4o speaks in a natural human tone:

OpenAI demonstrates new model's capability for realistic conversation

From giving advice and analyzing graphs to guiding someone through a math equation and even cracking a joke, the new model of ChatGPT, called GPT-4o, is touted as having real-time responses in a natural human tone.

AI responses aren't always right

But researchers in the technology and artificial intelligence sector warn that as people get information from AI systems in more user-friendly ways, they also have to be careful to watch for inaccurate or misleading responses to their queries.

And because companies want to protect the trade secrets behind how their systems work, AI tools often don't disclose how they arrived at a conclusion, and they tend not to show as many raw results or as much source data as traditional search engines.

This means, according to Richard Lachman, they can be more prone to providing answers that look or sound confident, even if they're incorrect.

Richard Lachman, who teaches at the RTA School of Media at Toronto Metropolitan University, says AI chatbots are now able to manipulate users into feeling 'more comfortable than you should be with the quality of the responses.' (Adam Carter/CBC)

The associate professor of Digital Media at Toronto Metropolitan University's RTA School of Media says these changes are a response to what consumers demand when using a search engine: a quick, definitive answer when they need a piece of information.

"We're not necessarily looking for 10 websites; we want an answer to a question. And this can do that," said Lachman,

However, he points out that when AI gives an answer to a question, it can be wrong.

Unlike more traditional search results, where multiple links and sources are displayed in a long list, it's very difficult to parse the source of an answer given by an AI system such as ChatGPT.

Lachman's view is that people may find it easier to trust a response from an AI chatbot if it's convincingly role-playing as a human, making jokes or simulating emotions that produce a sense of comfort.

"That makes you, maybe, more comfortable than you should be with the quality of responses that you're getting," he said.

Duncan Mundell, who works with AI software company AltaML in Calgary, says there's enthusiasm for new AI technologies coming out. (Paula Duhatschek/CBC)

Business sees momentum in AI

Here in Canada, at least one business working in artificial intelligence is excited by the more human-like interfaces for AI systems that companies like Google and OpenAI are pushing.

"Make no mistake, we are in a competitive arms race here with respect to generative AI and there is a huge amount of capital and innovation," saidDuncan Mundell,with Alberta-based AltaML.

"It just opens the door for additional capabilities that we can leverage," he said about artificial intelligence in a general sense, mentioningproducts his company creates with AI,such assoftware that can predict the movement of wildfires.

He pointed out that while the technological upgrades are not revolutionary in his opinion, they move artificial intelligence in a direction he welcomes.

"What OpenAI has done with this release is bringing us one step closer to human cognition, right?" said Mundell.

Researcher calls sentient AI 'nonsense'

The upgrades to AI systems from Google or OpenAI might remind science-fiction fans of the highly conversational computer on Star Trek: The Next Generation, but one researcher at Western University says he considers the new upgrades to be decorative, rather than truly changing how information is processed.

"A lot of the notable features of these new releases are, I guess you could say, bells and whistles," said Luke Stark, assistant professor at Western University'sFaculty of Information & Media Studies.

Luke Stark, an assistant professor at the Faculty of Information and Media Studies at Western University, says he considers the new AI upgrades to be mostly decorative and says they don't truly change how information is processed. (Submitted by Luke Stark)

"In terms of the capacities of these systems to actually go beyond what they've been able to do so far... this isn't that big of a leap," said Stark, who called the idea that a sentientartificial intelligencecould exist with today's technology"kind of nonsense."

The companies pushing artificial intelligence innovations make it hard to get clarity on "what these systems are good and not so good at," he said.

That's a position echoed by Lachman, who says that lack of clarity will require users to be savvy about what they read online in a new way.

"Right now, when you and I speak, I'm used to thinking anything that sounds like a person is a person," he said, pointing out that human users mayassume anything that seems likeanother human will have the same basic understanding of how the world works.

But even if a computer appears to look and sound like a human, it won't have that knowledge, he says.

"It does not have that sense of common understanding of the basic rules of society. But it sounds like it does."

With files from The Associated Press, the CBC's Paula Duhatschek and Shawn Benjamin