Ideas

Artificial intelligence needs to work with humans, not replace us: tech experts

There's a lot of hope, hype and fear around artificial intelligence: that it'll solve the climate crisis, or turn us all into paper clips. IDEAS host Nahlah Ayed speaks to two tech experts about the promise and perils of AI, as part of the Provocation Ideas Festival.

Don't buy all the hype or hope of AI, says tech writer Cory Doctorow

ChatGPT, a language model-based chatbot developed by OpenAI, was launched in November 2022. Many people have been using the free AI system to automate tasks, but critics fear the powerful technology could be destructive without regulation. (Sebastien Bozon/AFP via Getty Images)


That people now use ChatGPT to write everything from resumés to thank-you notes to wedding vows is a measure of how much artificial intelligence has become part of everyday life for millions of people.

Advocates of AI see the technology as a potential answer to humanity's biggest problems. But skeptics warn AI could do lasting damage to our society, undermining education, eliminating jobs, and perhaps ending civilization itself.

Cory Doctorow, technology writer and author of The Internet Con: How to Seize the Means of Computation, doesn't buy all the hype, or hope, associated with AI.

"The inevitabalism of technology going from 1 to 60, and then staying sticking around forever. It's just not true we discard technologies all the time, including technologies that people like, right?"

Doctorow joined Vass Bednar, executive director of the Master of Public Policy in Digital Society Program at McMaster University, for a panel discussion about the promise and the perils of AI. The panel was moderated by IDEAS host Nahlah Ayed at the Provocation Ideas Festival in Stratford, Ontario, in November 2023.

Here is an excerpt from their conversation.

So what does it mean when we say that we're building machines that are smarter than we are? What does it mean for a machine to be smarter than a human being?

CD: I think it means someone's lying, because given that we don't have a working definition of what "smart" means, describing the computer as more spiritual than you, or smarter than you, or any other adjective that we don't have a good empirical definition for, is an intrinsically unfalsifiable statement.

Saying that we have a machine that can automate things, that can spot things that humans can't spot, that can work with a human as a kind of partner to catch things that humans miss, that's fine. But remember, when they say, 'Oh, we've got an algorithm that catches some of the mistakes that radiologists make when they look at your lung X-rays,' what they don't mean is that the radiologist is going to spend as much time as they ever did looking at lung X-rays, and they're going to get a second opinion from the algorithm, which has different blind spots from the human, and they'll compare notes. And if it turns out that they don't agree, there will be one fewer X-ray looked at that day, because the oncologist or the radiologist has to go back and look at the X-ray again just to resolve that disagreement.

Nobody is investing in AI in the hopes that radiologists will spend more money looking at X-rays than they do today. They're investing in AI in the hopes that they will fire half the radiologists and double the rate at which they look at X-rays. And I think that when you add up all of the things that AI wants to automate that are both consequential and error-sensitive, such that you might improve the outcome of the system by twinning or pairing a human with AI, the two of them working together with no cost savings but an improvement in quality and reliability, and you take those out, because there's just no market for that stuff, what you're left with is a very small number of applications for AI.
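
To make concrete the pairing Doctorow describes, here is a minimal Python sketch, entirely hypothetical (the function names and labels are invented for illustration, not drawn from any real diagnostic system), in which the algorithm serves only as a second opinion and any disagreement sends the X-ray back to the human:

from typing import Callable

def review_xray(xray: str,
                radiologist_read: Callable[[str], str],
                model_read: Callable[[str], str]) -> str:
    """Hypothetical second-opinion workflow: the model never overrides
    the human; a disagreement sends the scan back for re-review."""
    human = radiologist_read(xray)   # first human reading
    machine = model_read(xray)       # algorithmic second opinion
    if human == machine:
        return human                 # the two readings agree
    # The model has different blind spots from the human, so a
    # disagreement triggers another human look, costing time
    # rather than saving it.
    return radiologist_read(xray)

# Example: a model that flags everything forces a re-read of a clear scan.
if __name__ == "__main__":
    print(review_xray("scan-001",
                      radiologist_read=lambda s: "clear",
                      model_read=lambda s: "nodule"))

Note that in this sketch the algorithm can only add work, never remove it, which is exactly the deployment Doctorow argues nobody is investing in.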

I have to admit a great amount of ignorance as to what is or isn't possible with artificial intelligence. I listened to a show recently where they talked about GPT-4, which some researchers suggest can actually reason: it can internalize an instruction to draw a unicorn when it's never seen a unicorn, and it's able to pass the Turing test. Or the LSAT. How do I situate that in what you just said, Cory?

CD: Well, okay, so if it turns out that the standard test that we give to lawyers is something that a chatbot can answer, maybe we're not assessing our lawyers very well, right? In fact, I would fully support the idea that any assessment that we use to measure the merit of a human that a chatbot can outperform should be scrapped as a measure, and we should go to better qualitative measures that may be harder to assess in bulk, but which would produce a better picture of what people's aptitudes and interests really are.

Tech experts Cory Doctorow (L) and Vass Bednar (R) discussed the potential dangers that AI poses and the need for regulation. Their discussion was part of the Provocation Ideas Festival. (Dominik Butzmann/Georgia Kirkos)

I speak as a Canadian expatriate who lives in the United States and whose daughter is going through the Common Core curriculum, where about a third of her instruction hours are spent teaching her how to take standardized tests. So I think we can agree that standardized assessment tools, which began as a way to evaluate how education is conducted and where it could stand to improve, have become targets themselves. And there's a law, Goodhart's law: when a metric becomes a target, it ceases to be useful as either one.

I think it's really cool that with statistical inference you can figure out what a unicorn should look like even if you've never heard of a unicorn. That's great. But I don't know that it tells you there's something intelligent about it. I think it tells you something novel, cool, and philosophically interesting about where the limits of statistical inference lie. I just don't think it justifies a mass retooling of society around inference engines that we know are prone to all sorts of gaffes, and where every fear of a gaffe is hand-waved away with this idea of humans in the loop.

So Vass, help me understand then why pioneers in AI like Geoffrey Hinton or Yoshua Bengio, as well as leading companies and researchers, have implored governments to regulate. What do their concerns, you know, boil down to when it comes to AI?

VB: I think we have this idea with artificial intelligence that if we regulate its production in a way that makes us all comfortable, where we feel that it's ethical or moral or that it's being properly built, then we can worry a little bit less about the application side, right? Because we're kind of trying to have two conversations at once: how do we build it, and then how can we use it?

Yoshua Bengio, scientific director at the Mila Quebec AI Institute, is best known for his work in deep learning and artificial neural networks, and more recently for his call to action to regulate AI research. (Andrej Ivanov/AFP/Getty Images)

In terms of why AI pioneers are worried and raising these flags, perhaps it's because it's their job to be ambitious and to dream about how this could be used, or should be used, and will be widely adopted. You mentioned large language models and us playing around with ChatGPT. I mean, is this not just mass user testing? We're letting people play around with and learn from these models, in what ends up being a race among the largest companies to have the dominant model, companies stuck in a cycle of talent circulation, poaching people back and forth, to build what is, again, a search for efficiency. This idea that if we do things faster, we're going to be able to do them better; if we just know a little bit more, we can make a slightly better prediction. And I think Cory's point about complementing human work and human thinking is imperative.

CD: There have been lots of critics before this current AI bubble who worried about automation bias and algorithmic bias. And when they criticize AI, when they do what is often called AI ethics, what they're saying is AI is not very powerful, right? AI can make a bunch of bad decisions quickly, so quickly that maybe we can't assess them. But that is not a mark of the quality and power of AI.

Meanwhile, AI boosters talk about AI safety, as distinct from AI ethics, which boils down to: someday the chatbot is going to wake up and turn us all into paperclips. The subtext of what they're saying is that AI is so powerful that it needs to be regulated, and if it's that powerful, it's probably very valuable as well, right? A tool that powerful will someday transform our whole economy in every single way. Your firm should be figuring out how to integrate AI into its processes. Governments should be finding lots of ways to encourage AI investment, giving tax breaks, creating a regulatory framework for it.

And of course, let's not forget, as important as regulation is, that when a monopolist or would-be monopolist seeks regulation, the regulation they're seeking is often something that would prevent new entry. So the monopolist's first preference is usually not to be regulated at all, but their close second preference is 'regulate me in a way that only I, and not my competitors, can satisfy.'

Is there any government in the world that either of you know of that is actually leading on regulation, specifically where AI is concerned?

VB: Canada has tried to be fast in terms of defining when and how [we are] going to use artificial intelligence as a government. And I think that kind of rapid prototyping is important: to have these statements and disclosures about how we are going to use it, and who we are going to contract with on some of these issues. That's been very transparent. And it's almost like we're in an ideas competition globally on what the frameworks should be. I use frameworks, plural, because this stuff doesn't have a geography; we're finding that national and sub-national organizing principles don't work very well here.


Listen to the full conversation by downloading the CBC IDEAS podcast from your favourite app.


*Q&A edited for clarity and length.This episode was produced by Chris Wodskou.
