
Elon Musk has called for a 6-month pause on AI. This professor says it's not long enough

Wendy Wong, a political science professor at the University of British Columbia Okanagan, says the six-month pause on AI development that Elon Musk and more than 2,000 signatories are calling for isn't long enough to set regulations on the technology.

'What's really important is acknowledging ... how AI is changing the human experience': Wendy Wong

Political science professor Wendy Wong says moving forward to govern emerging technologies, like artificial intelligence, requires thinking about the values embedded in human rights. (Submitted by Wendy Wong)

Since its release last year, OpenAI's ChatGPT (Generative Pre-trained Transformer) program has prompted rivals to accelerate the development of similar large language models, and companies to integrate generative artificial intelligence (AI) models into their products.

Earlier this month, the Microsoft-backed company unveiled GPT-4, the fourth iteration of the model, which has wowed users with its vast range of applications, from engaging users in human-like conversation to composing songs and summarizing lengthy documents.

In light of this latest development, Elon Musk and a group of AI experts and industry executives are calling for a six-month pause in developing systems more powerful than GPT-4, in an open letter citing potential risks to society and humanity.

The signatories, more than 2,200 so far, called for a pause on advanced AI development until shared safety protocols for such designs were developed, implemented and audited by independent experts. They also called on developers to work with policymakers on governance and regulatory authorities.

But Wendy Wong, a political scientist at the University of British Columbia Okanagan campus who specializes in AI and human rights, argues that a six-month pause isn't long enough to make those changes happen.

Wong spoke to host Chris Walker on CBC's Daybreak South about why she thinks we need more time to bring AI under control.

The following transcript has been edited for clarity and length.


The six-month pause, you argue, is not enough. Could you explain why?

When I saw that there had been this letter, my first reaction was positive. I felt it's good that these folks making AI recognize that AI is causing harm, and that maybe we should take a break before we break our social and political institutions even further.

But then I looked at the list of signatories, and I looked at what they want us to do in these six months. One of the things they want us to do in six months is develop regulatory authorities to govern AI. If we could do something like that in six months, I don't think we'd be here.

I'm also thinking, if they are giving us such short timelines to develop auditing and certification, or to create well-resourced institutions to cope with economic and political disruption, what can we realistically do in response?

Elon Musk and more than 2,000 signatories called on artificial intelligence developers to work with policymakers, and governance and regulatory authorities during a six-month moratorium on AI development. (Susan Walsh/The Associated Press)

Your specialty is AI and human rights. Can you explain how you'd like human rights to fit into this conversation about what is essentially a robot?

What's really important is acknowledging and recognizing explicitly how AI is changing the human experience in fundamental ways.

We've done that a little bit here and there, but we can't really move forward on thinking about how to govern emerging technologies like AI without thinking about the values embedded in human rights.

Why I think that's important is linked to the idea that AI is really affecting the human experience, so human rights are a really appropriate frame. Some of the core values of human rights, such as autonomy, dignity, equality and community, are all things being disrupted by AI.

What do we need to build all these architectures of human rights?

If you look at the statement they released, it's almost as though robots will take over and eclipse humanity, and I just don't believe that the technology has that potential.

But between that point of eclipsing our civilization and this point, there are a lot of things that can be done. This is where governance really matters, and this is where actually all of us really matter.

To date, we're often treated as data subjects, but we should be data stakeholders. We can actually put a claim on how these technologies develop and affect us.

The ChatGPT interface as shown on a handheld device. Wendy Wong says the moratorium on AI development provides an opportunity to rethink the relationship between data and their human users. (Richard Drew/The Associated Press)

What are the dangers if this is not taken into account and AI develops unfettered?

One of the things that has come out of this statement is the recognition that companies are not just innovators and creators of technology; they are in fact doing some governing. Through the merit of their choices, they are choosing how to create AI.

These are choices they made for corporate reasons, perhaps for competitive reasons, but they have huge effects on how we live our daily lives.

If you could suggest ways for citizens to become more engaged with this, what would they be?

The onus is not on all of us. The onus should actually be on governments, regulatory frameworks and the corporations developing these technologies to help us all jump into this new age of AI.

There's a need for digital literacy, and this is part of what I would think of as a fundamental set of skills that everyone needs. It should be part of education for everyone now, in the age of data and AI. It's really about trying to understand that when you get an output from an AI, you have to understand where that came from and why it's not magic.

AI can do really amazing things with sorting and producing data, but we need to understand that it does these amazing things by sucking in massive amounts of data that we as human beings have created.

If we understand these are artifacts of who we are and not some sort of robotic reality, that is one of the first steps, and I think that is a literacy issue.

With files from Daybreak South and Thomson Reuters