Artificial intelligence poses 'risk of extinction,' tech execs and experts warn

More than 350 industry leaders sign letter equating potential AI risks with pandemics, nuclear war

OpenAI CEO Sam Altman, seen speaking at a U.S. Senate subcommittee meeting in Washington, D.C., on May 16, was among more than 350 artificial intelligence industry leaders and researchers who raised concerns about AI technology in an open letter published Tuesday. (Patrick Semansky/The Associated Press)

More than 350 top executives and researchers in artificial intelligence have signed a statement urging policymakers to see the serious risks posed by unregulated AI, warning the future of humanity may be at stake.

"Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war," the signatories, includingOpenAI CEO Sam Altman, saidin a 23-word letter published Tuesday by the nonprofit Center for AI Safety (CAIS).

Competition in the industry has led to a sort of "AI arms race," CAIS executive director Dan Hendrycks told CBC News in an interview.

"That could escalate and, like the nuclear arms race, potentially bring us to the brink of catastrophe," he said, suggesting humanity "could go the way of the Neanderthals."

Recent developments in AI have created tools that supporters say can be used in applications from medical diagnostics to writing legal briefs, but the technology has also sparked fears of privacy violations, powerful misinformation campaigns and "smart machines" thinking for themselves.

"There are many ways that [AI] could go wrong," said Hendrycks. He believes there isaneed to examine which AI tools may be used for generic purposes and whichcould be used with malicious intent.

He also raised the concern of artificial intelligence developing autonomously.

"It would be difficult to tell if an AI had a goal different from our own because it could potentially conceal it. This is not completely out of the question," he said.

WATCH | OpenAI CEO Altman presses U.S. lawmakers on regulation:

ChatGPT boss urges U.S. to set rules for artificial intelligence

OpenAI CEO Sam Altman urged lawmakers in Washington to regulate the burgeoning field of artificial intelligence before it's too late. Altman, whose company created the free chatbot tool ChatGPT, is the latest AI pioneer to warn about the potential dangers of a technology some fear could soon surpass human intelligence.

'Godfathers of AI' among critics

Hendrycks and the signatories to the CAIS statement are calling for international co-operation to treat AI as a "global priority" in order to address its risks.

And you don't have to be an expert or even have an interest in artificial intelligence to be affected by it going forward, said technology analyst and journalist Carmi Levy.

"Just like climate change, even if you're not a meteorologist, it's going to touch your life," Levysaid, citing the relationships between governments and citizens, financial markets and organizational development. "AI is going to touch all of our lives."

WATCH | No one will escape the effects of AI, says Levy:

AI will affect your life even if you don't use it, expert warns

Just because you don't use an app like ChatGPT doesn't mean your life won't be impacted by AI, says technology analyst Carmi Levy.

The letter coincided with the U.S.-EU Trade and Technology Council meeting in Sweden where politicians are expected to talk about regulating AI.

As well as Altman, signatories included the CEOs of AI firms DeepMind and Anthropic, and executives from Microsoft and Google.

Also among them were British-Canadian computer scientist Geoffrey Hinton and Université de Montréal computer science professor Yoshua Bengio, two of the three so-called "godfathers of AI" who received the 2018 Turing Award for their work on deep learning. Professors from institutions ranging from Harvard to China's Tsinghua University also signed on.

WATCH | Canadian-British AI pioneer Geoffrey Hinton on potential risks:

He helped create AI. Now he's worried it will destroy humanity

Canadian-British artificial intelligence pioneer Geoffrey Hinton says he left Google because of recent discoveries about AI that made him realize it poses a threat to humanity. CBC chief correspondent Adrienne Arsenault talks to the 'godfather of AI' about the risks involved and if there's any way to avoid them.

AI development has reached a milestone known as the Turing Test, which means machines have the ability to converse with humans in a sophisticated fashion, Yoshua Bengio told CBC News.

The idea that machines can converse with us, and humans don't realize they are talking to an AI system rather than another person, is scary, he added.

Bengio worries the technology could lead to an automation of trolls on social media, as AI systems have already "mastered enough knowledge to pass as human."

"We are creating intelligent entities," he said.AI systems aren't as smart as humans on everything "right now" but that couldchange, Bengiocontinued.

"Are they going to behave well with us? There are a lot of questions that are very, very concerning and there's too much unknown."

WATCH | The stakes are too high to ignore rapid advances in AI:

Too many 'unknowns' about machines that can converse with us, says professor

Université de Montréal computer science professor Yoshua Bengio says the advent of machines that can converse with us is both scary and exciting, and the stakes are 'just too high' to ignore questions about how humans will interact with them.

A statement from CAIS criticized Meta, where the third godfather of AI, Yann LeCun, works, for not signing the letter.

Bengio and Elon Musk, along with more than 1,000 other experts and industry executives, had already cited potential risks to society in April.

Last week, Altman referred to the EU AI Act, the first effort to create a regulation for AI, as over-regulation and threatened to leave Europe. He reversed his stance within days after criticism from politicians.

European Commission president Ursula von der Leyen will meet Altman on Thursday.

AI regulation 'still playing catch up'

Not everyone believes AI is an existential threat, at least not yet.

"I think there are incredibly pressing practical ramifications of AI that affect people negatively,that I think we don't yet have good solutions for," said RahulKrishnan, an assistant professor at the University of Toronto's department of computer science.

He believes there is a need for "responsible AI," which includes "having a set of principles that users and developers of machine learning models agree on."

Krishnan said AI regulation is "still playing catch up" but there needs to be a "good balance" to ensure technologies are developed and used safely without hindering improvements.

WATCH | Could AI already be outsmarting humans:

Is AI moving too fast?

April 11, 2023 | There is growing concern that AI models could already be outsmarting humans. Some experts are calling for a 6-month pause. Also: how scammers can use 'deep voice' AI technology to trick you. Plus the phony AI images that went viral.

However, he sees the potential for "biases" to affect how machine learning algorithms are programmed.

He offered the example of AI being used to determine who should be approved for a credit card. If an AI tool is trained to work with data about past lending decisions that already "have a degree of bias," he said, the algorithm could further perpetuate that bias in its predictions.
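
For illustration only, here is a minimal sketch in Python of the mechanism Krishnan describes. Everything in it is invented for the example: the synthetic applicants, the group labels, the 640 credit-score cutoff, the unequal 0.9/0.6 historical approval rates and the use of scikit-learn's LogisticRegression are assumptions, not details from any real lending system or from the article.

# Sketch: a model trained on biased historical lending decisions
# reproduces that bias in its own predictions. All data is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 20_000

# Two demographic groups with identical credit-score distributions.
group = rng.integers(0, 2, size=n)        # 0 = group A, 1 = group B
score = rng.normal(650, 50, size=n)

# Simulated history: qualified group-B applicants were approved less
# often than equally qualified group-A applicants.
qualified = score > 640
past_approved = qualified & (rng.random(n) < np.where(group == 0, 0.9, 0.6))

def features(score, group):
    # Crude manual scaling of the score so the solver converges quickly;
    # group membership (or a proxy for it) is included as a feature.
    return np.column_stack([(score - 650.0) / 50.0, group])

model = LogisticRegression().fit(features(score, group), past_approved)

# Two identical applicants who differ only in group membership:
probs = model.predict_proba(features(np.array([700.0, 700.0]),
                                     np.array([0, 1])))[:, 1]
print(probs)  # the group-B applicant gets a noticeably lower approval probability

In this toy setup the two applicants are identical except for group membership, yet the model assigns the group-B applicant a lower approval probability because that is the pattern it absorbed from the biased historical labels; simply dropping the group column does not automatically fix this when other features act as proxies for it.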

Luke Stark, who studies the social, ethical and cultural impacts of AI at Western University in London, Ont., agreed. If the data AI systems are using exhibit historical bias around race or gender, it's going to get exacerbated, built up and further expressed through the system, Stark said.

"I think it's a real danger that we're facing today and that's already affecting marginalized communities. You know, people in society who often have the least say about how computing works and how computers are designed," he said.

WATCH | Those most adversely affected by AI should be helping to create AI policy:

Marginalized voices must be heard when creating AI policy, professor says

Western University assistant professor Luke Stark says members of communities most adversely affected by artificial intelligence bias should be at the centre of discussions about creating AI policy.

Stark, however, believes the warnings about an existential threat from AI have gone overboard, at least for now.

"From my perspective, it's these everyday real-world, real-life cases of contemporary AI systems being used to control different groups in society that are not getting as much attention."

With files from Anand Ram, Anis Heydari, Meegan Read, Reuters