
Hackers could use AI to automate attacks, crash cars and drones

Rapid advances in artificial intelligence are raising risks that malicious users will soon exploit the technology to mount automated hacking attacks, cause driverless car crashes or turn commercial drones into targeted weapons, a new report warns.

AI could also generate images to impersonate others online and sway public opinion, report says

A man takes part in a hacking contest during the Def Con hacker convention in Las Vegas, Nevada, U.S. on July 29, 2017. A new report says AI could soon be used to mount automated hacking attacks. (Steve Marcus/Reuters)


The study, published on Wednesday by 25 technical and public policy researchers from Cambridge, Oxford and Yale universities along with privacy and military experts, sounded the alarm for the potential misuse of AI by rogue states, criminals and lone-wolf attackers.

The researchers said the malicious use of AI poses imminent threats to digital, physical and political security by allowing for large-scale, finely targeted, highly efficient attacks. The study focuses on plausible developments within five years.

"We all agree there are a lot of positive applications ofAI," Miles Brundage, a research fellow at Oxford's Future ofHumanity Institute. "There was a gap in the literature aroundthe issue of malicious use."

Artificial intelligence, or AI, involves using computers to perform tasks normally requiring human intelligence, such as making decisions or recognizing text, speech or visual images.

It is considered a powerful force for unlocking all manner of technical possibilities, but has become a focus of strident debate over whether the massive automation it enables could result in widespread unemployment and other social dislocations.

Hackers could use AI to cause driverless car crashes, the new report warns. (Syda Productions/Shutterstock)

The 98-page paper cautions that the cost of attacks may be lowered by the use of AI to complete tasks that would otherwise require human labour and expertise. New attacks may arise that would be impractical for humans alone to develop or which exploit the vulnerabilities of AI systems themselves.

It reviews a growing body of academic research about the security risks posed by AI and calls on governments and policy and technical experts to collaborate and defuse these dangers.

Impersonation threat

The researchers detail the power of AI to generate synthetic images, text and audio to impersonate others online, in order to sway public opinion, noting the threat that authoritarian regimes could deploy such technology.

The report makes a series of recommendations, including regulating AI as a dual-use military/commercial technology.

It also asks questions about whether academics and others should rein in what they publish or disclose about new developments in AI until other experts in the field have a chance to study and react to potential dangers they might pose.

"We ultimately ended up with a lot more questions thananswers," Brundage said.

The paper was born of a workshop in early 2017, and some of its predictions essentially came true while it was being written. The authors speculated AI could be used to create highly realistic fake audio and video of public officials for propaganda purposes.

Late last year, so-called "deepfake" pornographic videos began to surface online, with celebrity faces realistically melded to different bodies.

"It happened in the regime of pornography rather thanpropaganda," said Jack Clark, head of policy at OpenAI, thegroup founded by Tesla Inc CEO Elon Musk and SiliconValley investor Sam Altman to focus on friendly AI that benefitshumanity. "But nothing about deepfakes suggests it can't beapplied to propaganda."