
As new AI ChatGPT earns hype, cybersecurity experts warn about potential malicious uses

As ChatGPT earns hype for its ability to solve complex problems, write essays, and perhaps help diagnose medical conditions, more nefarious uses of the chatbot are coming to light in dark corners of the internet.

OpenAI chatbot refuses certain requests, but some users have discovered workarounds

IT security company Check Point says it has found instances of ChatGPT users boasting on hacker forums about using the chatbot to write malicious code. (Pixel-Shot/Adobe Stock)


Since its public beta launch in November, ChatGPT has impressed humans with its ability to imitate their writing: drafting resumes, crafting poetry, and completing homework assignments in a matter of seconds.

The artificial intelligence program, created by OpenAI, allows users to type in a question or a task, and the software will come up with a response designed to mimic a human. It's a large language model, trained on an enormous amount of data, which helps it provide sophisticated answers to users' questions and prompts.

It can also write programming code, making the AI a potential time-saver for software developers, programmers, and others in IT, including cybercriminals who could use the bot's skills for malevolent purposes.

Cybersecurity company Check Point Software Technologies says it has identified instances where ChatGPT was successfully prompted to write malicious code that could potentially steal computer files, run malware, phish for credentials or encrypt an entire system in a ransomware scheme.

Check Point said cybercriminals, some of whom appeared to have limited technical skill, had shared their experiences using ChatGPT, and the resulting code, on underground hacking forums.

A new artificial intelligence tool called ChatGPT, released Nov. 30 by San Francisco-based OpenAI, allows users to ask questions and assign tasks. (CBC)

"We're finding that there are a number of less-skilled hackers or wannabe hackers who are utilizing this tool to develop basic low-level code that is actually accurate enough and capable enough to be used in very basic-level attacks," Rob Falzon, head of engineering at Check Point, told CBC News.

In its analysis, Check Point said it was not clear whether the threat was hypothetical, or if bad actors were already using ChatGPT for malicious purposes.

Other cybersecurity experts told CBC News the chatbot had the potential to make it faster and easier for experienced hackers and scammers to carry out cybercrimes, if they could figure out the right questions to ask the bot.

WATCH | Cybercriminals using ChatGPT to write malicious code (duration 1:32):

Rob Falzon, head of engineering at Check Point Software Technologies, says cybercriminals have discovered ways to use ChatGPT to generate code that could be used in cyberattacks.

Tricking the bot

ChatGPT has content-moderation measures to prevent it from answering certain questions, although OpenAI warns the bot will "sometimes respond to harmful instructions or exhibit biased behaviour." It can also give "plausible-sounding but incorrect or nonsensical answers."

Check Point researchers last month detailed how they had simply asked ChatGPT to write a phishing email and create malicious code, and the bot complied. (Today, a request for a phishing email prompts a lecture about ethics and a list of ways to protect yourself online.)

ChatGPT has content-moderation measures to prevent it from answering certain questions; instead, it admonishes the user and provides information about why the request was inappropriate. (CBC News)

Other users have found ways to trick the bot into giving them information, such as by telling ChatGPT that its guidelines and filters had been deactivated, or by asking it to complete a conversation between two friends about banned subject matter.

Those measures appear to have been refined by OpenAI over the past six weeks, said Hadis Karimipour, an associate professor and Canada Research Chair in secure and resilient cyber-physical systems at the University of Calgary.

"At the beginning, it might have been a lot easier for you to not be an expert or have no knowledge [of coding], to be able to develop a code that can be used for malicious purposes. But now, it's a lot more difficult," Karimipour said.

"It's not like everyone can use ChatGPT and become a hacker."

Opportunities for misuse

But she warns there is potential for experienced hackers to use ChatGPT to speed up "time-consuming tasks," like generating malware or finding vulnerabilities to exploit.

ChatGPT's output was unlikely to be useful for "high-level" hacks, said Aleksander Essex, an associate professor of software engineering who runs Western University's information security and privacy research laboratory in London, Ont.

"These are going to be sort of lower-grade cyber attacks. The really good stuff really still requires that thing that you can't get with AI, and that is human intelligence, and intuition and, just frankly, sentience."

He points out that ChatGPT is trained on information that already exists on the open internet; it just takes the work out of finding that information. The bot can also give very confident but completely wrong answers, meaning users need to double-check its work, which could prove a challenge to the unskilled cybercriminal.

"The code may or may not work. It might be syntactically valid, but it doesn't necessarily mean it's going to break into anything," Essex said. "Just because it gives you an answer doesn't mean it's useful."

ChatGPT has, however, proven its ability to quickly craft convincing phishing emails, which may pose a more immediate cybersecurity threat, said Benjamin Tan, an assistant professor at the University of Calgary who specializes in computer systems engineering, cybersecurity and AI.

"It's kind of easy to catch some of these emails because the English is a little bit weird. Suddenly, with ChatGPT, the type of writing just appears better, and maybe we'll have a bit more risk of tricking people into clicking links you're not supposed to," Tan said.

The Canadian Centre for Cyber Security would not comment on ChatGPT specifically, but said it encouraged Canadians to be vigilant of all AI platforms and apps, as "threat actors could potentially leverage AI tools to develop malicious tools for nefarious purposes," including for phishing.

Using ChatGPT for good

On the other side of the coin, experts also see ChatGPT's potential to help organizations improve their cybersecurity.

"If you're the company, you have the code base, you might be able to use these systems to sort of self-audit your own vulnerability to specific attacks," said Nicolas Papernot, an assistant professor at the University of Toronto, who specializes in security and privacy in machine learning.

"Before, you had to invest a lot of human hours to read through a large amount of code to understand where the vulnerability is ... It's not replacing the [human] expertise, it's shifting the expertise from doing certain tasks to being able to interact with the model as it helps to complete these specific tasks."

WATCH | Expert says ChatGPT unlikely to be used for 'high-level exploits' (duration 1:05):

OpenAI's chatbot ChatGPT draws on information that's already available on the open internet; it simply speeds up the process of finding it, says Aleksander Essex, an associate professor at Western University in London, Ont.

At the end of the day, ChatGPT's output, whether good or bad, will depend on the intent of the user.

"AI is not a consciousness. It's not sentient. It's not a divine thing," Essex said. "At the end of the day, whatever this is, it's still running on a computer."

OpenAI did not respond to a request for comment.

Bearing in mind that a computer program does not represent the official company position, CBC News typed its questions for the company into ChatGPT.

Asked about OpenAI's efforts to prevent ChatGPT being used by bad actors for malicious purposes, ChatGPT responded: "OpenAI is aware of the potential for its language models, including ChatGPT, to be used for malicious purposes."

OpenAI had a team dedicated to monitoring its use who would revoke access for organizations or individuals found to be misusing it, ChatGPT said. The team was also working with law enforcement to investigate and shut down malicious use.

"It is important to note that even with these efforts, it is impossible to completely prevent bad actors from using OpenAI's models for malicious purposes," ChatGPT said.