
New tool aims to harness the power of AI to combat internet hate against Indigenous people

"We're trying to make the internet a kinder place. We're trying to change the trajectory of the internet towards discriminated people," said Shani Gwin, founder of pipikwan pehtakwan.

While AI can generate hate, it can also prevent it, experts say

Hands type at a computer with a cup of coffee
Researchers are developing a Grammarly-like tool to help combat bias, racism, and hate towards Indigenous people online. (Emily Williams/CBC)

A new tool aims to use artificial intelligence to help make the internet a safer place for Indigenous people.

The project was given the name wasikan kisewatisiwin, which translates to "kind energy" in Cree.

"We're trying to make the internet a kinder place. We're trying to change the trajectory of the internet towards discriminated people," Shani Gwin told CBC's Radio Active.

Gwin is a Métis entrepreneur and founder of pipikwan pehtakwan, the Edmonton-based Indigenous communications firm leading the project.

The tool, being developed in collaboration with the Alberta Machine Intelligence Institute (Amii), is dual-purpose: it is intended to help both Indigenous people and non-Indigenous Canadians reduce racism, hate speech, and online bias.

The first function of the program is to moderate online spaces like comment sections. While the internet has been a tool used by Indigenous people for advocacy, it also can frequently be an unsafe space for communities that are discriminated against, Gwin said.

Gwin said all it takes is one hateful comment for an online space to fester.

The tool flags hateful comments, provides sample responses, and documents each instance for future reporting.

The second function is a writing plug-in similar to Grammarly. Intended to help Canadians understand their biases, it will flag writing that may be biased against Indigenous people, explain why, and suggest how to reword the sentence.
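Conceptually, that flag-explain-suggest loop can be sketched in a few lines of code. This is a toy illustration only: the flagged phrase, explanation, and suggestion below are invented placeholders, and the actual tool relies on a trained language model rather than a lookup table.

```python
# Toy sketch of a flag / explain / suggest pass over a draft text.
# The phrase list and advice are invented placeholders; the real tool
# uses a trained language model, not a hard-coded table like this.
FLAGGED_PHRASES = {
    "those people": (
        "Generalizing language can lump a community into a faceless group.",
        "Name the specific community or individuals you mean.",
    ),
}

def review_text(text: str) -> list[tuple[str, str, str]]:
    """Return (phrase, explanation, suggestion) for each flagged span."""
    findings = []
    lowered = text.lower()
    for phrase, (explanation, suggestion) in FLAGGED_PHRASES.items():
        if phrase in lowered:
            findings.append((phrase, explanation, suggestion))
    return findings

for phrase, why, fix in review_text("Those people need to move on already."):
    print(f"Flagged '{phrase}': {why} Suggestion: {fix}")
```

The point of the structure, not the contents: each flag carries both a reason and a constructive rewording, rather than a bare rejection.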

Woman with long dark hair curled up in an armchair
Shani Gwin is the founder of pipikwan pehtakwan, an Indigenous public relations agency focused on elevating Indigenous voices, projects and issues. (Submitted by Amii)

Ayman Qroon, an associate machine learning scientist with Amii, explained that the system works much like the AI chatbot ChatGPT: it is computer software trained to understand and generate human language.

"You can think of it as like teaching a child by showing them thousands of books and articles and blogs. And they eventually end up understanding the language and the knowledge embedded in that language."

Qroon then instructs the language model to classify a comment as hate speech or not, and to provide a rationale for its decision.
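That classify-and-explain instruction can be pictured as a structured prompt whose reply is machine-readable. The prompt wording, JSON schema, and canned reply below are assumptions for illustration; the article does not describe the project's actual prompts or model.

```python
import json

def build_prompt(comment: str) -> str:
    # Ask the model for a structured verdict plus a rationale,
    # mirroring the classify-and-explain step described above.
    return (
        "Classify the following comment as hate speech or not. "
        'Reply with JSON: {"label": "hate" | "not_hate", "rationale": "..."}'
        "\n\nComment: " + comment
    )

def parse_verdict(model_reply: str) -> tuple[str, str]:
    # Because the model is asked for JSON, its reply parses directly.
    data = json.loads(model_reply)
    return data["label"], data["rationale"]

# A canned reply stands in for a real language-model call here.
reply = '{"label": "not_hate", "rationale": "The comment is a neutral question."}'
label, rationale = parse_verdict(reply)
```

Requesting a rationale alongside the label is what lets the moderation side of the tool document *why* a comment was flagged, not just that it was.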

Bias feeds bias

But AI-powered tools can also generate hate and disinformation.

"AI right now is designed through the lens of Canada's dominant culture. And I would say that across the world that without input from racialized communities, including Indigenous people, AI cannot analyze and produce culturally safe and respectful content," Gwin said.

"Every piece of infrastructure in Canada has been developed from the white patriarchal lens," she said. "So more racialized people, more women need to get involved in the development of AI so that it doesn't continue to be built in a way that's going to harm us again."

Man with dark hair smiles, leaning against a tall table.
Ayman Qroon is an associate machine learning scientist at Amii working on developing wasikan kisewatisiwin. (Emily Williams/CBC)

Qroon said he is glad to see people questioning the underlying biases that may exist in AI.

"That means that we care and we're thinking about the problem."

"The truth is, these models just look to learn from the data that you show them. If the internet is biased, it will learn to be biased; if there is hate speech there, it will learn that as well."

AI bias revealed itself in training, Qroon said: at times during experiments, the model would try to minimize the tragedies that Indigenous people went through.

"And that's why it was very important for us to integrate the Indigenous community into this process and get their perspective and get the instructions from them."


The project has been selected as a semi-finalist for Massachusetts Institute of Technology's Solve 2024 Indigenous Communities Fellowship.

Gwin said her hope is that the project takes the emotional labour of education off Indigenous people and frees them up to do things besides moderating comment sections.

"I think there might be concerns that people think that this AI tool will take jobs away from Indigenous people, but it's not, that's not what it's for. It's there to do the work that we don't want to do."

"But it also means changing the internet and Canadians' hearts and minds about who Indigenous people are."