
When algorithms go bad: Online failures show humans are still needed

Over the past two weeks, there have been some serious fails with algorithms, the formulas or sets of rules used in digital decision-making processes. Now, people are questioning whether we're putting too much trust in these digital systems.

Disturbing events at Facebook, Instagram and Amazon reveal the importance of context

Sheryl Sandberg, Facebook chief operating officer, speaks in Washington in 2016. After some of the company's ad targeting was revealed recently as racist, Sandberg said the company 'never intended or anticipated this functionality being used this way and that is on us.' (Associated Press)

Social media companies rely on algorithms to try to match their users with content that might interest them. But what happens when that process goes haywire?


As companies seek solutions, there's one clear standout: the algorithms making the automated decisions that shape our online experiences require more human oversight.

The first case in a recent string of incidents involved Facebook's advertising back end, after it was revealed that people who bought ads on the social network were able to target them at self-described anti-Semites.

Disturbingly, the social media giant's ad-targeting tool allowed companies to show ads specifically to people whose Facebook profiles used language like "Jew hater" or "How to burn Jews."

If Facebook's racist ad-targeting weren't cause enough for concern, right on the heels of that investigation, Instagram was caught using a post that included a rape threat to promote itself.
A journalist makes a video of the Instagram logo. After a female Guardian reporter received a threatening email, she took a screen grab of the hateful message and posted it to her Instagram account. The image-sharing platform then turned it into an advertisement, targeted to her friends and family members. (Associated Press)

After a female Guardian reporter received a threatening email that read, "I will rape you before I kill you, you filthy whore!" she took a screen grab of the hateful message and posted it to her Instagram account. The image-sharing platform then turned the screen shot into an advertisement, targeted to her friends and family members.

Like to build a bomb?

And lest it seem social media companies are the only ones afflicted by this rash of algorithms gone rogue, Amazon's recommendation engine may have been helping people buy bomb-making ingredients together.

Just as the online retailer's "frequently bought together" feature might suggest you purchase salt after you've put an order of pepper in your shopping cart, when users purchased household items used in homemade bomb building, the site suggested they might be interested in buying other bomb ingredients.
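That feature can be approximated with nothing more than pair counting over past orders. The Python sketch below, using invented order data, shows how such a recommender surfaces whatever items happen to co-occur, with no notion of what the items actually are:

    from collections import Counter
    from itertools import combinations

    # Invented order history: each order is the set of items bought together.
    orders = [
        {"salt", "pepper"},
        {"salt", "pepper", "olive oil"},
        {"chemical A", "chemical B", "wiring"},
        {"chemical A", "chemical B"},
    ]

    # Count how often each pair of items appears in the same order.
    pair_counts = Counter()
    for order in orders:
        for pair in combinations(sorted(order), 2):
            pair_counts[pair] += 1

    def frequently_bought_with(item, top_n=3):
        """Suggest the items most often purchased alongside `item`."""
        scores = Counter()
        for (a, b), count in pair_counts.items():
            if item == a:
                scores[b] += count
            elif item == b:
                scores[a] += count
        return [other for other, _ in scores.most_common(top_n)]

    print(frequently_bought_with("salt"))        # ['pepper', 'olive oil']
    print(frequently_bought_with("chemical A"))  # ['chemical B', 'wiring']

The counts alone drive the suggestions; nothing in this logic asks what the items are, which is how innocuous pairings and dangerous ones end up treated exactly the same way.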

So what do these mishaps have to do with algorithms?

The common element in all three incidents is that the decision-making was done by machines, highlighting the problems that can arise when major tech firms rely so heavily on automated systems.


"Driven by financial profit, many of the algorithms are operationalized to increase user engagement and improve user experience," says Jenna Jacobson, a postdoctoral fellow at Ryerson's Social Media Lab.

"On these free platforms, you and your data are often the product, which is why it makes financial sense for the platforms to create a personalized experience that keeps you the user engaged longer, contributing dataand staying happy."

The goal is to try to match users with content or ads based on their interests, in the hope of providing a more personalized experience or more useful information.
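Stripped of the machine-learning machinery real platforms use, that matching logic reduces to scoring content against a user's recorded interests. The Python sketch below is hypothetical; the interests, tags and overlap-based scoring are invented for illustration:

    # Hypothetical interest matching: rank content by how many of a user's
    # recorded interests overlap with each item's tags.
    user_interests = {"cycling", "photography", "travel"}

    content_items = [
        {"title": "Best bike routes in the Rockies", "tags": {"cycling", "travel"}},
        {"title": "Choosing a camera lens", "tags": {"photography"}},
        {"title": "Quarterly market update", "tags": {"finance"}},
    ]

    def rank_for_user(interests, items):
        """Order items by tag overlap with the user's interests, best first."""
        return sorted(items,
                      key=lambda item: len(item["tags"] & interests),
                      reverse=True)

    for item in rank_for_user(user_interests, content_items):
        print(item["title"])

Everything in that ranking optimizes for relevance; nothing asks whether the matched content is appropriate, which is precisely the gap the recent incidents exposed.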

'Dependent on algorithms'

We've grown "dependent on algorithms to deliver relevant search results, the ability to intuit news stories or entertainment we might like," says Michael Geist, a professor at the University of Ottawa and Canada Research Chair in internet and e-commerce law.

These formulas, or automated rule sets, have also become essential in managing the sheer quantity of posts, content and users, as platforms like Facebook and Amazon have grown to mammoth global scales.
Amazon has over 300 million product pages on its U.S. site alone. Its recommendation engine may have been helping people buy bomb-making ingredients together. (Associated Press)

In the case of Amazon, which has over 300 million product pages on its U.S. site alone, algorithms are necessary to monitor and update recommendations effectively, because it's simply too much content for humans to process and stay on top of on a daily basis.

But as Geist notes, the lack of transparency associated with these algorithms can lead to the problematic scenarios we're witnessing.

Harder to sidestep criticism

In the case of Facebook's racist ad-targeting, it's not that the company has been accused of intentionally setting up an anti-Semitic demographic.

Rather, the concern is that, lacking the right filters or contextual awareness, the algorithms that developed the list of targetable demographics based on people's self-described occupations identified "Jew haters" as a valid population grouping, in direct conflict with company standards.
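A much-simplified, hypothetical sketch of how such a grouping can emerge: if targetable categories are built by aggregating whatever users type into free-text profile fields, then any phrase entered by enough people becomes a category unless something screens it first. The profile entries, placeholder phrase and audience threshold below are all invented:

    from collections import Counter

    # Invented self-described profile fields; the placeholder stands in
    # for the kind of abusive entries described above.
    profile_fields = [
        "teacher", "teacher", "teacher", "nurse", "engineer",
        "<hateful phrase>", "<hateful phrase>", "<hateful phrase>",
    ]

    MIN_AUDIENCE = 3  # hypothetical cutoff for a targetable category

    # Naive aggregation: any phrase typed by enough people becomes an
    # ad-targetable demographic, with no check on what the phrase means.
    targetable = {field for field, n in Counter(profile_fields).items()
                  if n >= MIN_AUDIENCE}
    print(sorted(targetable))  # ['<hateful phrase>', 'teacher']

    # The fix the companies have described amounts to a contextual filter,
    # e.g. a human-reviewed blocklist applied before a category goes live.
    blocklist = {"<hateful phrase>"}
    approved = targetable - blocklist
    print(sorted(approved))  # ['teacher']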

While the likes of Amazon, Facebook and Instagram have been able to talk in circles around similar issues, citing freedom of speech or leaning heavily on the fact that they're not responsible for posted content, with this latest wave of controversies it's harder to sidestep criticism.

An Amazon rep responded by saying, "In light of recent events, we are reviewing our website to ensure that all these products are presented in an appropriate manner."
Workers speak in front of a booth at a Facebook conference in San Jose, Calif., in April. The social media giant's ad-targeting tool allowed companies to show ads specifically to people whose Facebook profiles used language like 'Jew hater' or 'How to burn Jews.' (Associated Press)

Facebook's chief operating officer Sheryl Sandberg called the algorithmic mishap a fail on their part, adding they "never intended or anticipated this functionality being used this way and that is on us." That's a remarkable admission of their role in users' experiences on the site, given the social giant's long-standing hesitancy to take responsibility for how content is delivered on the platform.

The companies were also quick to state their commitment to fixing their algorithms, notably by adding more human oversight to their digitally managed processes.

And that is the punchline, or perhaps the silver lining, in all these cases: at least at this stage, the only way to keep these algorithms in check is to have more humans working alongside them.

A philosophical shift

"I think the tide is changing in this area, with increased demands for algorithmic transparency and greater human involvement to avoid the problematic outcomes we've seen in recent weeks," says Geist.

But real change is going to require a philosophical shift.

Up to now, companies have focused on growth and scaling, and to accommodate their massive sizes they have turned to algorithms.

As Jacobson notes, "algorithms do not exist in isolation." As long as we rely solely on algorithmic oversight of things like ad targeting, ad placement and suggested purchases, we'll see more of these disturbing scenarios, because while algorithms might be good at managing decision-making on a massive scale, they lack the human understanding of context and nuance.