Companies place ‘bias bounties’ on AI algorithms

Discussion
Nov 08, 2021

At least five large companies will introduce “bias bounties,” hacker competitions to identify bias in artificial intelligence (AI) algorithms, predicts Forrester’s just-released “North American Predictions 2022” report.

Bias bounties are modeled on bug bounties, which reward hackers or coders (often outside the organization) who find security vulnerabilities in software. In late July, Twitter launched the first major bias bounty and awarded $3,500 to a student who demonstrated that its image-cropping algorithm favors lighter, slimmer and younger faces.

“Finding bias in machine learning (ML) models is difficult, and sometimes, companies find out about unintended ethical harms once they’ve already reached the public,” wrote Rumman Chowdhury, director of Twitter META, in a blog entry. “We want to change that.”

Coders have been unearthing biases in AI-driven algorithms on social media since a programmer in 2015 called out a search feature of the Google Photos app that mistakenly tagged photos of Black people as gorillas.

Twitter in May admitted its automatic cropping algorithm repeatedly cropped out Black faces in favor of white ones and favored men over women.

AI biases can affect which advertisements or products a person is shown online or the recommendations they receive on Netflix, but they can also lead to prejudicial outcomes in job hiring, loan applications, health care decisions and criminal justice.

Machine learning algorithms can pick up the covert or overt biases of their human developers. Biases are also often traced to training on historical data that already reflects past discrimination.

Companies using AI claim to be taking steps to use more representative training data and to regularly audit their systems to check for unintended bias and disparate impact against certain groups.
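One common way such audits quantify “disparate impact” is the ratio of each group’s selection rate to the most-favored group’s rate, with ratios below 0.8 (the EEOC’s “four-fifths rule”) treated as a red flag. Below is a minimal sketch of that calculation in Python, using hypothetical hiring data; the group labels, outcomes and function name are illustrative, not drawn from any company’s actual auditing tooling:

```python
from collections import Counter

def disparate_impact(outcomes, groups, positive="selected"):
    """Ratio of each group's positive-outcome rate to the highest group's rate.

    A ratio below 0.8 (the "four-fifths rule") is a common red flag.
    """
    totals = Counter(groups)
    positives = Counter(g for g, o in zip(groups, outcomes) if o == positive)
    rates = {g: positives[g] / totals[g] for g in totals}
    best = max(rates.values())
    return {g: rate / best for g, rate in rates.items()}

# Hypothetical outcomes for two applicant groups of ten people each:
# group A has 8 of 10 selected, group B only 4 of 10.
groups = ["A"] * 10 + ["B"] * 10
outcomes = (["selected"] * 8 + ["rejected"] * 2
            + ["selected"] * 4 + ["rejected"] * 6)

ratios = disparate_impact(outcomes, groups)
# Group B's selection rate is half of group A's, failing the 0.8 threshold.
```

A single ratio like this will not catch every harm, but it is the kind of simple, checkable metric a bounty hunter or internal auditor can compute from a model’s observed decisions alone.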

Forrester predicted that in 2022 other major tech companies, such as Google and Microsoft, will implement bias bounties, as will non-technology companies such as banks and healthcare firms.

Wrote Forrester in its predictions report, “AI professionals should consider using bias bounties as a canary in the coal mine for when incomplete data or existing inequity may lead to discriminatory outcomes from AI systems. With trust high on the agenda of stakeholders, organizations will have to drive decision-making based on levers of trust such as accountability and integrity, making bias elimination ever more critical.”

DISCUSSION QUESTIONS: What are the pros and cons of using bounties to root out bias in artificial intelligence (AI) algorithms? Do you see any other newer actions that hold greater promise to reduce AI-bias?

Braintrust
"With the 'it takes a village' mentality, crowdsourcing to make an algorithm better is a good idea since AI is not infallible."
"Finally – a ray of hope for digital interfaces!"
"Opening up algorithms and their inherent biases (because they are human creations) won’t solve all the problems, but it’s a great step in the right direction."

9 Comments on "Companies place ‘bias bounties’ on AI algorithms"
Liz Crawford
BrainTrust

Finally – a ray of hope for digital interfaces!

We have seen protections for AI in the Robots’ Bill of Rights – rethought as recently as 2019. And of course, we have all sorts of protections for companies. But now there may be long-needed initiatives to protect consumers from the invisible hand of Artificial Intelligence.

I could see these kinds of cases ultimately hitting the courts, with a seller or platform arguing that their algorithm is legal versus a representative of consumers claiming harm.

Gary Sankary
BrainTrust

On the surface this strikes me as a very innovative and speedy way to uncover potential issues with biases that affect specific populations. Having outsiders participate in finding these issues has to be more efficient than trying to do it with internal resources, who I suspect bring their own biases about their company’s software to the problem.

Gene Detroyer
BrainTrust

Absolutely. How many times do you proof your own work and still miss some of the typos or word choices? They need new eyes and many eyes to take on the complexity of this challenge.

Suresh Chaganti
BrainTrust

Not many realize that the AI we see is based on decisions made by humans: the training set they choose, the assumptions they make and the conclusions they draw from their hypotheses. Humans make numerous decisions before AI-driven models make their way into production and reach everyday users.

As such, there is a high chance of bias creeping in. It is especially concerning where people are directly impacted, as in recruitment or in shaping perspectives through recommended news and articles.

Much like paying ethical hackers to find bugs, paying a bounty to detect bias is a welcome form of self-regulation.

DeAnn Campbell
BrainTrust

AI is designed to learn and emulate human behavior, so it stands to reason that the more human hands involved in shaping the algorithms the better. Bias is especially hard to suss out because it’s embedded in the hundreds of subconscious words and thoughts we’ve acquired over multiple generations. It’s going to take conscious awareness to root out these subtleties, and bounties will help build better digital intelligence for the good of all.

Jenn McMillen
BrainTrust

With the “it takes a village” mentality, crowdsourcing to make an algorithm better is a good idea since AI is not infallible.

Jeff Weidauer
BrainTrust

Opening up algorithms and their inherent biases (because they are human creations) won’t solve all the problems, but it’s a great step in the right direction.

Melissa Minkow
BrainTrust

I only see pros to using bias bounties – I just wish we could get to a place where they aren’t needed. Until then, this is an important measure to ensure that technology doesn’t perpetuate discrimination. These efforts are a prime example, like autonomous vehicles, of how technology can make the world safer by working against the inherent flaws of humans.

David Spear
BrainTrust

This is a great way to police potentially biased insights being delivered to enterprises and consumers. Moreover, as companies expand the use of AI-based algorithms across more of their business operations, I could see an entire cottage industry not only emerge, but also mature into money-making enterprises in the future.


Take Our Instant Poll

How confident are you that bias bounties will reduce bias in artificial intelligence algorithms?
