How Tech Algorithms Become Infected With the Biases of Their Creators

Bias in artificial intelligence is everywhere. At one point, a Google image search for “doctor” returned 50 images of white men. But when governments use biased algorithms to dispatch police or surveil the public, the stakes can become a matter of life and death.

On Tuesday, OneZero reported that Banjo CEO Damien Patton had associated with members of the Ku Klux Klan in his youth and had been involved in a shooting at a Tennessee synagogue.

Banjo’s product, which is marketed to law enforcement agencies, analyzes audio, video, and social media in real time, using artificial intelligence algorithms to determine what is worthy of police attention and what is not.

To what extent do the decisions of these types of algorithms reflect the conscious or unconscious biases of their creators?

The most common type of artificial intelligence used by tech companies today is deep learning, a technique that finds patterns in data by breaking it down into smaller, simpler pieces. Generally, the bigger the dataset, the better an algorithm gets at recognizing those patterns.

Say you’re developing a program to identify pets. If you’re a dog person, you might train the algorithm on a million pictures of dogs but only, say, 1,000 pictures of cats. The algorithm’s idea of what cats look like will end up far less fully formed, increasing the likelihood that it misidentifies them. That, in a nutshell, is A.I. bias: poorly collected or poorly designed datasets that reflect human biases and eventually shape real-world outcomes.
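The effect is easy to simulate. The toy script below is a sketch only: it stands in a simple nearest-neighbor vote for a real deep network, and every number and label in it is illustrative. It trains on 1,000 “dog” points but just 10 “cat” points drawn from overlapping clusters, then measures how often each class is misidentified on a balanced test set.

```python
import random
import math

random.seed(0)

def sample(mean, n):
    # Draw n two-dimensional points from a Gaussian cloud centered at `mean`.
    return [(random.gauss(mean[0], 1.5), random.gauss(mean[1], 1.5)) for _ in range(n)]

# Heavily imbalanced training set: many "dog" examples, very few "cat" examples.
train = [(p, "dog") for p in sample((0, 0), 1000)] + \
        [(p, "cat") for p in sample((3, 3), 10)]

def predict(x, k=5):
    # Label a point by majority vote among its k nearest training points.
    nearest = sorted(train, key=lambda t: math.dist(x, t[0]))[:k]
    labels = [label for _, label in nearest]
    return max(set(labels), key=labels.count)

# Balanced held-out test set: 200 of each class.
test = [(p, "dog") for p in sample((0, 0), 200)] + \
       [(p, "cat") for p in sample((3, 3), 200)]

def error_rate(cls):
    points = [(p, label) for p, label in test if label == cls]
    wrong = sum(predict(p) != label for p, label in points)
    return wrong / len(points)

dog_err, cat_err = error_rate("dog"), error_rate("cat")
print(f"dog error: {dog_err:.1%}, cat error: {cat_err:.1%}")
```

Because dog examples vastly outnumber cat examples, dog points crowd the neighborhood of even typical cats, so the underrepresented class is misidentified far more often, while dogs are almost never mistaken for cats.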

“Personal prejudices are all present in the room where choices about which systems get built and which don’t are made, which data is used… and how to determine whether the system is working well or not,” says Meredith Whittaker, co-founder of the AI Now Institute, a research organization that studies the societal impact of A.I.

Small choices, like which dataset is used to recognize a specific event to display on a crime dashboard, can scale into discriminatory practices.

One of the most infamous examples of bias in artificial intelligence emerged in 2015, when Google added a feature to Google Photos that sorted images based on what an algorithm thought was in them. The feature was mostly innocuous. It recognized dogs and flowers and buildings. But it also classified people with darker skin as gorillas.

“I understand HOW this happens; the problem is moreso [sic] on the WHY,” programmer Jacky Alcine, who first tweeted about the algorithm’s mistake, wrote on Twitter.

A Google spokesperson said the company was “appalled” and issued what it called a temporary fix: the app simply stopped labeling photos with the “gorilla” tag. But the underlying bias proved hard to disentangle. More than two years later, search results for “gorilla” and “monkey” were still censored.

It’s unknown what caused the mistake.
