How Tech Algorithms Become Infected With the Biases of Their Creators
- by 7wData
Bias in artificial intelligence is everywhere. At one point, when you Googled "doctor," Google's search algorithm returned 50 images of white men. But when governments use biased algorithms to dispatch police or surveil the public, the stakes can become a matter of life and death.
On Tuesday, OneZero reported that Banjo CEO Damien Patton had associated with members of the Ku Klux Klan in his youth, and was involved in the shooting of a Tennessee synagogue.
Banjo’s product, which is marketed to law enforcement agencies, analyzes audio, video, and social media in real time, using artificial intelligence algorithms to determine what is worthy of police attention and what is not.
To what extent do the decisions of these types of algorithms reflect the conscious or unconscious biases of their creators?
The most common type of artificial intelligence used by tech companies today is deep learning, a technique that analyzes data by breaking it down into smaller, simpler pieces and finding patterns among them. The bigger the dataset, the better an algorithm becomes at recognizing those patterns.
Say you’re developing a program to identify pets. If you’re a dog person, you might train the algorithm on a million pictures of dogs but only, say, 1,000 pictures of cats. The algorithm’s idea of what cats look like will ultimately be far less fully formed, increasing the likelihood that it will misidentify them. That, in a nutshell, is A.I. bias: poorly collected or poorly designed datasets that reflect human biases and eventually shape real-world outcomes.
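The dogs-versus-cats effect is easy to reproduce. The sketch below is a toy illustration, not any company's actual system: it uses entirely made-up synthetic data and a simple 1-nearest-neighbor rule, and trains on 1,000 "dog" examples but only 10 "cat" examples. Because the sparse class is barely represented, the classifier misidentifies far more cats than dogs.

```python
# Toy illustration of dataset imbalance (hypothetical synthetic data).
# Train a 1-nearest-neighbor classifier on 1,000 "dog" points but only
# 10 "cat" points, then measure errors on a balanced test set.
import random

random.seed(0)

def sample(center, n):
    """Draw n 2-D points normally distributed around a class center."""
    return [(center[0] + random.gauss(0, 1.0),
             center[1] + random.gauss(0, 1.0)) for _ in range(n)]

# Imbalanced training set: the "dog person's" photo collection.
train = [(p, "dog") for p in sample((0, 0), 1000)] + \
        [(p, "cat") for p in sample((3, 3), 10)]

def predict(point):
    """Return the label of the closest training example."""
    return min(train,
               key=lambda t: (t[0][0] - point[0]) ** 2 +
                             (t[0][1] - point[1]) ** 2)[1]

# Balanced test set: 200 of each class.
test = [(p, "dog") for p in sample((0, 0), 200)] + \
       [(p, "cat") for p in sample((3, 3), 200)]

errors = {"dog": 0, "cat": 0}
for p, label in test:
    if predict(p) != label:
        errors[label] += 1

print("dog errors:", errors["dog"], "cat errors:", errors["cat"])
```

Because the 1,000 dog examples densely cover their region of the feature space while the 10 cat examples leave large gaps, ambiguous cats almost always sit closer to a dog example and get mislabeled; the fix is not a cleverer model but a more representative dataset.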
“Personal prejudices are all present in the room where choices about which systems get built and which don’t are made, which data is used… and how to determine whether the system is working well or not,” says Meredith Whittaker, cofounder of AI Now, a research institute that studies the societal impact of A.I.
Small choices, like which dataset is used to recognize a specific event to display on a crime dashboard, can scale into discriminatory practices.
One of the most infamous examples of bias in artificial intelligence emerged in 2015, when Google added a feature to Google Photos that sorted images based on what an algorithm thought was in them. The feature was mostly innocuous. It recognized dogs and flowers and buildings. But it also classified people with darker skin as gorillas.
“I understand HOW this happens; the problem is moreso [sic] on the WHY,” programmer Jacky Alcine, who first tweeted about the algorithm’s mistake, wrote on Twitter.
Google said it was “appalled” and issued what it called a temporary fix: the app simply stopped applying the “gorilla” tag. But the biased outcome proved hard to disentangle. More than two years later, search results for “gorilla” and “monkey” were still censored.
Exactly what caused the mistake remains unknown.