What Do We Do About the Biases in AI?

Over the past few years, society has started to wrestle with just how much human biases can make their way into artificial intelligence systems—with harmful results. At a time when many companies are looking to deploy AI systems across their operations, being acutely aware of those risks and working to reduce them is an urgent priority.

What can CEOs and their top management teams do to lead the way on bias and fairness? We see six essential steps:

First, business leaders will need to stay up to date on this fast-moving field of research.

Second, when your business or organization is deploying AI, establish responsible processes that can mitigate bias. Consider using a portfolio of technical tools, as well as operational practices such as internal “red teams” or third-party audits.

Third, engage in fact-based conversations around potential human biases. This could take the form of running algorithms alongside human decision makers, comparing results, and using “explainability techniques” that help pinpoint what led the model to reach a decision, in order to understand why the two may differ (a minimal sketch of one such technique follows this list).

Fourth, consider how humans and machines can work together to mitigate bias, including with “human-in-the-loop” processes.

Fifth, invest more in bias research, make more data available for it (while respecting privacy), and take a multidisciplinary approach to continue advancing the field.

Finally, invest more in diversifying the AI field itself. A more diverse AI community would be better equipped to anticipate, spot, and review bias, and to engage the communities affected.
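To make step three concrete, here is a minimal sketch of one such explainability technique, permutation importance: shuffle each input feature in turn and measure how much the model’s accuracy drops. The dataset, column names, and model below are hypothetical stand-ins, and the sketch assumes pandas and scikit-learn are available and that the features are numeric.

```python
# Minimal permutation-importance sketch (hypothetical data and model).
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

df = pd.read_csv("historical_decisions.csv")   # hypothetical, numeric features
X = df.drop(columns=["approved"])              # inputs the model sees
y = df["approved"]                             # past human decisions

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature and record the drop in accuracy: features whose
# shuffling hurts most are the ones driving the model's decisions.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for name, score in sorted(zip(X.columns, result.importances_mean), key=lambda p: -p[1]):
    print(f"{name}: {score:+.3f}")
```

Comparing which features the model leans on against the factors human decision makers say they weigh is one concrete way to ground the conversation about where any differences come from.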

Human biases are well-documented, from implicit association tests that demonstrate biases we may not even be aware of, to field experiments that show how much these biases can affect outcomes. The pressing question, as noted above, is how readily those biases carry over into the AI systems now being deployed across companies’ operations.

The problem is not entirely new. Back in 1988, the UK Commission for Racial Equality found a British medical school guilty of discrimination: the computer program it used to decide which applicants would be invited for interviews was biased against women and those with non-European names. Yet the program had been developed to match human admissions decisions, and it did so with 90 to 95 percent accuracy. What’s more, the school admitted a higher proportion of non-European students than most other London medical schools. Using an algorithm didn’t cure biased human decision-making, but simply returning to human decision-makers would not have solved the problem either.

Thirty years later, algorithms have grown considerably more complex, but we continue to face the same challenge. AI can help identify and reduce the impact of human biases, but it can also make the problem worse by baking in and deploying biases at scale in sensitive application areas. For example, as the investigative news site ProPublica has found, a criminal justice algorithm used in Broward County, Florida, mislabeled African-American defendants as “high risk” at nearly twice the rate it mislabeled white defendants. Other research has found that training natural language processing models on news articles can lead them to exhibit gender stereotypes.
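The disparity ProPublica reported is the kind of thing a routine audit can surface. Below is a minimal sketch of a false-positive-rate check by group; the file name and the columns “group”, “predicted_high_risk”, and “reoffended” are hypothetical placeholders rather than the actual Broward County data.

```python
# Minimal false-positive-rate audit sketch (hypothetical scored dataset).
import pandas as pd

df = pd.read_csv("risk_scores.csv")  # hypothetical columns: group, predicted_high_risk, reoffended

def false_positive_rate(frame: pd.DataFrame) -> float:
    """Share of people who did not reoffend but were labeled high risk."""
    negatives = frame[frame["reoffended"] == 0]
    return float((negatives["predicted_high_risk"] == 1).mean())

rates = {group: false_positive_rate(frame) for group, frame in df.groupby("group")}
for group, rate in rates.items():
    print(f"{group}: {rate:.2f}")

# A ratio near 2x between groups is the scale of disparity ProPublica described.
print("max/min ratio:", max(rates.values()) / min(rates.values()))
```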

Bias can creep into algorithms in several ways. AI systems learn to make decisions from training data, which can include biased human decisions or reflect historical and social inequities, even when sensitive variables such as gender, race, or sexual orientation are removed. Amazon, for example, stopped using a hiring algorithm after finding that it favored applicants based on words like “executed” or “captured,” which were more commonly found on men’s resumes.
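Dropping the sensitive column is not enough because other features can act as proxies for it. The sketch below illustrates the point on hypothetical resume data loosely modeled on the Amazon example: a nominally neutral keyword flag can stay strongly correlated with the removed gender attribute, so a model trained without that column can still reproduce the historical bias.

```python
# Minimal proxy-leakage sketch (hypothetical resume data and column names).
import pandas as pd

df = pd.read_csv("resumes.csv")  # hypothetical columns: gender, hired, uses_word_executed, ...

# The sensitive attribute is removed from the training features ...
X = df.drop(columns=["gender", "hired"])
print("training columns:", list(X.columns))

# ... but a "neutral" keyword flag may still encode much of it.
proxy = df["uses_word_executed"].astype(int)
is_male = (df["gender"] == "male").astype(int)
print("correlation with removed attribute:", round(proxy.corr(is_male), 2))

# If the historical "hired" label favored resumes containing that word, a model
# fit on X will learn the same preference without ever seeing the gender column.
```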
