Building Transparency into AI Projects
- by 7wData
As algorithms and AIs become ever more embedded in people’s lives, there’s also a growing demand for transparency around when an AI is used and what it’s being used for. That means communicating why an AI solution was chosen, how it was designed and developed, on what grounds it was deployed, how it’s monitored and updated, and the conditions under which it may be retired. There are four specific effects of building in transparency: 1) it decreases the risk of error and misuse, 2) it distributes responsibility, 3) it enables internal and external oversight, and 4) it expresses respect for people. Transparency is not an all-or-nothing proposition, however. Companies need to find the right balance with regard to how transparent to be with which stakeholders.
In 2018, one of the largest tech companies in the world premiered an AI that called restaurants and impersonated a human to make reservations. To “prove” it was human, the company trained the AI to insert “umms” and “ahhs” into its request: for instance, “When would I like the reservation? Ummm, 8 PM please.”
The backlash was immediate: journalists and citizens objected that people were being deceived into thinking they were interacting with another person, not a robot. People felt lied to.
The story is both a cautionary tale and a reminder: as algorithms and AIs become ever more embedded in people’s lives, there’s also a growing demand for transparency around when an AI is used and what it’s being used for. It’s easy to understand where this is coming from. Transparency is an essential element of earning the trust of consumers and clients in any domain. And when it comes to AI, transparency is not only about informing people when they are interacting with an AI, but also communicating with relevant stakeholders about why an AI solution was chosen, how it was designed and developed, on what grounds it was deployed, how it’s monitored and updated, and the conditions under which it may be retired.
Seen in this light, and contrary to the assumptions about transparency by many organizations, transparency is not something that happens at the end of deploying a model when someone asks about it. Transparency is a chain that travels from the designers to developers to executives who approve deployment to the people it impacts and everyone in between. Transparency is the systematic transference of knowledge from one stakeholder to another: the data collectors being transparent with data scientists about what data was collected and how it was collected and, in turn, data scientists being transparent with executives about why one model was chosen over another and the steps that were taken to mitigate bias, for instance.
As companies increasingly integrate and deploy AIs, they should consider how to be transparent and what additional processes they might need to introduce. Here’s where companies can start.
While the overall goal of being transparent is to engender trust, it has at least four specific kinds of effects:
First, transparency decreases the risk of error and misuse. AI models are highly complex systems: they are designed, developed, and deployed in complex environments by a variety of stakeholders, which leaves a lot of room for error and misuse. Poor communication between executives and the design team can lead to an AI being optimized for the wrong variable, and if the product team doesn’t explain how to properly handle the model’s outputs, introducing AI can be counterproductive in high-stakes situations.
Consider the case of an AI designed to read x-rays in search of cancerous tumors. The x-rays that the AI labelled as “positive” for tumors were then reviewed by doctors. The AI was introduced on the theory that a doctor could review 40 AI-flagged x-rays more efficiently than 100 unflagged ones.
Unfortunately, there was a communication breakdown.