Can We Build a Trustworthy ‘AI’ While Models-as-a-Service (MaaS) Is Projected To Take Over?


Regulators in a few geographies have defined the components and frameworks, but what does it take to create ‘trustworthy AI’, and what challenges do we see in delivering it?

AI continues to be incorporated into everyday business processes, industries and use cases. However, one concern remains constant: the need to understand ‘AI’. Until people can understand how AI arrives at its decisions, they won’t fully trust those decisions.

The opacity of these systems, often referred to as ‘black-box AI’, raises several ethical, business and regulatory concerns, creating stumbling blocks to adopting AI/ML, especially for mission-critical functions and in highly regulated industries. No matter how accurately a model makes predictions, unless there is clarity about what goes on inside it, trusting the model blindly will remain a valid concern for all stakeholders. So how does one get to trust AI?

AI decisions – To trust or not to trust

To trust any system, accuracy alone is never enough; the justification behind each prediction is just as important. A prediction can be accurate, but is it the right prediction for the right reasons? That can be determined only if there is enough explanation and supporting evidence behind it. Let us understand this through an example from the healthcare industry.

Healthcare is considered one of the most exciting application fields for AI, with intriguing applications in radiology, diagnosis recommendation and personalization, and drug discovery. Due to growing health complexities, data overload and a shortage of experts, diagnosing and treating critical diseases has become complex. Although AI/ML solutions for such tasks can provide the best balance between prognostic ability and diagnostic scope, the significant problems of ‘explainability’ and ‘lack of trust’ remain. A model’s prediction will be accepted only if the model can provide strong supporting evidence and an explanation that satisfies every user, i.e., doctors, patients and governing bodies. Such explanations help the physician judge whether a decision is reliable and create an effective dialogue between physicians and the AI models.

Achieving trustworthy AI, though, is more difficult. AI systems are by nature highly complex. Ideating, researching and testing such systems into production is hard, and keeping them in production is even harder. Models behave differently in training and in production, and an AI model you need to trust cannot be brittle.

What are the critical components of trustworthy AI?

In machine learning, explainability refers to understanding and comprehending your model’s behaviour, from input to output. It resolves the “black box” issue by making models transparent. It should be noted that ‘transparency’ and ‘explainability’ are quite different. Transparency clarifies what data is being considered, which inputs produce which outputs, and so on. Explainability covers a much larger scope: explaining technical aspects, demonstrating impact through changes in variables, showing how much weight each input is given, and so on.

For example, if an AI algorithm has predicted a cancer prognosis from the patient data provided, the doctor needs evidence and an explanation for that prognosis; without them, it is merely an unreliable suggestion.
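As an illustration of what such per-prediction evidence could look like, here is a minimal sketch using a public breast-cancer dataset and a simple linear classifier; the dataset, model and feature names are illustrative assumptions, not part of the original article.

```python
# Minimal sketch (illustrative assumptions: scikit-learn breast-cancer dataset,
# logistic regression) of surfacing per-prediction evidence for a prognosis.
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

data = load_breast_cancer()
X, y = data.data, data.target

model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
model.fit(X, y)

# Explain one patient's prediction: for a linear model, the contribution of
# each (standardised) feature is simply coefficient * feature value.
i = 0
x_scaled = model.named_steps["standardscaler"].transform(X[i:i + 1])[0]
coefs = model.named_steps["logisticregression"].coef_[0]
contributions = coefs * x_scaled

print("Predicted class:", data.target_names[model.predict(X[i:i + 1])[0]])
top = sorted(zip(data.feature_names, contributions),
             key=lambda t: abs(t[1]), reverse=True)[:5]
for name, c in top:
    print(f"{name:>25s}: {c:+.3f}")
```

For a linear model, coefficient times feature value is a faithful per-prediction contribution; more complex models would need model-agnostic techniques such as SHAP or LIME to produce comparable evidence for the physician.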

Much has been said about the opacity, or black-box nature, of AI algorithms. The solution should lay out the roles and responsibilities for running the AI system, so that the cause of any failure can be traced as well. Capturing such artifacts and registering them as records can provide a traceable, auditable, in-depth lineage.
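One way to register such records, sketched below assuming an experiment-tracking tool such as MLflow (the article does not prescribe any particular tooling), is to log the owner, training-data reference, parameters and evaluation evidence alongside every model run; the tags and values shown are hypothetical.

```python
# Minimal sketch of recording an auditable model lineage with MLflow.
# Owner, dataset reference and metric values below are hypothetical placeholders.
import mlflow

with mlflow.start_run(run_name="cancer-prognosis-v1"):
    # Who is responsible for the model, and what went into it
    mlflow.set_tag("owner", "clinical-ml-team")                      # hypothetical owner
    mlflow.set_tag("training_data", "registry://oncology/2023-q4")   # hypothetical dataset ID
    mlflow.log_param("model_type", "logistic_regression")
    mlflow.log_param("n_features", 30)
    # Evaluation evidence that reviewers and auditors can trace back to
    mlflow.log_metric("validation_auc", 0.97)                        # illustrative value
```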

AI models in production can act differently than they did in training and test environments, and models do suffer from drift in data or goals. Even if models are periodically retrained, there is no guarantee that their outputs will remain consistent in production.
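A simple way to notice such drift, shown below as an illustrative sketch rather than the article's prescribed method, is to compare the distribution of a feature seen in training against what the model receives in production, for example with a two-sample Kolmogorov-Smirnov test.

```python
# Minimal data-drift check: compare a feature's training distribution with
# the live distribution using a two-sample Kolmogorov-Smirnov test.
# The synthetic data below is purely illustrative.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
train_feature = rng.normal(loc=0.0, scale=1.0, size=5000)  # distribution seen in training
live_feature = rng.normal(loc=0.4, scale=1.0, size=5000)   # shifted distribution in production

stat, p_value = ks_2samp(train_feature, live_feature)
if p_value < 0.01:
    print(f"Drift suspected (KS statistic={stat:.3f}, p={p_value:.4f}) - consider retraining")
else:
    print("No significant drift detected")
```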
