Dark Side of AI: How to Make Artificial Intelligence Trustworthy
- by 7wData
Security and privacy concerns are the top barriers to adoption of artificial intelligence, and for good reason. Both benign and malicious actors can threaten the performance, fairness, security, and privacy of AI models and the data behind them.
This isn’t something enterprises can ignore as AI becomes more mainstream and promises them an array of benefits. In fact, on Gartner’s 2020 Hype Cycle for Emerging Technologies, more than a third of the technologies listed were related to AI.
At the same time, AI also has a dark side that often goes unaddressed, especially since the current machine learning and AI platform market has not produced consistent or comprehensive tooling to defend organizations. This means organizations are on their own. What’s worse, according to a Gartner survey, consumers believe that the organization using or providing AI should be held accountable when it goes wrong.
It is in every organization’s interest to implement security measures that counter these threats in order to protect its AI investments. Threats and attacks against AI compromise not only AI model security and data security, but also model performance and outcomes.
Criminals commonly attack AI in two ways, and technical professionals can take concrete actions to mitigate those threats. First, though, let’s explore the three core risks to AI.
Organizations that use AI are subject to three types of risks. Security risks are rising as AI becomes more prevalent and embedded into critical enterprise operations. There might be a bug in the AI model of a self-driving car that leads to a fatal accident, for instance.
Liability risks are increasing as decisions affecting customers are increasingly driven by AI models using sensitive customer data. As an example, incorrect AI credit scoring can hinder consumers from securing loans, resulting in both financial and reputational losses.
Social risks are increasing as “irresponsible AI” causes adverse and unfair consequences for consumers by making biased decisions that are neither transparent nor readily understood. Even slight biases can result in the significant misbehavior of algorithms.
The above risks can result from the two common ways that criminals attack AI: malicious inputs (also called perturbations) and query attacks.
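To make the query-attack idea concrete, here is a minimal, hypothetical sketch of model extraction: an attacker who can only query a black-box scoring API can, for a simple linear model, recover its parameters with a handful of well-chosen queries. The `black_box` function, its hidden weights, and the probing strategy are illustrative assumptions, not details from the article.

```python
import numpy as np

def black_box(x):
    """Stand-in for a deployed model the attacker can only query.
    The weights and bias are hidden from the attacker (illustrative values)."""
    w_secret = np.array([1.5, -0.7, 2.0])
    b_secret = 0.25
    return float(w_secret @ x + b_secret)

# Extraction via probing:
# 1) query the all-zeros input to read off the bias,
# 2) query each standard basis vector to read off one weight at a time.
b_est = black_box(np.zeros(3))
w_est = np.array([black_box(e) - b_est for e in np.eye(3)])
```

Real models are nonlinear and rate-limited, so practical extraction needs far more queries and approximation, but the sketch shows why unrestricted query access to a model is itself a security exposure.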
Malicious inputs to AI models can come in the form of adversarial AI, manipulated digital inputs, or malicious physical inputs. Adversarial AI may take the form of social engineering using an AI-generated voice, which can be used in many types of crime and can be considered a “new” form of phishing.
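Manipulated digital inputs are often built as adversarial perturbations: tiny, targeted changes to an input that flip a model’s decision. A minimal sketch, assuming a toy linear classifier and an FGSM-style (fast gradient sign method) perturbation; the model, its weights, and the epsilon budget are all invented for illustration.

```python
import numpy as np

def predict(w, b, x):
    """Linear score: positive -> class 1, negative -> class 0."""
    return float(w @ x + b)

def fgsm_perturb(w, x, epsilon):
    """FGSM for a linear model: the gradient of the score with respect
    to the input is simply w, so the attacker nudges each feature by
    epsilon in the direction of the gradient's sign."""
    return x + epsilon * np.sign(w)

w = np.array([1.0, -2.0, 0.5])          # illustrative model weights
b = -0.1
x = np.array([0.2, 0.3, 0.4])           # clean input, classified as class 0

score_clean = predict(w, b, x)          # negative -> class 0
x_adv = fgsm_perturb(w, x, epsilon=0.3)
score_adv = predict(w, b, x_adv)        # positive -> class 1
```

Each feature moves by at most 0.3, yet the classification flips, which is exactly why small perturbations that are invisible to humans can still cause the “significant misbehavior of algorithms” described above.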