Why, Robot? Understanding AI ethics

Not many people know that Isaac Asimov didn’t originally write his three laws of robotics for I, Robot. They first appeared in “Runaround”, a 1942 short story that was later collected in that book. Robots mustn’t harm humans, he said, or allow them to come to harm through inaction. They must obey orders given by humans, unless those orders conflict with the first law. And a robot must protect itself, so long as doing so doesn’t contravene laws one and two.

75 years on, we’re still mulling that future. Asimov’s rules seem more focused on “strong AI” – the kind of AI you’d find in HAL, but not in an Amazon Echo. Strong AI would mimic the human brain, developing much as a child does, until it becomes sentient and can handle any problem you throw at it, as a human would. That’s still a long way off, if it ever comes to pass.

Instead, today we’re dealing with narrow AI, in which algorithms handle constrained tasks: recognising faces, understanding that you just asked what the weather will be like tomorrow, or trying to predict whether you should give someone a loan.

Making rules even for this kind of AI is quite difficult enough to be getting on with for now, says Jonathan M. Smith. A member of the Association for Computing Machinery and a professor of computer science at the University of Pennsylvania, he says there’s still plenty of ethics to unpack at this level.

“The shorter-term issues are very important because they’re at the boundary of technology and policy,” he says. “You don’t want the fact that someone has an AI making decisions to escape, avoid or divert past decisions that we made in the social or political space about how we run our society.”

There are some thorny problems already emerging, whether real or imagined. One of them is a variation on the trolley problem, a kind of Sophie’s Choice scenario in which a train is bearing down on two sets of people. If you do nothing, it kills five people. If you actively pull a lever, the points switch and it kills one person instead. You’d have to choose.

Critics of AI often adapt this to self-driving cars. A child runs into the road and there’s no time to stop, but the software could choose to swerve and hit an elderly person instead, say. What should the car do, and who gets to make that decision? There are many variations on this theme, and MIT has even collected some of them into an online game, the Moral Machine.

There are classic counter-arguments: a self-driving car wouldn’t be speeding in a school zone, so the scenario is less likely to arise in the first place. Utilitarians might argue that eliminating distracted, drunk or tired drivers would shrink the number of road deaths worldwide, which means society wins, even if one person loses.

You might point out that a human would have killed one of the people in the scenario too, so why are we even having this conversation? Yasemin Erden, a senior lecturer in philosophy at St Mary’s University, has an answer for that. She spends a lot of time considering ethics and computing on the committee of the Society for the Study of Artificial Intelligence and Simulation of Behaviour.

Decisions made in advance suggest ethical intent and invite others’ judgement, whereas acting on the spot doesn’t, she points out.

“The programming of a car with ethical intentions knowing what the risk could be means that the public could be less willing to view things as accidents,” she says. In other words, as long as you were driving responsibly, you’re allowed to say “that person just jumped out at me” and be excused for hitting them, but AI algorithms don’t have that luxury.

If computers are supposed to be faster and more intentional than we are in some situations, then how they’re programmed matters. Experts are calling for accountability.

“I’d need to cross-examine my algorithm, or at least know how to find out what was happening at the time of the accident,” says Kay Firth-Butterfield, a lawyer specialising in AI issues and executive director of AI Austin.
