Google’s chief decision scientist: Humans can fix AI’s shortcomings
- by 7wData
Cassie Kozyrkov has served in various technical roles at Google over the past five years, but she now holds the somewhat curious position of “chief decision scientist.” Decision science sits at the intersection of data and behavioral science and involves statistics, machine learning, psychology, economics, and more.
In effect, this means Kozyrkov helps Google push a positive AI agenda — or, at the very least, convince people that AI isn’t as bad as the headlines claim.
“Robots are stealing our jobs,” “AI is humanity’s greatest existential threat,” and similar proclamations have abounded for a while, but over the past few years such fears have become more pronounced. Conversational AI assistants now live in our homes, cars and trucks are pretty much able to drive themselves, machines can beat humans at computer games, and even the creative arts are not immune to the AI onslaught. On the flip side, we’re also told that boring and repetitive jobs could become a thing of the past.
People are understandably anxious and confused about their future in an automated world. But, according to Kozyrkov, artificial intelligence is merely an extension of what humans have been striving for since our inception.
“Humanity’s story is the story of automation,” said Kozyrkov, speaking at the London AI Summit this week. “Humanity’s entire story is about doing things better — from that first moment that someone picked up a rock and banged another rock with it because things could get done faster. We are a tool-making species; we rebel against drudgery.”
The underlying fear that AI is dangerous because it can do things better than humans doesn’t hold water for Kozyrkov, who argues that all tools are better than humans. Barbers use scissors to cut hair because clawing it out with their hands would be a less-than-desirable experience. Gutenberg’s printing press enabled the mass production of texts at a scale that would have been impossible for humans with pens to replicate. And pens themselves opened up a whole world of opportunity.
“All of our tools are better than humans — that’s the point of a tool,” Kozyrkov continued. “If you can do it better without the tool, why use the tool? And if you’re worried about computers being cognitively better than you, let me remind you that your pen and paper are better than you at remembering things. My bucket is better than me at holding water, my calculator is better than me at multiplying six-digit numbers together. And AI is also going to be better at some things.”
Of course, the underlying fear many hold in relation to AI and automation isn’t that it will be better at things than humans. For many, the real danger lies in the unbridled scale with which governments, corporations, and any ill-intentioned entity could cast a dystopian shadow over us by tracking and micromanaging our every move — and achieving a clandestine grand vision with next to no effort.
Other concerns relate to factors such as algorithmic prejudices, a lack of sufficient oversight, and the ultimate doomsday scenario: What if something goes drastically — and unintentionally — wrong?
Researchers have already demonstrated the inherent biases in facial recognition systems such as Amazon’s Rekognition, and Democratic presidential candidate Senator Elizabeth Warren recently called on federal agencies to address questions around algorithmic bias, such as how the Federal Reserve deals with money lending discrimination.
But less attention is given to how AI can actually reduce existing human biases.
San Francisco recently announced that it will use AI to reduce bias when charging people with crimes, for example, by automatically redacting certain information from police reports. In the recruitment realm, VC-backed Fetcher is setting out to help companies headhunt talent by leveraging AI, which it claims can also help minimize human prejudices.