Exploring the risks of artificial intelligence
- by 7wData
“Science has not yet mastered prophecy. We predict too much for the next year and yet far too little for the next ten.”
These words, spoken by Neil Armstrong in a speech to a joint session of Congress in 1969, fit squarely into nearly every decade since the turn of the century, and it seems safe to posit that the rate of change in technology has accelerated exponentially in the last two decades, especially in the areas of artificial intelligence and machine learning.
Artificial intelligence is making a dramatic entrance into almost every facet of society, in ways both predicted and unforeseen, causing both excitement and trepidation. This reaction alone is predictable, but can we really predict the associated risks?
It seems we’re all trying to get a grip on potential reality, but information overload (yet another side effect that we’re struggling to deal with in our digital world) can, ironically, make constructing an informed opinion more challenging than ever. In the search for some semblance of truth, it can help to turn to those in the trenches.
In my continuing interviews with over 30 artificial intelligence researchers, I asked what they considered to be the most likely risk of artificial intelligence in the next 20 years.
Some results from the survey, shown in the graphic below, included 33 responses from different AI/cognitive science researchers. (For the complete collection of interviews, and more information on all of our 40+ respondents, visit the original interactive infographic here on TechEmergence).
Two “greatest” risks bubbled to the top of the response pool (and the majority of respondents are not in the autonomous-robots camp, though a few do fall into it). According to this particular set of minds, the most pressing short- and long-term risks are the financial and economic harm that may be wrought, as well as the mismanagement of AI by human beings.
Dr. Joscha Bach of the MIT Media Lab and Harvard Program for Evolutionary Dynamics summed up the larger picture this way:
“The risks brought about by near-term AI may turn out to be the same risks that are already inherent in our society. Automation through AI will increase productivity, but won’t improve our living conditions if we don’t move away from a labor/wage based economy. It may also speed up pollution and resource exhaustion, if we don’t manage to install meaningful regulations. Even in the long run, making AI safe for humanity may turn out to be the same as making our society safe for humanity.”
Essentially, the introduction of AI may act as a catalyst that exposes and speeds up the imperfections already present in our society. Without a conscious and collaborative plan to move forward, we expose society to a range of risks, from bigger gaps in wealth distribution to negative environmental effects.