The Ethical Use of AI in the Security, Defense Industry
- by 7wData
Artificial Intelligence already surrounds us. The odds are that when you type an email or text, a grayed-out prediction appears ahead of what you have already typed.
The “suggested for you” portion of some web-based shopping applications predicts items you may be interested in buying based on your purchasing history. Streaming music applications create playlists based on listening histories.
A device in your home can recognize a voice and answer questions, and your smartphone can unlock after recognizing your facial features. Artificial Intelligence is also advancing rapidly within the security and defense industry. When AI converges with autonomy and robotics in a weapon system, we should ask ourselves, “From an ethical standpoint, where should we draw the line?”
Air Force pilot Col. John Boyd developed a decision-making model referred to as the OODA loop — observe, orient, decide, act — to explain how to increase the tempo of decisions to outpace, outmaneuver and ultimately defeat an adversary. The objective is to make decisions faster than an opponent, compelling them to react to actions and enabling a force to seize the initiative. But what if the adversary is a machine?
Humans can no longer outpace the processing power of a computer and haven’t been able to for quite a while. In other words, a machine’s OODA loop is faster. In some instances, it now makes sense to relegate certain decisions to a machine or let the machine recommend a decision to a human.
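The OODA cycle described above can be sketched as a single pass through four stages. The following minimal Python sketch is purely illustrative; the function names and the simple threshold rule are assumptions for demonstration, not any real system's logic:

```python
# Illustrative sketch of Boyd's OODA loop as one decision cycle.
# The threshold rule and function names are hypothetical assumptions.

def observe(environment):
    """Gather a raw sensor reading from the environment."""
    return environment["sensor_reading"]

def orient(reading, threshold):
    """Interpret the reading against prior knowledge (here, a threshold)."""
    return reading > threshold

def decide(is_threat):
    """Choose an action based on the oriented picture."""
    return "engage" if is_threat else "hold"

def act(decision):
    """Carry out the chosen action; here we simply report it."""
    return f"action: {decision}"

def ooda_cycle(environment, threshold=0.5):
    """Run one pass of observe -> orient -> decide -> act."""
    reading = observe(environment)
    is_threat = orient(reading, threshold)
    decision = decide(is_threat)
    return act(decision)

print(ooda_cycle({"sensor_reading": 0.9}))  # action: engage
print(ooda_cycle({"sensor_reading": 0.2}))  # action: hold
```

A machine iterates this cycle orders of magnitude faster than a human can, which is the crux of the tempo argument: whoever completes the loop faster forces the other side into reaction.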
From an ethical viewpoint, where should we allow a weapon system to decide an action, and where should the human be involved?
In February 2020, the Defense Department formally adopted five principles of artificial intelligence ethics as a framework to design, develop, deploy and use AI in the military. To summarize, the department stated that AI will be responsible, equitable, traceable, reliable and governable.
It is an outstanding first step to guide future developments of AI; however, as with many foundational documents, it is lacking in detail. An article from DoD News titled, “DoD Adopts 5 Principles of Artificial Intelligence Ethics,” states that personnel will exercise “appropriate levels of judgment and care while remaining responsible for the development, deployment and use of AI capabilities.”
Perhaps the appropriate level of judgment depends on the maturity of the technology and the action being performed.
Machines are very good at narrow AI, which refers to the accomplishment of a single task such as those previously mentioned. Artificial general intelligence, or AGI, refers to intelligence that resembles a human’s ability to use multiple sensory inputs to conduct complex tasks.
There is a long way to go before machines reach that level of complexity. While achieving that milestone may prove impossible, many leading AI experts believe it lies in the future.
AIMultiple, a technology industry analyst firm, published an article that compiled four surveys of 995 leading AI experts going back to 2009.
In each survey, a majority of respondents believed that researchers would achieve AGI, on average, by the year 2060. Artificial general intelligence is not inevitable, though, and technologists have historically tended to be overly optimistic when making predictions about AI. This unpredictability, however, reinforces why one should consider the ethical implications of AI in weapon systems now.
One of those ethical concerns is “lethal autonomous weapon systems,” which can autonomously sense the environment, identify a target, and make the determination to engage without human input. These weapons have existed in various capacities for decades, but nearly all of them have been defensive in nature, ranging from the most simplistic form — a landmine — up to complex systems such as the Navy’s Phalanx Close-in Weapon System.