The Ethical Use of AI in the Security and Defense Industry

Artificial Intelligence already surrounds us. The odds are that when you type an email or text, a grayed-out prediction appears ahead of what you have already typed.

The “suggested for you” section of many web-based shopping applications predicts items you may want to buy based on your purchase history. Streaming music applications create playlists based on listening histories.

A device in your home can recognize a voice and answer questions, and your smartphone can unlock after recognizing your facial features. Artificial Intelligence is also advancing rapidly within the security and defense industry. When AI converges with autonomy and robotics in a weapon system, we should ask ourselves, “From an ethical standpoint, where should we draw the line?”

Air Force pilot Col. John Boyd developed a decision-making model referred to as the OODA loop — observe, orient, decide, act — to explain how to increase the tempo of decisions to outpace, outmaneuver and ultimately defeat an adversary. The objective is to make decisions faster than an opponent, compelling them to react to actions and enabling a force to seize the initiative. But what if the adversary is a machine?

Humans can no longer outpace the processing power of a computer and have not been able to for some time. In other words, a machine’s OODA loop is faster. In some instances, it now makes sense to delegate certain decisions to a machine or to let the machine recommend a decision to a human.

From an ethical viewpoint, where should we allow a weapon system to decide an action, and where should a human be involved?

In February 2020, the Defense Department formally adopted five principles of artificial intelligence ethics as a framework to design, develop, deploy and use AI in the military. To summarize, the department stated that AI will be responsible, equitable, traceable, reliable and governable.

The framework is an outstanding first step toward guiding the future development of AI; however, as with many foundational documents, it lacks detail. An article from DoD News titled “DoD Adopts 5 Principles of Artificial Intelligence Ethics” states that personnel will exercise “appropriate levels of judgment and care while remaining responsible for the development, deployment and use of AI capabilities.”

Perhaps the appropriate level of judgment depends on the maturity of the technology and the action being performed.

Machines are very good at narrow AI, which refers to accomplishing a single, well-defined task such as those previously mentioned. Artificial general intelligence, or AGI, refers to intelligence that resembles a human’s ability to use multiple sensory inputs to perform complex tasks.

There is a long way to go before machines reach that level of complexity. While achieving that milestone may prove impossible, many leading AI experts believe it will happen eventually.

AIMultiple, a technology industry analyst firm, published an article that compiled four surveys of 995 leading AI experts going back to 2009.

In each survey, a majority of respondents believed researchers would achieve AGI, with the average estimate falling around the year 2060. Artificial general intelligence is not inevitable, though, and technologists have historically tended to be overly optimistic when making predictions about AI. That uncertainty, however, reinforces why we should consider the ethical implications of AI in weapon systems now.

One of those ethical concerns is “lethal autonomous weapon systems,” which can autonomously sense the environment, identify a target and decide to engage without human input. These weapons have existed in various forms for decades, but nearly all of them have been defensive in nature, ranging from the simplest example, the landmine, up to complex systems such as the Navy’s Phalanx Close-In Weapon System.
