Developers have a moral duty to create ethical AI
- by 7wData
Developers of Artificial Intelligence (AI), machine learning (ML) and biometric-related technologies have “a moral and ethical duty” to ensure the technologies are only used as a force for good, according to a report written by the UK’s former surveillance camera commissioner.
Developers must be cognizant of both the social benefits and risks of the AI-based technologies they produce, and have a responsibility to ensure they are used only for the benefit of society, said the whitepaper, which was published by facial-recognition supplier Corsight AI in response to the European Commission’s (EC) proposed Artificial Intelligence Act (AIA).
“Organisational values and principles must irreversibly commit to only producing technology as a force for good,” it said. “The philosophy must surely be that we put the preservation of internationally recognised standards of human rights, our respect for the rule of law, the security of democratic institutions and the safety of citizens at the heart of what we do.”
It added that a ‘human in the loop’ development strategy is key to assuaging any public concerns over the use of AI and related technologies, in particular facial-recognition technology.
“The most important ingredient of… [developing facial-recognition systems] is the human at the centre of the process,” it said. “Training, bias awareness, policies upon deployment, adherence to law, rules, regulations and ethics are key ingredients.
“Developers must work with the human to create a product that is human intuitive and not the other way around. Consideration of providing legal and regulatory support in the use of such sophisticated software must be a foremost consideration for developers.”
To make the technology more human-centric, the report further encourages developers to work closely alongside their client base to understand user requirements and the legitimacy of the project, as well as to support “compliance to statutory obligations and to build appropriate safeguards where vulnerabilities may arise”.
Speaking to Computer Weekly, the paper’s author Tony Porter – Corsight’s chief privacy officer and the UK’s former surveillance camera commissioner – said that when AI-related technologies such as facial recognition have been used unlawfully, it is because of how they were deployed in a particular context rather than the technology in and of itself.
He added that part of his role at Corsight is to explain “the power of the technology, but also the correct and judicious use of it” to clients, which for Porter includes placing humans at the heart of the technology’s development and operation.
With police use of the tech in particular, Porter reiterated that it is important for suppliers to “support the end user’s positive and enduring obligation to follow” the Public Sector Equality Duty, mainly through greater transparency.