A Military Perspective
The IEEE has launched the Global Initiative on Ethics of Autonomous and Intelligent Systems. It covers a comprehensive set of topics intended to stimulate ethical debate among researchers, engineers, humanities scholars, and affected end users. The published material includes a section called Reframing Autonomous Weapons Systems, which contains extensive background material and helpful definitions. In particular, Autonomous Weapons Systems (AWS) that take action without a “human in the loop” raise alarming concerns and issues that need to be discussed in depth.
Introduction and recommendations
Autonomous systems designed to cause physical harm have additional ethical dimensions compared to both traditional weapons and autonomous systems not designed to cause harm. Multi-year discussions on international legal agreements around autonomous systems in the context of armed conflict are under way at the United Nations (UN), but professional ethics can and should establish standards covering the broad array of issues arising from the automated targeting and firing of weapons. Broadly, we recommend that technical organizations promote a number of measures to help ensure meaningful human control of weapons systems:
- That automated weapons have audit trails to help guarantee accountability and control (see the sketch following this list).
- That adaptive and learning systems can explain their reasoning and decisions to human operators in transparent and understandable ways.
- That there be responsible human operators of autonomous systems who are clearly identifiable.
- That the behavior of autonomous functions be predictable to their operators.
- That those creating these technologies understand the implications of their work.
- That professional ethical codes be developed to appropriately address the development of autonomous systems, including those intended to cause harm.
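To make the audit-trail recommendation concrete, here is a minimal sketch of a tamper-evident engagement log: each record is chained to its predecessor by a hash, so any retroactive alteration is detectable on verification. The record fields, function names, and example actions are illustrative assumptions, not part of the IEEE material.

```python
import hashlib
import json
import time

# Minimal sketch of a tamper-evident audit trail for weapon engagements.
# Each record is chained to its predecessor via a SHA-256 hash, so any
# retroactive edit breaks verification. Field names are illustrative.

GENESIS = "0" * 64

def append_record(log, operator_id, action, rationale):
    """Append an engagement record linked to the previous record's hash."""
    prev_hash = log[-1]["hash"] if log else GENESIS
    record = {
        "timestamp": time.time(),
        "operator_id": operator_id,   # the identifiable, accountable human
        "action": action,             # e.g. "target_designated", "weapon_released"
        "rationale": rationale,       # system's stated reason, for later review
        "prev_hash": prev_hash,
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["hash"] = hashlib.sha256(payload).hexdigest()
    log.append(record)
    return record

def verify_chain(log):
    """Recompute every hash; return False if any record was altered."""
    prev_hash = GENESIS
    for record in log:
        body = {k: v for k, v in record.items() if k != "hash"}
        if body["prev_hash"] != prev_hash:
            return False
        payload = json.dumps(body, sort_keys=True).encode()
        if hashlib.sha256(payload).hexdigest() != record["hash"]:
            return False
        prev_hash = record["hash"]
    return True

log = []
append_record(log, "op-417", "target_designated", "matched authorized target profile")
append_record(log, "op-417", "weapon_released", "operator confirmation received")
assert verify_chain(log)
```

Storing the operator identity and the system's stated rationale in every chained record also supports the second and third recommendations above: decisions remain attributable to an identifiable human and reviewable after the fact.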
A set of issues
Issue 1:
Confusion over definitions of important concepts in artificial intelligence (AI), autonomous systems (AS), and autonomous weapons systems (AWS) stymies more substantive discussion of crucial issues.
Issue 2:
The addition of automated targeting and firing functions to an existing weapon system, the integration of components with such functionality, or system upgrades that affect targeting and automated weapon release should all be considered for review under Article 36 of Additional Protocol I to the Geneva Conventions.
Issue 3:
Engineering work should conform to individual and professional organization codes of ethics and conduct. However, existing codes of ethics may fail to properly address ethical responsibility for autonomous systems, or clarify ethical obligations of engineers with respect to AWS. Professional organizations should undertake reviews and possible revisions or extensions of their codes of ethics with respect to AWS.
Issue 4:
The development of AWS by states is likely to cause geopolitical instability and could lead to arms races.
Issue 5:
The automated reactions of an AWS could initiate or escalate conflicts outside the decisions of political and military leadership. AWS that engage with other AWS could escalate a conflict rapidly, before humans are able to intervene.
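The timing argument behind this issue can be illustrated with a deliberately simple toy calculation; it is not a model of any real system, and the latencies, gain, and threshold below are arbitrary assumptions. Two automated systems that each slightly amplify the other's last action cross a severity threshold in machine time, inside even an optimistic human intervention window.

```python
# Toy illustration (not a model of any real system) of how two automated
# systems reacting to each other can escalate faster than humans can
# intervene. All numbers are arbitrary assumptions.

HUMAN_REACTION_S = 0.5   # optimistic human intervention latency
LOOP_LATENCY_S = 0.01    # machine decision cycle
GAIN = 1.2               # each side responds slightly more forcefully

def escalation_steps(initial=1.0, threshold=100.0):
    """Count decision cycles until the exchange crosses a severity threshold."""
    intensity, steps = initial, 0
    while intensity < threshold:
        intensity *= GAIN  # each side amplifies the other's last action
        steps += 1
    return steps

steps = escalation_steps()
print(f"Threshold crossed after {steps} machine cycles "
      f"({steps * LOOP_LATENCY_S:.2f}s), vs. ~{HUMAN_REACTION_S}s human latency")
```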
Issue 6:
There are multiple ways in which accountability for the actions of AWS can be compromised.
Issue 7:
AWS offer the potential for severe human rights abuses. Excluding human oversight from the battlespace can too easily lead to inadvertent violations of human rights, and AWS could also be used to violate human rights deliberately.
Issue 8:
AWS could be used for covert, obfuscated, and non-attributable attacks.
Issue 9:
The development of AWS will lead to a complex and troubling landscape of proliferation and abuse.
Issue 10:
AWS could be deployed by domestic police forces and threaten lives and safety. AWS could also be deployed for private security. Such AWS may have very different design and safety requirements than military AWS.
Issue 11:
An automated weapons system might not be predictable, depending upon its design and operational use. Learning systems compound the problem of predictability.
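One pattern discussed in the runtime-assurance literature for bounding this unpredictability is an envelope guard: a learned component may propose actions, but only actions inside a pre-approved, human-specified envelope are executed, with a predictable fallback otherwise. The sketch below is a hedged illustration of that pattern; all names, fields, and limits are assumptions, not taken from the IEEE material.

```python
# Minimal sketch of a runtime envelope guard: a learned policy proposes
# actions, but only actions inside a human-certified envelope execute.
# Names and limits are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Envelope:
    """Human-certified bounds within which autonomous action is permitted."""
    max_speed: float
    allowed_modes: frozenset

    def permits(self, action):
        return (action["speed"] <= self.max_speed
                and action["mode"] in self.allowed_modes)

def guarded_execute(envelope, proposed_action, fallback):
    """Execute the learned policy's proposal only if it stays inside the
    envelope; otherwise fall back to a predictable, pre-defined behavior."""
    if envelope.permits(proposed_action):
        return proposed_action
    return fallback  # e.g. hold position and alert the operator

envelope = Envelope(max_speed=5.0, allowed_modes=frozenset({"observe", "track"}))
proposal = {"speed": 7.5, "mode": "track"}   # out of bounds: speed too high
safe = {"speed": 0.0, "mode": "observe"}
print(guarded_execute(envelope, proposal, safe))  # falls back to the safe action
```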
Reference:
IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems, “Reframing Autonomous Weapons Systems,” https://standards.ieee.org/develop/indconn/ec/ead_reframing_autonomous_weapons_v2.pdf