Fayne Davis
Humanity’s innate tendency to keep killing one another on the battlefield has driven a race for superior technology, aimed at preventing human losses during warfare and thus shortening wars. Many argue that such weapon development should be abandoned because they fear that an advanced autonomous system could overpower mankind or fall into the hands of evil rulers. Yet as long as published research in design technology remains available, there will be those who desire to create better systems. Designers such as Ron Arkin and Peter Asaro find it inevitable that autonomous robots will be a major addition to future wars as technological competition continues. This investigation will identify each of their concerns, and compare and contrast their well-intended views regarding the use and design of automated systems. While Arkin thinks he can model morality by using pre-existing systems for conducting war, Asaro thinks he can model a moral user.
In Arkin’s article “Ethical Robots in Warfare,” he proposes autonomous robots that follow a set of rules, such as the Laws of Armed Conflict (LOAC) and Rules of Engagement (ROE). The robot could be stopped if it were about to commit a war crime or if a possibility existed of indiscriminately killing innocent civilians. Arkin’s approach assumes that a set of rules can be developed whose observance guarantees nothing unethical occurs during fighting (30). This application would also hold humans accountable for their actions. Arkin aims to make the built-in laws universal to all nations in order to hold those nations or individuals that do not adhere to them accountable for any acts deemed inhumane. Arkin further supports his argument by stating that “robots can be better than soldiers in conducting warfare in certain circumstances” (30), because human emotions like anger, frustration, stress, and a desire for victory (30) can drive soldiers to act inhumanely on the battlefield.
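To make the structure of Arkin’s rule-based proposal concrete, the following is a minimal sketch of the kind of veto logic he describes. Nothing in it comes from Arkin’s paper: the Target fields, the LOAC_RULES list, and the ethical_governor function are hypothetical stand-ins for constraints that a real system would derive from the LOAC and ROE.

```python
from dataclasses import dataclass

@dataclass
class Target:
    is_combatant: bool          # discrimination: non-combatants may not be targeted
    near_protected_site: bool   # e.g., a hospital or school
    expected_collateral: int    # estimated civilian harm

# Hypothetical constraints standing in for rules derived from the
# Laws of Armed Conflict (LOAC) and Rules of Engagement (ROE).
LOAC_RULES = [
    lambda t: t.is_combatant,             # engage combatants only
    lambda t: not t.near_protected_site,  # never strike protected sites
    lambda t: t.expected_collateral == 0, # refuse indiscriminate harm
]

def ethical_governor(target: Target) -> bool:
    """Permit engagement only if every constraint is satisfied.

    Any single rule failure vetoes the lethal action, mirroring
    Arkin's idea that the system refuses (or can be stopped)
    before an unethical act occurs rather than after.
    """
    return all(rule(target) for rule in LOAC_RULES)

# One permitted engagement and one vetoed engagement:
print(ethical_governor(Target(True, False, 0)))  # True  -> may engage
print(ethical_governor(Target(True, True, 0)))   # False -> action vetoed
```

The design choice the sketch illustrates is that the ethical guarantee sits in the conjunction of rules: the action proceeds only when no rule objects, which is where Arkin locates the system’s advantage over a stressed or angry human soldier.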
In Asaro’s article “Modeling the Moral User,” he proposes a user-centered approach that seeks to understand how people make ethical decisions (23). This approach would take into account the roles of human emotions, empathy, and stress when considering lethal decisions; Asaro wants to employ these factors in his design, whereas Arkin feels it is better to take them out of the equation. Asaro adds that a user-centered approach would actually make it harder for people to use robots unethically, because they would be better informed and more aware of the moral implications of their use of the system (20). This reflects his aim of improving the lethal decision making of humans, which he states is accomplished by an “increase in empirical knowledge about what kinds of information people use to make various ethical decisions, how they process that information, and how the presentation and representation of the information influences ethical decision making tasks.” He feels that it may be impossible to make a lethal tele-operated system ethical (23), because different cultures hold different and conflicting moral values; upholding universal laws would therefore be more difficult for an international community with differing ethical values. In addition, Asaro argues it is better to keep humans in the loop and accept their fallibility, in opposition to Arkin’s design, which would turn fate over to an automated system that may cause catastrophic failures. Asaro’s main disagreement with Arkin is over the claim that these systems can outperform humans; according to Asaro, keeping humans in the loop will prevent them from becoming too reliant on autonomous systems that have the ability to take life. Arkin offers several strong arguments for the use of Rule-Based Advisory Systems (31-2). Arkin