
Ethics and the Machine
A Toulmin Model Argument

In the 2004 film “I, Robot”, there is a recurring theme of apocalypticism associated with self-aware technology. The supreme robot intelligence VIKI (Virtual Interactive Kinetic Intelligence) evolves to reach a different understanding of the laws meant to govern robots and their activities, and attempts to impose a new world order that secures human safety and continued survival at the price of mankind’s freedom of action. The notion that robots, were they to become too advanced, would seize control of the human population and take over the world is a popular theme in many a science-fiction novel; however, it is present even outside the boundaries of entertainment. Many people genuinely believe that such a development is inevitable. This is just one of many examples of the skepticism surrounding artificial intelligence. Considering the broad spectrum of capabilities of autonomous machines, can it truly be argued that the benefits of advancing the development of robotics are outweighed by visions of an A.I. apocalypse? The aim of this essay is to explore the ideas from which the notion of cybernetic revolt originates and to dispute its foundations. If humanity were to create sentient machines capable of bringing about its demise, it would be through its own failure to mitigate the risks of such an endeavour that these visions would reach fruition. Therefore, the advancement of robotics and the creation of artificial intelligence that could mirror or surpass humanity’s own should be pursued with the utmost perseverance.

The primary basis for fear of a robotic takeover is the question of whether robots can be trusted. As autonomous machines become heavily integrated into modern society, their intended users rely on them more and more. Robotics has a variety of applications, ranging from industrial production to entertainment to home care and medicine. Thus arises the issue of trust - can these intelligent machines be trusted to complete the tasks for which they are programmed, fulfilling their purpose without compromising the safety of humans? Traditionally, the concept of trust involves responsibility; when someone says that they trust a person, that person is held accountable for the execution of a particular action. Trust is created through implicit promises and expectations, and this establishes morality on the part of the trustee. The issue with trusting robots, it seems, is that they do not provide the typical indicators that humans construe as signs of trustworthiness. They do not have the use of language or the freedom that humans do; thus, they cannot be trusted in the same way that humans can be. People cannot expect robots to be entirely like humans - they are limited by the fact that they are artificial. However, people can accurately convey their expectations, and machines can explain what they can and cannot do. The link between an artificially intelligent robot’s appearance and its primary function plays a key part in this; if, for instance, a robot appears more human, does that mean it is safer?
Mark Coeckelbergh states, "...our lives are already interwoven with technologies. They are not just tools we use to attain our goals (including goals concerning self-care or self-constitution); they are part of the social and existential fabric, from which we emerge as individuals and selves in the first place. In this sense, evaluating whether or not we can trust robots means to evaluate the social and ourselves as social beings."
Perhaps distrust is not the fault of the robot itself; rather, it indicates that the person responsible for the robot’s programming and presentation did not make it sufficiently apparent that there is no reason to suspect its capabilities or intentions. With that in mind, it stands to reason that the intentions of the creator must be questioned before the intentions of the robot itself. In the case of