Robot Notes Essay

The UK has taken the view that programming might in the future represent an acceptable form of meaningful human control, and that research into such possibilities should not be pre-emptively banned. In the future, such systems might even reduce civilian casualties.

At the Geneva meeting on LARs, many nations and experts supported the idea of “meaningful human control”; however, China believes that there is no clear definition and no clear guideline as to what constitutes meaningful control. As a result, the boundary between human control and autonomy can become blurred in the development of future technologies. Given the many disparities in how “meaningful control” is currently defined, China wishes to propose an amendment that clarifies the term, as a preventive measure against a possible significant humanitarian crisis. Furthermore, China believes that all robots built and used for military warfare must fall under the restrictions of meaningful control as set out in our proposed amendment.
Moreover, on the issue of meaningful control, China believes that at the current rate of technological advancement, the day will come when fully autonomous robots are developed and considered for warfare. Fully autonomous robots belong in a completely different category from current autonomous systems, given their ability to select targets and engage without human intervention. Additionally, China believes that the root of the concern regarding autonomous robots is the fear of sentient robots. Therefore, China will propose a ban on the production, testing, acquisition and deployment of fully autonomous robots. Too often, the UN acts on a humanitarian issue only after it has occurred; China believes it is time for the UN to take pre-emptive action on this issue and ban fully autonomous robots before they come into existence.
China believes that:
Robotic weapons systems should not make life-and-death decisions on the battlefield on their own, as this would be inherently wrong, both morally and ethically.
Fully autonomous weapons are likely to violate international humanitarian law due to their lack of moral consideration and conscience.
The idea is that human control over life and death decisions must always be significant – in other words, it must be considerably more than none at all and, putting it bluntly, it must also involve more than the mindless pressing of a button in response to machine-processed information. According to current practice, a human operator of weapons must have sufficient information about the target and sufficient control of the weapon, and must be able to assess its effects, in order to make decisions in accordance with international law. But how much human judgment can be transferred into a technical system and exercised by algorithms before human control ceases to be “meaningful” – in other words, before warfare is quite literally “dehumanized”?
One thing seems clear: in the future, certain time limits