1. If superintelligence is feasible, it will likely be developed sooner or later.
2. Clearly superintelligence is feasible, since
3. we have no reason to think computer technology will not continue to advance exponentially.
4. Therefore, we will probably one day have to take the gamble of superintelligence no matter what.
5. But once in existence, a superintelligence could help us reduce or eliminate other existential risks, such as the risk that advanced nanotechnology will be used by humans in warfare or terrorism, a serious threat to the long-term survival of intelligent life on earth.
6. If we get to superintelligence first, we may avoid this risk from nanotechnology and many others.
7. If, on the other hand, we get nanotechnology first, we will have to face both the risks from nanotechnology and, if those are survived, the risks from superintelligence as well.
Obviously either we will develop nanotechnology first or we will develop superintelligence first.
8. The overall risk seems to be minimized by implementing superintelligence, with great care, as soon as possible.
2 and maybe 8 are intermediate conclusions.
4 is a conclusion, signaled by "therefore."
8 connects to the main conclusion, 4.
Premises 1 and 7 present a false dilemma.
The whole argument appears to be a slippery slope (camel's nose).
Finally, UAS [drones] are criticized [in the article] on the grounds that they are operated by people sitting in air-conditioned offices in Nevada or Florida, playing around with a joystick before they go home to have dinner and coach Little League. According to Mayer, ethicist Peter W. Singer believes that drone technology is "'seductive,' because it creates the perception that war can be 'costless.'"