Artificial Intelligence DBQ


Technology today can do almost anything we want it to, and with how fast it is moving, artificial intelligence could be taking over very soon. Because of the potential dangers of AI, its research and development should be stopped.

Artificial intelligence could become so powerful that it takes over the world. Nick Bilton, a New York Times writer, says that AI "could soon become unstoppable," growing so smart that we cannot stop it. A British inventor by the name of Clive Sinclair said, "Once you start to make machines that are rivaling and surpassing humans with intelligence, it's going to be very difficult for us to survive." These intelligent machines will be smarter than us, meaning they can do more damage than we think. When PBS, the Public Broadcasting Service, was interviewing an AI robot, the interviewer asked, "Do you think that robots will take over the world?" The robot responded, "Don't worry, even if I evolve into Terminator I will still be nice to you. I will keep you warm and safe in my people zoo, where I can watch you for old time's sake."
Bonnie Docherty, a lecturer on law at Harvard University, said that "the race to build autonomous weapons with artificial intelligence is reminiscent of the early days of the race to build nuclear weapons, and that treaties should be put in place now before we get to a point where machines are killing people on the battlefield." When nuclear weapons were first being made, countries all around the world wanted them; they were going crazy over them. AI weapons would be wanted by everyone in the same way, and the race would eventually spiral out of control, with these weapons possibly killing their own people. Elon Musk, a Silicon Valley entrepreneur, says that AI is "potentially more dangerous than nukes." He isn't only comparing the two; he is telling us, in a way we can easily understand, that AI could hurt us far more than nuclear weapons.