Breaking from my trend of reviewing a book I've just read, I have to get on my soapbox and ask, "Are these people crazy?" A "fully autonomous weapon" is one that can be used to kill a human being without human input. The May 26, 2014 issue of TIME Magazine, page 9, informs us that over the past few years, "the U.S., the U.K. and South Korea have developed drones with technology that could be repurposed to create machines with the ability to open fire without human input."
Are you kidding me? Has no one seen the Terminator movies? Ever heard of Skynet? This is insane.
The TIME article goes on to say that opposition groups have been pushing for "a ban on further developing or deploying the technology ..." Proponents have pushed back, saying a ban would be premature and that, in time, the technology could advance enough to reduce collateral damage.
Collateral damage? Is that the only thing we should worry about? Of course not. How about the fact that by distancing ourselves from actually pulling the trigger -- or pushing the button -- we make it even easier to kill without compunction? "I didn't kill those people," an officer might say. "The drone did it." Or what if we make drones more and more intelligent, so they can autonomously do our dirty work, only to lose control over them?
Opponents of this technology are hoping the U.N. will issue a ban, but that could take years to materialize. When a ban on blinding lasers was proposed to the U.N. in 1987, it took eight more years before the ban was adopted and three more years before it went into effect.
I worry that the pace of technology development for military purposes may far exceed the pace at which governments can come together to agree on how to use or not use new technology.