Killer Robots Debate
- Not opposing robotics
- "Life-saver" — whose lives are being saved? And who decides who should be saved? Do we enter necropolitics: the power to decide who is allowed to live and who must die?
- Military funding of robotics is ultimately directed toward killing people
- Bomb disposal robots (e.g. Atlas) have also been weaponised
- Drones are used not just for surveillance but also for bombing
- Drone surveillance often relies on thermal imaging, which detects only heat signatures — not who a person actually is
- If you attack someone in war, they should have the means of reciprocity; drone warfare makes that impossible. Nobody can surrender to a drone. Terrorist organisations use drone strikes as a recruitment tool.
- The scientists who developed the atomic bomb were not the ones who decided whether to drop it on Japan — developers of a technology do not control its use
- Fully autonomous weapons would decide who lives and dies, without further human intervention, which crosses a moral threshold. As machines, they would lack the inherently human characteristics such as compassion that are necessary to make complex ethical choices.
- It’s unclear who, if anyone, could be held responsible for unlawful acts caused by a fully autonomous weapon: the programmer, the manufacturer, the commander, or the machine itself. This accountability gap would make it difficult to ensure justice, especially for victims.
- Fully autonomous weapons could be used in circumstances outside of armed conflict, such as border control and policing. They could be used to suppress protest and prop up regimes. Even force intended as non-lethal could cause many deaths.
- Whose regulation is it? The data being fed into these systems carries the biases of those who collected it
- How can a robot be held accountable for killing a person?
- There is no such thing as humane murder
- Economic injustice
- Technology carries racist overtones
- How are robots equipped to distinguish between terrorists and civilians? Who is building the technology that decides, and what are their biases?