Margaret MacMillan (War: How Conflict Shaped Us, 2020) tells me that Human Rights Watch and other groups are campaigning to stop killer robots being deployed for military purposes. MacMillan thinks they have little chance of success, but in any event, might their efforts be misguided?

Part of the fear surrounding AI and robotics stems from machines lacking human emotions such as empathy or compassion. But that overlooks the reverse side of the picture: they will presumably also lack hatred, greed, jealousy, vanity, and so on. A killer robot would in fact be only a more sophisticated form of drone, and drones are already in regular use, as in Ukraine.

If artificial intelligence ever advances beyond the level of (say) a horse trained for cavalry, it would mean the machine becoming capable of its own initiative and judgment. It is hard to imagine weapons designers ever wishing their instruments to have that sort of capability. But there is a further, psychological reason why Human Rights Watch might be mistaken in its position.

Ignatieff has wondered what might become of the warrior ethos if killing is made remote and easy. But by the same token, any glamour in war (not to mention hope of redemption) is removed if the machine becomes the fighter and the soldier merely a faceless operator at a computer, a clerical worker in effect. Even the romantic sword legends, Excalibur or Nothung, kept the focus of action on the man wielding his weapon, not on the weapon itself. War stripped of glamour, and of its appeal to patriotism and self-sacrifice, loses part of its grip.

Human rights might actually be easier to sustain against the use of machines. Men have a group identity, one that comes into play when they are tempted (or forced) into abuses such as looting, rape, or wanton destruction to terrorise and dominate an opponent. Machines have no such identity, and so no such temptations.
