Credit: Annegret Hilse/Reuters via Gallo Images
MONTEVIDEO, Uruguay, December 17 (IPS) - Machines with no conscience are making split-second decisions about who lives and who dies. This isn’t dystopian fiction; it’s today’s reality. In Gaza, algorithms have generated kill lists of up to 37,000 targets.
Autonomous weapons are also being deployed in Ukraine and were on show at a recent military parade in China. States are racing to integrate them into their arsenals, convinced they’ll maintain control. If they’re wrong, the consequences could be catastrophic.
Unlike remotely piloted drones, where a human operator pulls the trigger, autonomous weapons make lethal decisions on their own. Once activated, they process sensor data – facial recognition, heat signatures, movement patterns – to identify pre-programmed target profiles and fire automatically when they find a match. They act with no hesitation, no moral reflection and no understanding of the value of human life.
Speed and lack of hesitation give autonomous systems the potential to escalate conflicts rapidly. And because they work on the basis of pattern recognition and statistical probabilities, they bring enormous potential for lethal mistakes.
Israel’s assault on Gaza has offered the first glimpse of AI-assisted genocide. The Israeli military has deployed multiple algorithmic targeting systems: it uses Lavender and The Gospel to identify suspected Hamas militants and generate lists of human targets and infrastructure to bomb, and Where’s Daddy to track targets to kill them when they’re home with their families. Israeli intelligence officials have acknowledged an error rate of around 10 per cent, but simply priced it in, deeming 15 to 20 civilian deaths acceptable for every junior militant the algorithm identifies and over 100 for commanders.
The depersonalisation of violence also creates an accountability void. When an algorithm kills the wrong person, who’s responsible? The programmer? The commanding officer? The politician who authorised deployment? Legal uncertainty is a built-in feature that shields perpetrators from consequences. As decisions about life and death are made by machines, the very idea of responsibility dissolves.
These concerns emerge within a broader context of alarm about AI’s impacts on civic space and human rights. As the technology becomes cheaper, it’s proliferating across domains, from battlefields to border control to policing operations. AI-powered facial recognition technologies are amplifying surveillance capabilities and undermining privacy rights. Biases embedded in algorithms perpetuate exclusion based on gender, race and other characteristics.
As the technology has developed, the international community has spent over a decade discussing autonomous weapons without producing a binding regulation. Since 2013, when states party to the UN Convention on Certain Conventional Weapons agreed to begin discussions, progress has been glacial. The Group of Governmental Experts on Lethal Autonomous Weapons Systems has met regularly since 2017, yet talks have been systematically stalled by major military powers, including India, Israel, Russia and the USA, which have exploited the consensus requirement to block regulation proposals. In September, 42 states delivered a joint statement affirming their readiness to move forward. It was a breakthrough after years of deadlock, but major holdouts maintain their opposition.
To circumvent this obstruction, the UN General Assembly has taken matters into its own hands. In December 2023, it adopted Resolution 78/241, its first on autonomous weapons, with 152 states voting in favour. In December 2024, Resolution 79/62 mandated consultations among member states, held in New York in May 2025. These discussions explored ethical dilemmas, human rights implications, security threats and technological risks. The UN Secretary-General, the International Committee of the Red Cross and numerous civil society organisations have called for negotiations to conclude by 2026, given the rapid development of military AI.
The Campaign to Stop Killer Robots, a coalition of over 270 civil society groups from over 70 countries, has led the charge since 2012. Through sustained advocacy and research, the campaign has shaped the debate, advocating for a two-tier approach currently supported by over 120 states. This combines prohibitions on the most dangerous systems – those that target humans directly, operate without meaningful human control or whose effects can’t be adequately predicted – with strict regulations on all others. Systems not banned would be permitted only under stringent restrictions requiring predictability and clear accountability, including limits on types of targets, time and location restrictions, mandatory testing and human supervision with the ability to intervene.
If it’s to meet the deadline, the international community has just a year to conclude a treaty that a decade of talks has been unable to produce. With each passing month, autonomous weapons systems become more sophisticated, more widely deployed and more deeply embedded in military doctrine.
Once autonomous weapons are widespread and the idea that machines decide who lives and who dies becomes normalised, it will be much harder to impose regulations. States must urgently negotiate a treaty that prohibits autonomous weapons systems directly targeting humans or operating without meaningful human control and establishes clear accountability mechanisms for violations. The technology can’t be uninvented, but it can still be controlled.
Inés M. Pousadela is CIVICUS Head of Research and Analysis, co-director and writer for CIVICUS Lens and co-author of the State of Civil Society Report. She is also a Professor of Comparative Politics at Universidad ORT Uruguay.
For interviews or more information, please contact [email protected]
© Inter Press Service (20251217065522) — All Rights Reserved. Original source: Inter Press Service
