Cyberattacks, information operations, AI, and lethal autonomous weapons systems challenge the rules of war
Technological advances and the rising risks posed by cyberattacks, AI, and autonomous systems are outpacing the rules. This shift blurs the lines between civilians and combatants, threatening core principles of warfare and civilian protection. Changes in our world are occurring faster than ever before.
The International Committee of the Red Cross is concerned that technological developments and the use of cyber operations are outpacing the normative discussions around AI and cyberattacks. I agree. I'm also pleased that the new 5-year Challenges document is out and highlights this dimension; I contributed to the previous one in 2019, when this trend was already evident and emerging. But a lot has changed since then, including the daily use of cyber operations and unmanned vehicles in high-intensity armed conflict (primarily Russia's war in Ukraine, but also elsewhere), the engagement of civilians, and the blurring of lines between civilians and combatants, which jeopardizes a critical pillar of the laws of armed conflict: distinguishing between those who fight and those who must be protected.
There are also tangible risks from information operations (propaganda, information warfare, etc.), which can be highly destabilizing, and that is before even mentioning the risks of hate, violence, or sabotage. Although it is challenging to prove direct cause and effect here, information operations may contribute to violence, cause psychological harm, limit access to essential services, and disrupt humanitarian efforts. A specific example is the use of deepfakes during military operations, where the ICRC emphasizes that warring parties must consistently ensure the protection of civilians and civilian objects, including when deepfakes are employed (no lethal uses of deepfakes have been recorded to date).
Furthermore, the ICRC states that cyber operations can kill. I agree, which is also why, after leaving the ICRC, I highlighted these novel risks in my books Philosophy of Cybersecurity and Propaganda.
There's also another elephant in the room. When civilian tech companies (IT, cyber threat intelligence, data centers, etc.) are contracted to provide cybersecurity or other ICT services to armed forces (connectivity, communications, cloud computing, remote sensing, etc.), there is a risk that their assets, infrastructure, and even employees could lose their legal protection from attack by a party to the armed conflict and be treated as legitimate military objectives. I do not recommend that any civilian voluntarily become a military objective, especially unknowingly.
Using civilians and their smartphones as sensors for weapons detection may likewise strip them of the typical protections guaranteed to civilians.
The increasing risks of AI and Lethal Autonomous Weapon Systems (LAWS) add another layer of complexity. These technologies raise concerns about accountability, adherence to International Humanitarian Law, and the potential for indiscriminate harm. As AI systems take on more decision-making roles, the risk of civilian casualties and violations of the laws of war becomes even more pronounced.
What will happen by 2029 is simply unimaginable.