Whether we like it or not, military cyber operations are today’s reality, and they are here to stay. Admittedly, this facet of statecraft is quite new, so it is notable that the International Committee of the Red Cross has just published its report (here) on military cyber operations. Previously I described another report, on the humanitarian consequences of cyber operations (disclaimer: I’m the author). The rest of this post is my analysis of the ICRC’s report on military cyber operations.

The report starts with a sober realisation

“While a number of States oppose the militarization of cyberspace, military cyber operations are firmly established as a role for the armed forces of a number of States, although the span of responsibilities in cyberspace varies between States“

Military cyber operations, attacks, and the like are here, and they are here to stay. But it is not always clear what these cyber operations actually are, so the report makes it clear that its focus is on their disruptive or destructive potential.

“The boundaries of what constitutes offensive cyber operations remain debated, but there is the potential to achieve destructive and/or disruptive outcomes. The conduct of such operations in armed conflict may involve preparatory measures on adversary networks prior to the outbreak of a conflict “

The report says that so far only three countries have publicly disclosed their use of offensive cyber operations during armed conflict:

“At this time the only publicly declared use of cyber operations during armed conflicts has been those by the US, UK and Australia, most notably against the ISg. This was targeted primarily against propaganda facilities but was also apparently used to target individuals and facilities for kinetic strikes“

To this list I would add another country that has confirmed conducting cyber operations in the context of armed conflict: France. In other words, these are operations in cyberspace meant to achieve military objectives.

Cyberspace?

The view of cyberspace in this document appears to be inspired by, or at least similar to, that of the UK’s Ministry of Defence, which refers to the many layers of cyberspace (including the physical, virtual, and cognitive spaces).

Role of armed forces in cyber

Can CERTs be targeted in a conflict?

According to some cyber norms - no, they should not. But during the expert meeting held by the ICRC, it was said that:

“the norms of responsible State behaviour in cyberspace adopted by the UN provide that CERTs should not be deliberately harmed by other States. However, it was suggested that this protection might be affected if during an armed conflict a given CERT was protecting critical infrastructure that had become a military objective “

This is a challenge, and perhaps States would want to delineate more clearly between civilian CERTs and military teams. For example, it is not unheard of for civilian CERTs to participate alongside military teams in certain cybersecurity exercises (e.g. was this the case in NATO’s Locked Shields?). Some additional examples of the difficulties of dividing the work were given:

“This is the case particularly if, as in Australia and the UK, the national CERT is part of the national cyber security centre which is, in turn, a part of the national signals intelligence agency (ASD and GCHQ, respectively) - which is also involved in offensive cyber operations.“

This may make some activities trickier.

Rules applying to cyber

There are, broadly speaking, two sets of rules: those applying in peacetime (where the rules are generally... unclear) and those applying in wartime (where the rules are generally… clearer?). This is somewhat counter-intuitive, because it gives the impression that the understanding is better developed for wartime. The division is not exactly problematic per se, since it is clear that operations below the threshold of armed conflict cannot “deliver” effects (such as destruction or lethal effects). Still, the rules around any limits in peacetime are less well understood. And what happens when peacetime operations smoothly transition into wartime ones?

“... raises the question of how operations transition from collecting intelligence to delivering effects given these concerns and the fact that the international legal framework governing effects operations is much more developed than the framework governing intelligence operations. Experts emphasized that as soon as effects were under consideration, targeting methodologies and safeguards developed for military operations should be employed”

It is also nicely noted that certain entities may employ entirely different tools for non-military and military applications:

“if an operation was to move from intelligence collection to the delivery of effects, then the original cyber tools and infrastructure used to conduct intelligence collection would normally be withdrawn and new or different cyber tools and infrastructure deployed“

The reasons for such a change are surely nuanced, but they are not purely a matter of policy: “this transition would be linked to a change in authorities and targeting methodologies, the primary reason was to preserve intelligence collection capabilities for future use”.

Lack of experience with cyber operations

Collectively, as a world, we do not have much experience with cyber operations. This is a challenge for decision-makers.

“limited amount of experience with these operations to date meant that politicians and policymakers often lack the necessary understanding to be able to make effective decisions to guide, supervise or approve military cyber operations“

Indeed, perhaps there would be some value in Master’s programs in Cybersecurity and Cyberwarfare Policy? I am sure this part of the report will be universally welcomed by the many universities and other higher-education institutions that run cybersecurity programs within international relations or political science.

Freely available vulnerability information is bad?

This is a very controversial part from the full-disclosure policy point of view. The report appears to treat the Exploit Database and Kali Linux as if they were ultimate cyber-offensive tools. Furthermore, the report contemplates withholding information about software vulnerabilities.

“ Two projects maintained by the information security company Offensive Security were mentioned, the Exploit Database and Kali Linux. It was argued that the malicious exploitation of such publicly available resources posed a risk of civilian harm comparable to that caused by military cyber operations, but possibly without any legal and/or political oversight. “

This was later summarised as: “States should consider taking measures to address the availability of vulnerabilities and cyber tools on the internet and to prevent their exploitation by malicious actors”. This is the controversial part. It is controversial because security engineers repeat, again and again, that while disclosure is complex, it helps to improve security. I wonder whether the greatest beneficiaries would actually be some software companies, since the pressure to fix the security of their products would be lifted. Anyway, I leave this discussion to other people, times, and places.



Risk of civilian harm

Damage and harm may be ‘collateral’ and sometimes difficult to anticipate. We often see this when systems are taken down because a ransomware infection is suspected: this is done to protect the systems, but of course it also contributes to the disruption:

“Cyber operators should also take into account that an action taken in response to an otherwise precisely targeted cyber operation may inadvertently cause a greater impact and hence risk civilian harm in a way that the original operation had not intended. For example, an actor conducting an attack on a military facility might seek to disrupt the power supply to that facility. In doing so, it accesses a local power grid; but the CERT, seeing this, believes it to be part of an attack on that grid. The CERT thus shuts down the power grid to manage the perceived threat, which in turn cuts off the electricity supply to the military facility as well as the civilian population“

This means that the reach of a cyber operation may be broader in practice - systems may be affected in ways not anticipated by the attackers.

The report points to an interesting requirement:

“In the planning and conduct of military cyber operations, States should involve expertise from a wide range of sources which, in turn, should be put into straightforward language for decision-makers who might be less familiar with military cyber operations and the risk of civilian harm these operations entail"

Indeed, “cyber-language” may be hermetic, so it must be conveyed properly, with the use of appropriate analogies, etc. At the same time, the analogies should not be flawed: comparing an SSH exploit to an air-to-surface missile or a torpedo might not be optimal, so let’s hope the people involved are much more creative and credible here…

There is also an element related to the need for States to maintain up-to-date and accurate asset control, knowledge, and management:

“States should have a sufficient understanding of the critical connections and dependencies in their own networks in order to be able to focus defensive efforts at the key nodes and to restore their functionality in the event of a destructive or disruptive attack.“

This can be promptly summarised with “good luck with that!”. To be a bit more precise: we know that system operators are very often far from such a state, which makes this something of wishful thinking. We may complain about this state of affairs, but that is how it tends to be in practice.
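Purely as an illustration, and not something from the report: under the assumption that an organisation already maintains an inventory of which system depends on which, identifying the “key nodes” mentioned above could look roughly like this. All system names below are hypothetical.

```python
# Purely illustrative: ranking "key nodes" in a hypothetical dependency
# inventory, where an edge (A, B) means "system A depends on system B".
import networkx as nx

dependencies = [
    ("billing", "database"),
    ("patient-records", "database"),
    ("web-portal", "auth-service"),
    ("auth-service", "database"),
    ("database", "storage-cluster"),
]

graph = nx.DiGraph(dependencies)

# A node that many other systems transitively depend on is a candidate
# "key node" for prioritised defence and restoration planning.
criticality = {
    node: len(nx.ancestors(graph, node))  # systems that break if this node fails
    for node in graph.nodes
}

for node, dependents in sorted(criticality.items(), key=lambda item: -item[1]):
    print(f"{node}: {dependents} dependent system(s)")
```

Of course, the hard part in practice is obtaining and maintaining that dependency inventory in the first place, which is exactly the “good luck with that” point.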

Measures to avoid or reduce civilian harm from military cyber operations during armed conflicts

The discussion of measures to avoid harm opens in a very interesting way: responsibility lies with both the attacker and the defender. “there was a responsibility on those taking offensive action and on those whose networks and systems were at risk of being attacked. This duality of responsibility is also reflected in the norms of responsible State behaviour in cyberspace adopted by the UN. Specifically, one of the norms mandates that States should not conduct cyber operations that would damage critical infrastructure, while another one provides that States should take appropriate measures to protect their own critical infrastructure“

Technical measures

As to the employment of malware (“cyber weapons”), the following technical means are listed as precautions that could be implemented (see the illustrative sketch after the list):

  • system-facing - technically condition the malware so that it only works (i.e. delivers its attack payload) on the targeted system, ‘sparing’ unrelated systems. The one example given is Stuxnet, which “was designed to only operate in a system with a specific configuration of Siemens programmable controllers”. While we could imagine more technically illustrative examples, this one is factually correct.
  • geo-fencing - a variant of the above: limit the malware’s operation to specific IP ranges. This may be error-prone if the IP ranges turn out not to be allocated contiguously, a caveat not present in the report
  • kill-switch - build in a technical ‘switch-off’ that stops the operation of the malware, conditioned on something: “built-in kill switches that for example disable the capability after a specified time period could contribute to decreasing the risk of repurposing and/or spread. Encryption of the capability would also contribute to reducing the risk of reengineering”
  • auto-delete - the malware removes itself from the system after executing the payload. But this is a double-edged sword that may make it more difficult to understand what happened (and it is a technique that actually improves the stealth of the operation)
  • using ransomware-style malware and releasing the decryption key after the end of the operation, so the victim/target can bring its systems back (note that some countries, such as France, could consider this a use of force)
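As a minimal, purely illustrative sketch, not taken from the report: the first three of these precautions (system-facing conditioning, geo-fencing, and a kill-switch) could be combined into gating logic along these lines. Every name and value below is hypothetical, and a real implementation would be far more involved. Note also that the geo-fencing check is only as good as the assumption that the IP ranges are correctly and contiguously allocated.

```python
# Purely illustrative sketch of the "system-facing", geo-fencing and
# kill-switch precautions described above. All names and values are
# hypothetical and chosen only to show the shape of such gating logic.
import ipaddress
from datetime import datetime, timezone

EXPECTED_FINGERPRINT = "SIMATIC S7-315-2"                    # system-facing: hypothetical target fingerprint
ALLOWED_RANGES = [ipaddress.ip_network("198.51.100.0/24")]   # geo-fencing: documentation-range example
EXPIRY = datetime(2022, 1, 1, tzinfo=timezone.utc)           # kill-switch: refuse to run after this date

def should_run(host_fingerprint: str, host_ip: str, now: datetime) -> bool:
    """Return True only if every precautionary condition holds."""
    if now >= EXPIRY:                                 # kill-switch: capability has expired
        return False
    if host_fingerprint != EXPECTED_FINGERPRINT:      # spare unrelated systems
        return False
    address = ipaddress.ip_address(host_ip)
    if not any(address in network for network in ALLOWED_RANGES):  # stay inside the geo-fence
        return False
    return True

# Example: refused because the date is past the hypothetical expiry
# (and the address is outside the allowed range anyway).
print(should_run("SIMATIC S7-315-2", "203.0.113.7", datetime.now(timezone.utc)))
```

Auto-delete and ransomware-style recoverability sit at a different layer (post-execution clean-up and recovery) and are deliberately left out of this sketch.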

This content is then summarised with: “Depending on the situation, a combination of several of these risk-mitigation functionalities could be used for the same malware to minimize the risk as much as possible”. This is interesting because these so-called “risk-mitigation functionalities” are not assessed with respect to whether the techniques would even be considered in practice. If I read it right, all other considerations are left for the interested parties to work out.

There is also another way of ensuring that operations are conducted more safely, namely to keep them under direct control every time: “the most reliable way of ensuring that an offensive cyber tool did not cause unexpected effects and civilian harm was to keep it under direct command and control through all phases of the operation.” The report admits that continuous command and control also increases the risk of the operation being detected and attributed, but notes that such a risk is likely to be considered less significant during periods of armed conflict.

There was also a clear operational example: “the attacker could avoid considerable harm to civilians by tailoring the operation to only corrupt the data related to specific terminals, ships or containers rather than putting ransomware on the entire system, which would have the effect of shutting down the entire port … This way, the party to the conflict would limit the effect of its operation to causing disruption to the adversary’s armed forces logistics capabilities while leaving civilian shipping unaffected”.

As we may imagine, the above-mentioned operation could in practice still result in shutting down the entire port, simply out of precaution. This is, for example, why Ireland’s public health system (the HSE) recently shut down many of its systems purely as a precaution. An understanding of such causal links between interconnected IT systems does not appear in the report, even though we have evidence of their impact. So: “if only it were so simple!” But is it really?

Furthermore, this may well be “the most reliable” approach, but of course only under the assumption that the operators understand the implications of the actions they take. The remark about disregarding stealth during armed conflict is interesting.

Self-burning of tools as a policy?

Is the following remark a stretch? “it was also suggested that States might want to submit their malware once it had been used to a site such as VirusTotal”. Now let’s imagine a country X that used malware Y and then self-burns it by disclosing it publicly (making it impossible to reuse in the future, but also putting its previous uses in a new context). How realistic is that?

Cyber-enabled information operations

“States and other stakeholders should work towards a better understanding of the opportunities that information operations offer to avoid civilian harm during armed conflicts and the risks they may pose."

The report explores a creative (positive) use of information operations directed at civilians: warning them of the effects of ensuing attacks. One practical way of achieving this might be, for example, hacking cellular operators to send text messages… This is written very much with a wartime context in mind.

Digital watermarks

“consideration should be given to digitally marking the equivalent facilities in cyberspace thus ensuring that there is no doubt as to their role. Further, States need to ensure that they protect essential civilian infrastructure“

The idea is to digitally ‘mark’ certain critical infrastructure. The problem: the fact that some of this infrastructure is treated as sensitive/critical is itself often confidential in peacetime. So should it be made public in wartime?

Future

“it was suggested that blockchain would continue to contribute to improving cyber defence”. Apparently no concrete example was given to back up this statement, so we must disregard it. This is the fuzzy part of the report, with vague statements such as “quantum computing would have a significant impact“. If I understand it correctly, many reports today have to include such clauses, if only to have them.

“States and other stakeholders should continue to study the impact of the development of quantum computing, including the risks to civilians potentially posed by the quantum-enabled growth of computational power and the associated dramatic increase in the speed and scale of cyber and other operations“

In case it is not clear to the report’s drafters: States are not doing this. And why would the ‘quantum-enabled growth of computational power’ cause a ‘dramatic increase in the speed and scale of cyber operations’? I feel someone had some thoughts in there, but they are not conveyed appropriately.

Kali Linux as an advanced cyber tool?

“It was also highlighted that the availability of Kali Linux since 2013 had further increased the range of tools available for malicious actors”.

Now, what did we just read? It sounds as if Kali Linux, a toolkit that is very helpful in security engineering, is being cited as an example of an advanced and dangerous tool :-)

“Other measures that could be considered include segregating military from civilian cyber infrastructure and networks and segregating computer systems on which essential civilian infrastructure depends from the internet”
“it can be argued that the key to reducing risk is through reducing the extent of cyber vulnerabilities. An international regime to disclose vulnerabilities to the relevant hardware and software producers could have a significant impact on risk in and through cyberspace.“

It can be argued, but by whom exactly?

This is problematic on many levels. Full disclosure is a big discussion (indeed, I invite you to seek out expert voices on this problem), but a “regime to disclose…” puts potential pressure on whom, exactly? Security researchers? Or is it in principle limited to government organisations only?

Summary

An interesting and needed report. Such topics are not prominently represented in discussions at the top levels of policy-making. The ICRC report will likely influence the discussion concerning cyberwarfare.

Did you like the assessment and analysis? Any questions, comments, complaints, or offers of engagement? Feel free to reach out: me@lukaszolejnik.com