The path towards embracing assured AI use across NATO defence

Opinions and hybrid threats analysis from our associates

It is widely recognised that the US Department of Defense (DoD), the leading Western military technology innovator, has been relatively slow to exploit the full defensive and offensive capabilities of Artificial Intelligence (AI), a clearly revolutionary technology comparable in impact to the swept wing and the jet engine. Given the widespread use of civilian tools such as ChatGPT or Deep Dream Generator, why is this? It was as recently as December 2020 that the US Air Force officially started to use AI in simulated co-pilot and mission-commander roles. One notable exception is the US Navy's X-47B carrier-based mission platform, unfairly criticised as little more than a controllable, reusable missile launcher; it is in fact capable of taking the kill decision, albeit under human oversight. So what are the key logistics and procurement hurdles common to US, UK and NATO military-civilian technology transfer?
There are obvious investment opportunities for armed forces, suppliers, and other institutional stakeholders and investors who recognise the game-changing potential of emerging technologies and the US's track record of developing technologies that profoundly alter the character of warfare. Yet both the defence and civilian sectors are showing what appears to be surprising caution in embracing such technology from a mission-control perspective. To understand this position, we should consider the key challenges posed by mission command and control, and the potential weaknesses of such AI-mediated technology. Clear benefits exist, such as reduced time for data-heavy tasks and operator-independent consistency of results, but so do drawbacks: developing AI tools is expensive, militarily trained expertise is currently scarce, and the current generation of AI systems cannot operate beyond the tasks they were trained for.
The commonly held US position is that, although the 2020 simulated USAF mission went according to plan, the main hurdle preventing AI being rolled out more widely across defence is the fear that AI can be hacked or jammed by enemy tactics. In the UK defence industry, AI is now an acknowledged part of the national defence strategy, while military intelligence agencies enthusiastically embrace the new technologies to 'liberate' an increasingly prioritised human workforce from mundane, large-volume data-crunching tasks, allowing analysts to concentrate on supervised, higher-level decision-making when the AI data-management system raises an alert. Where AI systems currently work best is in processing large volumes of digital data, applying several algorithms in parallel and looking for correlations across multiple components, for example through principal component analysis, to produce quantitative output with high statistical confidence.
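As a minimal illustration of that pattern, the sketch below uses principal component analysis to compress a stream of multivariate readings and flag records whose reconstruction error is unusually high. The data, thresholds and library choices (NumPy, scikit-learn) are assumptions made purely for illustration and do not describe any fielded system.

```python
# Hedged illustration: PCA-based anomaly flagging on synthetic multivariate data.
# All data and thresholds are invented for illustration; no real system is described.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
normal = rng.normal(size=(5000, 12))            # 5,000 routine records, 12 channels
anomalous = rng.normal(loc=4.0, size=(10, 12))  # a handful of out-of-pattern records
readings = np.vstack([normal, anomalous])

# Fit PCA on the bulk of the data and keep the components explaining most variance.
pca = PCA(n_components=5).fit(normal)

# Reconstruction error: records that PCA cannot explain well are flagged for a human analyst.
reconstructed = pca.inverse_transform(pca.transform(readings))
error = np.linalg.norm(readings - reconstructed, axis=1)
threshold = np.percentile(error[: len(normal)], 99.5)

flagged = np.where(error > threshold)[0]
print(f"{len(flagged)} of {len(readings)} records flagged for human review")
```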
In some cases a human analyst may not even spot the faint digital correlations, whereas the AI can improve the decision-making process, significantly decrease processing times and make fewer errors, a clear benefit in time-limited tactical battlefield missions. Furthermore, simulation is to some degree agnostic of the mission theatre environment, whether land, space, air, sea or sub-sea. This foreshortening of data-analysis time is already evident in Earth-observation image analysis, where archive retrieval and analysis have taken enormous strides: a task that five years ago would have required the equivalent of 90 years of man-hours can now be completed in less than a single day.
Consequently, in any partnership between the military and industry at this early stage of AI adoption, funding should be targeted at the areas offering the greatest degree of military reliability and immunity from future adversarial tactics. If past patterns of warfare hold, the first tactical use of such technologies will inevitably generate the first counter-AI technologies, just as the move from analogue to digital jamming in electronic warfare generated its own counter-jamming methods. Military 'game changers' are historically most useful at first application, especially in short-timescale, small-war environments, but are less effective in attritional campaigns fought over long timescales and widely varied geography, and in any case are rarely sufficient on their own to achieve tactical victory. One recent example is the use of armed military and weaponised civilian drones by Azerbaijani forces against the Christian enclave of Nagorno-Karabakh. The sudden 'blitzkrieg' that began on 27 September 2020 was devastating for that community: the Azerbaijani offensive, combining surveillance and targeting drones to maximise impact, made significant gains while attracting far less attention than has more recently been focused on the Israel-Gaza conflict. A repeat of such tactics today would likely be much less effective, given the rapid evolution of counter-drone technologies even in the last three years.
But how large is the AI market, and how does it apply to the military?
The AI market was valued at US$81 billion in 2022, with recent progress in algorithms and machine learning (ML), alongside faster semiconductor chips, increasing processing power while decreasing both computation time and real power consumption, and improving AI's track record in solving real-world problems. The market is expected to grow by another 34% over the 2022–2030 timeframe, crossing the US$100 billion threshold in both the business and military domains. Certainly, AI's influence on how decisions are made and implemented will be felt more strongly in the civilian world than in the military, and so minimising security and safeguarding risks to civilian infrastructure data-management systems (e.g. civil aviation authorities, banking, power-grid control) must also be addressed adequately. Many business executives believe they will need to use AI to achieve their growth objectives within an 'AI-weaponised' space filled with competing customers, a space in which many of the counter-AI problems already understood by the military also apply. In addition, military leaders and top business executives, often lacking the technical background themselves, struggle to know how best to develop and implement AI within their own business models. For commercial and military success, the benefits of AI must outweigh its deficiencies. The benefits are attractive and clear: data-heavy tasks take less time, saving human effort and labour costs, reducing work dissatisfaction and increasing productivity, while algorithms provide quantified statistical output, are always available (even at weekends), and bring confidence to decision-making in well-bounded tasks, for example image recognition for a mission objective to an acceptable threshold level.
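As a quick check, and assuming the 34% is read as a cumulative increase over the period rather than an annual rate (our reading, not stated explicitly above), the two figures are consistent:

$$\$81\ \text{billion} \times 1.34 \approx \$108.5\ \text{billion} > \$100\ \text{billion}.$$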
However, the biggest downside of such technology is the currently limited supply of skilled computing staff who can build and train the required algorithms from scratch, since few 'off-the-shelf' packages are yet available. The analogy of digitising the analogue battle space is a weak one: most of the principles and underlying methodology are the same in both analogue and digital domains; it is the precision and flexibility of digital techniques that make them so effective and fast. And in many digital fields that have since expanded (e.g. space imaging), the move from the relatively limited number of spectral bands of Landsat to hyperspectral sensors such as AVIRIS built on pre-existing, well-understood band-analysis tools rather than entirely new ones.
However, the military is charting new territory in the digital generative AI market, and is best placed first to exploit areas where it has already at least embarked upon the journey, namely cyber and large-scale, real-time analysis of electronic surveillance of electromagnetic signals and telecommunications traffic. As such, DoD acquisition policy would be wise to prioritise establishing overall AI governance and AI procurement guidance for negotiating with private companies, as the DoD is unlikely to have the specialist in-house capability to provide the required algorithmic solutions in the timescale its developing objectives will demand. Success will depend to a great degree on the specific SMART objectives set, but also on the development of cross-industry research initiatives that shape future applications of these and other technologies. It is no accident that AI is so often coupled with synergistic growth in the Internet of Things (IoT) and quantum computing. Future applications of IoT will be wide-ranging, including civilian health monitoring, augmented reality (AR), situational awareness from drones, vehicle management, target recognition and more. For instance, smart sensors can be fitted to military equipment to report on its 'health' and whether maintenance is required; this reduces operating costs and down-time, as the sensors predict when a breakdown is imminent, technology already found on systems such as Patriot. However, IoT risks lie not only in physical hardware such as Patriot, but also in connectivity, data layers, applications and data services. The interconnected nature of IoT increases the potential access points for terrorists and hijackers, meaning any attack could have devastating consequences, especially for early military IoT adopters such as BAE Systems, Lockheed Martin and Raytheon. This matters: the US held the largest share of IoT-related patent filings in the defence, security and aerospace industry in the third quarter of 2023 (59%), ahead of South Korea (18%) and China (6%), and the US share was 15 percentage points higher than the 44% it accounted for in the second quarter of 2023.
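Returning to the predictive-maintenance idea mentioned above, the following minimal sketch shows how a rising trend in sensor readings might trigger a maintenance flag. The sensor values, limits and data structure are illustrative assumptions only and do not describe Patriot or any other real system.

```python
# Hedged sketch of IoT-style predictive maintenance: all sensor values, thresholds
# and component behaviour are invented for illustration and describe no real system.
from dataclasses import dataclass
from statistics import mean

@dataclass
class SensorReading:
    hours_in_service: float
    vibration_mm_s: float   # RMS vibration velocity
    temperature_c: float

def needs_maintenance(history: list[SensorReading],
                      vibration_limit: float = 7.1,
                      temp_limit: float = 95.0) -> bool:
    """Flag a component when recent readings trend past illustrative limits."""
    recent = history[-10:]                       # look at the last ten readings
    avg_vibration = mean(r.vibration_mm_s for r in recent)
    avg_temp = mean(r.temperature_c for r in recent)
    return avg_vibration > vibration_limit or avg_temp > temp_limit

# Example usage with synthetic data: vibration slowly rising towards the limit.
history = [SensorReading(h, 4.0 + 0.05 * h, 70.0 + 0.2 * h) for h in range(80)]
print("Schedule maintenance:", needs_maintenance(history))
```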
A DARPA-funded, US-led team at Harvard, building upon its very first quantum computing efforts in 1995, has recently developed a hybrid concept that combines 'noisy', error-prone quantum processors with classical systems, using qubits to tackle optimisation problems of interest to defence and industry. Quantum computing uses the laws of quantum mechanics to solve problems too complex for classical computers, achieving better results in the same amount of time, with applications in military sensing, communications, navigation, computing and electronic warfare (a minimal sketch of such a hybrid loop follows the quotation below). For its part, the UK Ministry of Defence published its Quantum Strategy in March 2023, and the UK Minister for the Armed Forces, James Heappey, speaking at the 2023 Global Air & Space Chiefs' Conference in Gloucester, UK, emphasised the importance of quantum computing to the RAF:

When those computers, instead of computing ones and zeros, are computing on atoms, then the vastness of the noise of the ocean, or the vastness of the business of the skies or the vastness of everything that’s happening with a human population on land can be understood and crunched by computers that are working at a speed that we can’t imagine. It will enormously change what our armed forces can do. And we’ve got to be ready to spiral that into our machines when that moment comes.

James Heappey, UK Minister for the Armed Forces
Source: Aviation Week Network
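To make the hybrid quantum-classical pattern described above more concrete, the following minimal, purely classical sketch imitates the structure of such a loop: a classical optimiser repeatedly queries a noisy cost estimator standing in for a quantum processor. The cost function, noise model and optimiser (SciPy's COBYLA) are illustrative assumptions, not the Harvard team's actual method.

```python
# Hedged, purely classical sketch of a hybrid quantum-classical optimisation loop.
# The "quantum" evaluation is simulated with shot noise; nothing here models real hardware.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)

def noisy_cost_estimate(params: np.ndarray, shots: int = 200) -> float:
    """Stand-in for a noisy quantum processor: returns a shot-noise-corrupted
    estimate of a simple cost landscape whose true minimum is at params = 0."""
    true_cost = float(np.sum(1.0 - np.cos(params)))
    return true_cost + rng.normal(scale=1.0 / np.sqrt(shots))

# Classical outer loop: a gradient-free optimiser drives the noisy evaluations.
initial_params = rng.uniform(-np.pi, np.pi, size=4)
result = minimize(noisy_cost_estimate, initial_params, method="COBYLA",
                  options={"maxiter": 300})

print("Estimated optimal parameters:", np.round(result.x, 2))
print("Estimated minimum cost:", round(result.fun, 3))
```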
In conclusion, AI will provide immediate benefit in cyber security, surveillance threat detection, combat mission simulation and supply-chain management (an area of large civilian overlap), but higher-risk areas, such as drones facing counter-AI drone technologies and other autonomous systems, will sit further down the development roadmap until the overall security risks of as-yet-unassured AI technology can be minimised.

Hybrid threats analysis

The CSS aligns subject matter experts from academia, government and industry to counter threats to security globally.
We aim to deliver detailed multidisciplinary analysis of the motivation, actions and locations of groups and individuals seeking to undermine the stability of democratic governments, the legitimacy of global business and the efficacy of international charities.
Students and academics across the University and beyond are welcome to contribute their perspectives, comments, and analyses.