The Second Brain
How AI Might Support Soldier Decision-Making
Artificial intelligence is increasingly discussed as a transformative force in modern warfare, but much of that discussion risks overstating certainty. Rather than replacing soldiers or commanders, AI is more realistically emerging as a form of cognitive support, a “second brain” that might assist human decision-making in environments defined by uncertainty, speed, and information overload.
This framing matters. Military decision-making is not simply about processing data faster; it is about judgement under pressure, ethical responsibility, and contextual understanding. AI systems may help bridge the growing gap between the volume of information available on the battlefield and the limited cognitive bandwidth of human operators, but they are unlikely to replace human decision-makers outright, and arguably should not (1).
This article explores how AI might support soldier decision-making, where its strengths and limits lie, and why the human role remains central even as machines become more capable.
The Cognitive Burden of Modern Warfare
Modern battlefields generate data at a scale that far exceeds historical norms. Sensors, drones, satellites, cyber monitoring tools, electronic warfare systems, and networked platforms produce continuous streams of information. At the tactical level, soldiers and junior commanders must interpret this data while navigating physical danger, fatigue, and stress. At the operational and strategic levels, commanders face the challenge of integrating vast datasets into coherent plans under time pressure.
Chatham House highlights this widening cognitive gap as one of the core drivers behind military interest in AI (1). Human cognition evolved for relatively small-scale environments, not for processing thousands of data points in real time. AI systems, by contrast, are well suited to tasks such as pattern recognition, correlation across datasets, and rapid optimisation.
The idea of a “second brain” reflects this division of labour. AI may be able to absorb, filter, and structure information, while humans retain responsibility for interpretation, judgement, and action.
AI as Decision Support, Not Decision Authority
In most credible military applications, AI is framed as a decision-support tool rather than a decision-maker. This distinction is critical. Decision support systems aim to improve the quality and speed of human decisions by presenting relevant information, highlighting risks, and suggesting possible courses of action, without removing human agency (2).
For example, AI systems might fuse intelligence from multiple sensors to identify likely enemy positions, assess terrain constraints, or forecast logistical bottlenecks. They may present commanders with probabilistic assessments rather than definitive answers, helping humans understand trade-offs and uncertainties rather than dictating outcomes.
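As a purely illustrative sketch of this idea, a decision-support tool might combine independent sensor reports into a probability rather than a verdict, surfacing the inputs behind it so the commander can weigh the trade-offs. The sensor names, reliability figures, prior, and independence assumption below are invented for the example, not drawn from any fielded system.

```python
from dataclasses import dataclass

@dataclass
class SensorReport:
    """One sensor's detection, with its assumed reliability characteristics."""
    name: str
    detected: bool
    p_detect_if_present: float   # assumed true-positive rate
    p_detect_if_absent: float    # assumed false-positive rate

def fuse_reports(prior: float, reports: list[SensorReport]) -> float:
    """Naive Bayesian fusion of independent reports into a posterior probability.

    Returns P(target present | reports), assuming sensor independence,
    which real battlefield data would rarely satisfy exactly.
    """
    p_present, p_absent = prior, 1.0 - prior
    for r in reports:
        if r.detected:
            p_present *= r.p_detect_if_present
            p_absent *= r.p_detect_if_absent
        else:
            p_present *= 1.0 - r.p_detect_if_present
            p_absent *= 1.0 - r.p_detect_if_absent
    return p_present / (p_present + p_absent)

# Hypothetical reports: the tool presents a probability and its inputs,
# not a directive, leaving interpretation to the human.
reports = [
    SensorReport("drone_eo", True, 0.80, 0.10),
    SensorReport("ground_radar", False, 0.60, 0.05),
]
print(f"P(enemy position occupied) = {fuse_reports(prior=0.20, reports=reports):.2f}")
```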
Research consistently suggests that this supportive role is where AI’s greatest near-term value lies. Attempts to delegate complex, morally charged decisions entirely to machines raise serious ethical, legal, and operational concerns, particularly in the use of force.
Shared Cognition and Human–Machine Teaming
One useful way to understand AI’s potential role is through the concept of shared cognition. Rather than viewing AI as a tool that replaces human thinking, shared cognition treats decision-making as distributed across human–machine teams (3).
In this model, AI systems might handle tasks such as data triage, anomaly detection, and scenario modelling, while humans focus on intent, context, and moral judgement. The combined system may perform better than either humans or machines alone, provided the interaction is well designed.
However, shared cognition is fragile. Poor interface design, opaque algorithms, or misplaced trust can undermine the relationship. Studies of human–machine interaction have shown that operators may either over-trust AI outputs (a tendency known as automation bias) or under-trust them, ignoring useful insights altogether (4).
Designing AI systems that calibrate trust appropriately, neither overstating confidence nor obscuring uncertainty, remains a major challenge.
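One way to make uncertainty harder to ignore, sketched here with invented thresholds and message wording, is to route every output through a presentation layer that states its confidence band explicitly instead of offering a bare answer, and that refuses to force a call when the model genuinely cannot distinguish.

```python
from dataclasses import dataclass

@dataclass
class Recommendation:
    label: str         # e.g. "possible armoured vehicle"
    confidence: float  # model-reported probability in [0, 1]

def present_to_operator(rec: Recommendation,
                        low: float = 0.4, high: float = 0.8) -> str:
    """Translate a raw model score into an explicitly hedged message.

    The thresholds are illustrative; in practice they would come from
    measured calibration data, not hand-picked constants.
    """
    if rec.confidence >= high:
        return f"{rec.label} (high confidence, {rec.confidence:.0%}) - verify before acting"
    if rec.confidence <= low:
        return f"{rec.label} (low confidence, {rec.confidence:.0%}) - treat as a cue only"
    return (f"UNCERTAIN: {rec.label} at {rec.confidence:.0%} - "
            "system cannot distinguish; human judgement required")

print(present_to_operator(Recommendation("possible armoured vehicle", 0.63)))
```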
Potential Applications at the Tactical Level
At the tactical level, AI might support soldier decision-making in subtle but meaningful ways. Wearable sensors and AI-driven analytics could help monitor cognitive load, stress, or fatigue, allowing commanders to better manage personnel and timing (4). AI-enhanced navigation systems might help dismounted troops interpret complex urban terrain or degraded GPS environments.
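A minimal sketch of the monitoring idea appears below. The workload score, window size, and threshold are assumptions made up for illustration; a real system would derive them from validated physiological research, and the output is a flag for a commander, not an automatic order.

```python
from collections import deque
from statistics import mean

class FatigueMonitor:
    """Rolling-average monitor over a hypothetical workload score in [0, 1]."""

    def __init__(self, window: int = 5, threshold: float = 0.7):
        self.readings = deque(maxlen=window)
        self.threshold = threshold

    def update(self, workload_score: float) -> bool:
        """Add a reading; return True when the rolling average suggests overload."""
        self.readings.append(workload_score)
        return (len(self.readings) == self.readings.maxlen
                and mean(self.readings) > self.threshold)

monitor = FatigueMonitor()
for score in [0.5, 0.6, 0.75, 0.8, 0.85, 0.9]:
    if monitor.update(score):
        print(f"Sustained high workload ({score:.2f}) - flag for the commander")
```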
Target recognition systems could assist soldiers by flagging potential threats or anomalies in sensor feeds, particularly in cluttered or low-visibility conditions. Importantly, such systems would still require human confirmation, ensuring that lethal decisions remain under human control.
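The human-confirmation requirement can be expressed directly in the system architecture. In the sketch below, which uses invented identifiers and a placeholder confirmation interface, the model can only raise a flag; a human decides whether that flag becomes a confirmed track or is discarded.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    track_id: str
    label: str         # model-suggested classification
    confidence: float  # model-reported score

def triage(det: Detection, human_confirms) -> str:
    """The model can only flag; a human decides what the flag becomes.

    `human_confirms` stands in for whatever confirmation interface exists;
    here it is simply a callable returning True or False.
    """
    if det.confidence < 0.5:
        return f"{det.track_id}: logged for observation only ({det.confidence:.0%})"
    if human_confirms(det):
        return f"{det.track_id}: confirmed by operator as '{det.label}'"
    return f"{det.track_id}: rejected by operator - flag discarded"

detection = Detection("T-042", "possible hostile vehicle", 0.74)
print(triage(detection, human_confirms=lambda d: False))
```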
These applications do not remove the soldier from the decision loop. Instead, they aim to reduce cognitive strain, improve situational awareness, and support more informed choices under pressure.
Operational and Strategic Decision Support
At higher levels of command, AI may have an even greater impact. Operational planning involves balancing logistics, intelligence, timing, and political constraints across wide geographic areas. AI systems could help generate and test multiple courses of action, identify vulnerabilities, and simulate potential outcomes faster than traditional staff processes (5).
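To make the "generate and test" idea concrete, the toy sketch below scores two hypothetical courses of action with a crude Monte Carlo simulation. The parameters, success rule, and course-of-action names are invented for illustration; a staff tool would draw its inputs from real logistics and intelligence data, and would present the scores as a comparison, not a choice.

```python
import random

def simulate_coa(supply_margin: float, delay_risk: float, runs: int = 5000) -> float:
    """Crude Monte Carlo estimate of the chance a course of action stays supplied on time."""
    successes = 0
    for _ in range(runs):
        consumption = random.gauss(1.0, 0.15)   # demand relative to plan
        delayed = random.random() < delay_risk  # did a convoy slip?
        if supply_margin >= consumption and not delayed:
            successes += 1
    return successes / runs

# Hypothetical courses of action, each scored rather than selected for the commander.
coas = {"COA-A (direct route)": (1.1, 0.30), "COA-B (longer route)": (1.3, 0.10)}
for name, (margin, risk) in coas.items():
    print(f"{name}: estimated supply success = {simulate_coa(margin, risk):.0%}")
```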
Recent defence initiatives suggest growing interest in AI-assisted planning tools that compress decision cycles and allow commanders to explore options dynamically. These systems may help decision-makers understand second- and third-order effects, particularly in multi-domain operations spanning land, sea, air, cyber, and space.
Yet even here, uncertainty remains central. AI models are only as good as their data and assumptions. Adversaries actively seek to deceive, manipulate, or degrade information systems, creating environments where algorithmic predictions may be unreliable (1).
Limits, Risks, and Human Factors
While AI might support decision-making, it also introduces new risks. One of the most persistent challenges is explainability. Many advanced AI systems, particularly those based on deep learning, struggle to provide transparent reasoning for their outputs. This can undermine trust and make it difficult for humans to challenge or validate recommendations (6).
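By contrast, simpler models can expose their reasoning directly. The sketch below uses a transparent linear score with per-feature contributions, so an operator can see, and challenge, exactly which factor drove a recommendation; the feature names and weights are invented for the example, and deep models do not offer this kind of breakdown natively.

```python
def score_with_explanation(features: dict[str, float],
                           weights: dict[str, float]) -> tuple[float, list[str]]:
    """Score a simple linear model and report each feature's contribution."""
    contributions = {k: features[k] * weights.get(k, 0.0) for k in features}
    total = sum(contributions.values())
    reasons = [f"{k}: {v:+.2f}" for k, v in
               sorted(contributions.items(), key=lambda kv: -abs(kv[1]))]
    return total, reasons

score, reasons = score_with_explanation(
    features={"thermal_signature": 0.9, "movement_pattern": 0.4, "civilian_density": 0.8},
    weights={"thermal_signature": 1.2, "movement_pattern": 0.8, "civilian_density": -1.5},
)
print(f"risk score {score:+.2f}")
print("because:", "; ".join(reasons))  # the operator can challenge any single factor
```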
Another concern is cognitive dependence. Over time, operators may come to rely on AI suggestions as defaults, potentially eroding independent judgement. This risk is especially acute in high-tempo environments where time pressure discourages critical reflection (3).
There are also technical limitations. Battlefield data is often incomplete, noisy, or deliberately manipulated. AI systems trained on peacetime or simulated datasets may perform unpredictably in contested environments. As a result, AI outputs should be treated as inputs to human reasoning, not authoritative answers (2).
Ethical and Legal Considerations
The ethical dimension of AI-supported decision-making cannot be separated from its technical aspects. International humanitarian law requires human responsibility for decisions involving the use of force. AI systems may inform those decisions, but accountability must remain with human actors.
Ethical frameworks increasingly emphasise the importance of human moral agency, ensuring that humans retain the ability and authority to make value-based judgements, particularly in ambiguous or high-risk situations.
This has implications for system design, training, and doctrine. AI systems must be designed with override mechanisms, uncertainty indicators, and clear boundaries on autonomy. Soldiers and commanders must be trained not just to use AI tools, but to question them.
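Such boundaries can be made explicit in code rather than left to procedure. The sketch below, whose autonomy levels and wording are illustrative and not drawn from any real doctrine, shows the basic shape: outside its permitted level, the system cannot act at all, and even inside it, nothing happens without explicit human approval.

```python
from enum import Enum, auto

class Autonomy(Enum):
    ADVISE_ONLY = auto()        # system may suggest, never act
    ACT_WITH_APPROVAL = auto()  # system may act only after explicit human approval

def execute(action: str, level: Autonomy, human_approved: bool = False) -> str:
    """A hard boundary in code: the system cannot exceed its permitted autonomy level."""
    if level is Autonomy.ADVISE_ONLY:
        return f"SUGGESTION ONLY: {action} (awaiting human decision)"
    if level is Autonomy.ACT_WITH_APPROVAL and human_approved:
        return f"Executing under human approval: {action}"
    return f"Blocked: {action} requires explicit human approval"

print(execute("reroute resupply convoy", Autonomy.ADVISE_ONLY))
print(execute("reroute resupply convoy", Autonomy.ACT_WITH_APPROVAL, human_approved=False))
```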
A Cognitive Multiplier, Not a Replacement
The idea of AI as a “second brain” is compelling, but potentially misleading if taken too literally. AI does not think, understand, or judge in human terms. What it may offer is cognitive amplification: the ability to process more information, faster, and to present it in ways that help humans think better.
The real challenge lies not in building more powerful algorithms, but in integrating them responsibly into human decision-making processes. Militaries that succeed will likely be those that treat AI as a partner rather than a substitute, investing as much in human training, doctrine, and ethics as in technology.
In future conflicts defined by speed, complexity, and uncertainty, decision advantage may hinge on how effectively humans and machines think together. The second brain may support the first, but it cannot replace it.
References
1. Cummings, M. L. (2017). Artificial intelligence and the future of warfare. Chatham House. https://www.chathamhouse.org/sites/default/files/publications/research/2017-01-26-artificial-intelligence-future-warfare-cummings.pdf
2. International Committee of the Red Cross. (2024). Artificial intelligence in military decision-making: Supporting humans, not replacing them. https://blogs.icrc.org/law-and-policy/2024/08/29/artificial-intelligence-in-military-decision-making-supporting-humans-not-replacing-them/
3. Royal United Services Institute. (2025). Human–machine teaming and shared cognition. https://www.rusi.org/explore-our-research/publications/commentary/human-machine-teamings-shared-cognition-changes-how-war-made
4. ScienceDirect. (2023). Developing AI-enabled sensors and decision support for military operators in the field. https://www.sciencedirect.com/science/article/pii/S1440244023000397
5. arXiv. (2024). Explainable AI for high-risk decision environments. https://arxiv.org/abs/2411.09788
6. Oxford Academic. (2024). Preserving human moral agency in AI-supported military decisions. https://academic.oup.com/ia/article/102/1/63/8355995
