Contemporary debates surrounding military artificial intelligence (AI) frequently centre on the prospect of fully autonomous weapons systems acting beyond human control. While such concerns are not unfounded, they risk obscuring a more immediate transformation already underway. Across modern militaries, AI systems are increasingly embedded in intelligence analysis, surveillance, and operational planning. Their defining feature is not autonomy, but acceleration.
Human commanders and political leaders retain formal authority over the use of force. Yet AI reshapes the conditions under which that authority is exercised. By filtering vast quantities of data and generating confidence-ranked recommendations at speed, these systems compress deliberative time and narrow the space for political contestation. Options once debated amid acknowledged uncertainty are increasingly framed as time-sensitive, data-driven imperatives. As AI becomes normalised within military institutions, it redefines what counts as reasonable delay, responsible action, and acceptable risk. Political leaders still weigh trade-offs between strategic advantage, civilian harm, legal constraints, and long-term consequences, but they do so within decision environments structured to privilege speed and rapid response. To understand the implications of this shift, we must move beyond the language of autonomous weapons and examine how AI functions within existing command structures.
AI as Decision Infrastructure, Not Autonomous Actor
Much of the language used to describe AI warfare suggests a future dominated by autonomous machines. At present, however, the more consequential development lies in the integration of AI into decision-support systems that shape how human actors perceive and interpret situations. The US Department of Defense’s Project Maven, for example, was designed to use machine learning algorithms to analyse drone imagery and flag objects of interest for human analysts. While humans remained formally in the loop, the system dramatically increased the pace at which targeting information was produced and assessed, altering the tempo of operational decision-making.
Similar dynamics have been reported elsewhere. Investigations into the Israel Defense Forces’ use of AI-assisted targeting tools in Gaza describe systems that generate large volumes of potential targets by cross-referencing surveillance data, communications metadata, and behavioural patterns. These systems do not independently authorise strikes, but they accelerate the identification and presentation of targets for human approval. Critics argue that this acceleration can strain meaningful review: when analysts and commanders are presented with extensive target lists under time pressure, there may be less opportunity to scrutinise the underlying data, assess the reliability of algorithmic inferences, or carefully weigh anticipated military advantage against potential civilian harm. In this context, concerns about proportionality stem not from autonomous decision-making as such, but from the risk that speed, scale, and confidence scores attached to AI-generated targets may narrow the space for cautious, case-by-case judgment and effective oversight.
In each case, AI functions less as an autonomous decision-maker than as a form of decision infrastructure embedded within existing hierarchies of authority. It shapes which options appear salient, urgent, or legitimate, often before political debate has time to occur.
If AI reshapes how options are generated and prioritised, it also reshapes how responsibility is experienced; herein lies the political significance of acceleration. War has always involved uncertainty and delay, and these frictions have historically served as sites of accountability. Hesitation allowed leaders to weigh consequences, justify actions, and absorb dissent. AI does not eliminate responsibility, but it alters the conditions under which it is exercised. When systems present recommendations framed as high-confidence assessments, disagreement can appear reckless, and delay can be recast as incompetence.
This dynamic is particularly visible in emerging doctrines that explicitly institutionalise speed, such as the US military’s Joint All-Domain Command and Control (JADC2) initiative, which seeks to integrate data from multiple domains into a single operational picture. According to Congressional Research Service reporting on the initiative, AI systems play a central role in prioritising information and generating rapid response options, with the explicit goal of accelerating the “kill chain” from detection to action. While such systems promise efficiency, they also entrench expectations of speed that are difficult to resist once established. It is against this backdrop of accelerated yet formally human-controlled systems that calls for prohibition must be assessed.
Prohibition Is an Inadequate Response
In response to these developments, calls to ban autonomous weapons have gained prominence. In this context, “autonomous weapons” generally refers to systems capable of selecting and engaging targets without direct human intervention at the point of use, rather than AI-enabled tools that generate recommendations for human decision-makers. Advocacy groups and some states argue that prohibiting such fully or partially autonomous lethal systems is necessary to preserve meaningful human control over the use of force. These efforts have helped foreground ethical and legal concerns about delegating life-and-death decisions to machines. However, they offer more limited guidance for navigating the realities of contemporary military AI, much of which operates not by replacing human judgment outright, but by shaping how human actors interpret information, prioritise options, and exercise their authority.
AI is not a discrete weapon that can be easily prohibited. It is a general-purpose technology embedded across logistics, intelligence, surveillance, and planning systems. As analysts at the UN Institute for Disarmament Research have noted, most military AI applications are dual-use and deeply integrated into broader command architectures, making verification and enforcement of categorical bans extremely difficult. The same machine-learning techniques used to identify military targets may also be used for disaster response, border monitoring, or cyber defence.
Yet the limits of prohibition are not only practical but conceptual. Even if fully autonomous lethal systems were successfully restricted, the most immediate transformations in warfare would remain. AI-driven decision-support tools already shape targeting processes, threat assessments, and operational tempo while keeping humans formally “in the loop.” The political and ethical risks discussed above arise within these hybrid arrangements. For that reason, the more pressing governance challenge is not solely preventing machines from replacing human decision-makers, but ensuring that human control remains substantively meaningful when exercised through AI-mediated systems. Prohibition addresses one possible endpoint of automation; it does not by itself resolve how authority, responsibility, and judgment are reconfigured in the systems that are already in widespread use.
Reintroducing Friction as a Form of Governance
The need to govern acceleration is not merely theoretical. Scholars have warned that AI-driven compression of decision cycles can outpace deliberation itself, as decision-support systems increasingly shape how states recognise threats and judge whether the use of force is necessary or avoidable.
One possible response is to introduce institutional friction deliberately into AI-enabled decision-making processes. This may include mandatory human review at key escalation thresholds, structured delays for particularly high-risk decisions, or independent teams tasked specifically with challenging and stress-testing algorithmic recommendations before they are acted upon. Such measures do not reject the use of AI outright; rather, they seek to ensure that political judgment retains the temporal space necessary for responsibility, justification, and restraint.
Furthermore, this challenge cannot be resolved through technical design alone. The maintenance of meaningful political control requires institutional cultures that value restraint as a marker of judgment rather than inefficiency, as well as political leadership prepared to absorb the strategic and reputational costs associated with deceleration. In the absence of these conditions, systems that nominally preserve human oversight may nonetheless entrench accelerated decision-making environments in which political authority is increasingly symbolic. These governance challenges do not arise only within states – they also shape how states relate to one another.
Transparency, Trust, and Strategic Stability
Managing AI competition between states presents additional challenges. While secrecy may confer strategic advantage, total opacity increases the risk of miscalculation and escalation. Historical experience with nuclear arms control suggests that stability can be strengthened not through full transparency, but through shared norms and confidence-building measures that reduce uncertainty about intentions.
Proposals from organisations such as the OECD and NATO emphasise principles of accountability, reliability, and meaningful human oversight, seeking to establish shared expectations around how AI will be developed and used rather than requiring the disclosure of sensitive technical details. In practice, such norms include commitments to rigorous testing and validation procedures, clear chains of human responsibility for operational decisions, auditability of AI systems, and safeguards to ensure compliance with international humanitarian law. Similar confidence-building measures, such as transparency about doctrine, communication channels during crises, and assurances that humans retain authority over the use of force, could help mitigate fears of sudden, automated escalation, particularly in high-pressure situations where misinterpretation and rapid response cycles heighten the risk of conflict.
Finally, AI warfare raises questions not only about inter-state stability, but also about democratic accountability. As conflict becomes increasingly mediated by algorithms and fought at a distance, it risks fading from public view. Reduced visibility can weaken oversight, particularly in democratic systems where institutional and public scrutiny has at times constrained the use of force: legislative approval requirements for military action, parliamentary inquiries into civilian casualties, judicial review of detention and targeting practices, and investigative journalism have all shaped how governments conduct operations, even where they do not intervene in individual strike decisions. Acceleration compounds the problem, as compressed decision cycles leave less time for external scrutiny or retrospective review. When targeting processes become more technical, data-driven, and classified, it becomes harder for legislators, courts, and the public to assess how decisions are made, potentially narrowing the space for meaningful democratic debate about the conduct and scope of military campaigns.
Empirical research on public trust, including surveys conducted by the Pew Research Center, suggests that legitimacy depends not only on outcomes, but on transparency and perceived responsibility. If decisions about violence are increasingly shaped by technical systems that operate beyond public understanding, democratic consent becomes harder to sustain.
Navigating AI warfare therefore requires renewed attention to civilian oversight, parliamentary scrutiny, and public debate. Political control cannot be preserved solely within military institutions. In many democratic states, mechanisms such as legislative authorisation for the use of force, defence committee hearings, independent inspectors general, judicial review, and public reporting requirements have provided at least partial checks on executive and military power. Controversies over drone strikes, surveillance programmes, and detention practices, for instance, have prompted court cases, parliamentary investigations, and policy reforms that reshaped operational frameworks without dictating individual battlefield decisions. Ensuring that AI-enabled military systems remain subject to comparable forms of review, through transparency about doctrine, clear lines of accountability, and opportunities for external audit, would help maintain the principle that decisions about the use of force ultimately remain matters of political judgment rather than purely technical administration.
Acceleration as a Political Choice
The central issue in governing military AI is not autonomy alone, but speed itself. AI warfare is often described as an inevitable consequence of technological progress. This framing is misleading: while AI is transforming how militaries operate, it does not determine how political authority responds. The deeper risk is not runaway autonomy, but the gradual normalisation of acceleration that erodes deliberation, accountability, and restraint.
Navigating AI warfare is therefore a political task. It requires recognising speed as a source of power that must itself be governed and insisting that political judgment remain meaningful even within decision-making environments designed for rapid response. The question is not whether humans retain formal involvement in these processes, but whether political authority can continue to exercise control over decisions increasingly shaped by machine-generated assessments.
If political control erodes, it will not be because AI supplants human authority; it will be because political institutions fail to preserve the time, scrutiny, and independence necessary for genuine judgment.
Anisha Mohammed is an undergraduate reading Philosophy, Politics and Economics (PPE) at the University of Oxford. She is interested in how emerging technologies reshape political power, particularly in the context of international relations and security.

