AI and Modern Wars: How the 2026 US‑Israeli Strikes Rewired the Kill Chain

Sheng-Hsiang Lance Peng

War has always been a theatre of speed, strategy and chaos, but today the performers are changing. No longer is the stage confined to generals, pilots and analysts: artificial intelligence has marched into the war room and pirouetted across screens and networks with a poise that is at once terrifying and exhilarating. As Craig Jones, a lecturer at Newcastle University, notes, AI now compresses the “kill chain,” shrinking the time from target identification to destruction in ways almost unimaginable in previous conflicts. In essence, bombs can drop faster than the speed of thought.

The 2026 US-Israeli strike on Iran, which reportedly led to the death of Ayatollah Ali Khamenei, offers an illustration of this new landscape. According to Jones, such a coordinated operation “would have been impossible, or almost impossible, to do in that way” without AI, highlighting the technology’s role not as a mere aide but as a prime mover in military decision-making. The Pentagon’s rapid pivot from Anthropic’s Claude to OpenAI’s models and xAI’s Grok demonstrates the strategic value of these tools: they are no longer experimental toys but instruments of existential consequence.

From an information science perspective, what is striking is the velocity and volume of data AI systems ingest and interpret. Historically, the US Air Force benchmarked decision-making against the “speed of thought,” a standard that might have taken months from reconnaissance to bombing in the Second World War and Vietnam. Today, terabytes of aerial imagery, electronic signals, human intelligence and social media chatter are parsed almost instantaneously. This is a textbook case of what Floridi (2016) terms the “infosphere,” a hybrid domain where human and artificial agents co-produce knowledge. In the battlefield context, the infosphere is lethal: every byte can translate into a strike.

AI’s influence extends beyond analytics. The OODA loop (observe, orient, decide, act) traditionally described in military theory by Boyd (1987) has become a turbocharged feedback loop under AI. Observation is automated through satellite and electronic data interpretation. Orientation and decision-making are accelerated by predictive algorithms that weigh thousands of scenarios in milliseconds. Action is often executed via autonomous drones, which can function without human oversight when signals are jammed. In other words, AI is both tactician and executor and blurs the line between planning and combat.
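The structure of that turbocharged loop can be made concrete. The sketch below is a deliberately toy model of an automated observe-orient-decide-act cycle, not a description of any deployed system; all names, thresholds and the human-in-the-loop gate are illustrative assumptions. It shows the point the paragraph makes: when the human authorisation step is removed (for instance, when signals are jammed), decision and action collapse into a single machine pass.

```python
from dataclasses import dataclass

@dataclass
class Observation:
    sensor_id: str
    confidence: float  # illustrative classifier confidence that a target is present

def observe(feed):
    """Observe: pull the latest automated sensor reading from a feed."""
    return next(feed)

def orient(obs, threshold=0.9):
    """Orient: weigh the evidence; here reduced to a single confidence check."""
    return obs.confidence >= threshold

def decide(is_target, human_in_loop=True):
    """Decide: require human sign-off unless the loop runs autonomously."""
    if not is_target:
        return "hold"
    return "await_authorisation" if human_in_loop else "engage"

def act(decision):
    """Act: dispatch the decision to an effector (stubbed out here)."""
    return decision

def ooda_step(feed, human_in_loop=True):
    """One full pass through the loop: observe -> orient -> decide -> act."""
    obs = observe(feed)
    return act(decide(orient(obs), human_in_loop))
```

With `human_in_loop=True`, a high-confidence observation stops at `"await_authorisation"`; with it set to `False`, the same observation flows straight to `"engage"`, which is exactly the blurring of planner and executor the paragraph describes.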

Yet speed is not synonymous with certainty. Jones warns that AI in warfare “multiplies, by orders of magnitude, the degrees of error.” Errors are inevitable even in human-led operations, but AI amplifies them. A misclassified target, a misinterpreted signal or a glitch in the data pipeline can cascade into catastrophic consequences. This aligns with Perrow’s (1984) concept of normal accidents, where systems inevitably produce failures due to tight coupling and interactive complexity. The modern AI-enabled war room, with its streams of data and automated responses, exemplifies such tightly coupled complexity on a global scale.

Ethics and accountability form another knot in this algorithmic skein. Amir Husain points out that international law still requires humans to bear responsibility for battlefield decisions, yet AI obfuscates the locus of agency. This raises questions akin to Latour’s (1992) actor-network theory: if both human and machine are actors, who bears moral and legal responsibility when a misfire occurs? Moreover, the ethical calculus is utilitarian: should speed and efficiency outweigh human oversight or does each millisecond saved risk a disproportionate moral cost? These are questions that philosophers from Kant to Singer have wrestled with in analogue domains; in digital theatres of war, they acquire a chilling immediacy.

The theatre is not confined to kinetic strikes. Cyberwarfare has become a playground for AI’s prodigious abilities. Iran, long a sophisticated cyber actor, now experiments with AI-assisted attacks. Using open-weight models like Meta’s Llama or Chinese equivalents such as DeepSeek, Iranian groups can run autonomous reconnaissance, craft convincing phishing campaigns and adapt malware at scale: all without top-tier programmers for each operation (Walter, 2026). Information science concepts such as algorithmic agency and distributed cognition illuminate this dynamic: the system itself becomes an agent, coordinating across networks, detecting vulnerabilities and iteratively learning from responses.

The implications extend globally. AI is now deployed across multiple theatres: autonomous drones over Ukraine, target identification in Israel-Hamas operations and cyber-enabled espionage across infrastructure networks (Mellen, 2026). Each deployment transforms traditional understandings of war: speed trumps deliberation, algorithmic inference supplants human judgement and complexity scales non-linearly. This resonates with the ideas of Hayles (1999) on distributed cognition and the posthuman condition: intelligence is decoupled from a single mind and instantiated across networks of humans, machines and data.

Information theory provides further insight. Shannon’s (1948) conception of information as reduction of uncertainty acquires a lethal gloss here. AI is used to minimise uncertainty about enemy positions, communications and vulnerabilities, converting raw data into actionable knowledge. Yet, the paradox is that in compressing uncertainty, AI also compresses the margin for error. The faster the system acts, the less time there is for human reflection, ethical deliberation or error correction: a chilling realisation for scholars and practitioners alike.

This fusion of data, computation and kinetic power also challenges classic security studies paradigms. Traditional deterrence theory, based on rational actors and predictable outcomes, struggles to account for autonomous, adaptive AI systems (Waltz, 1979). If AI can outpace human decision-making, the strategic calculus shifts: misperception, misalignment or hacking could cascade faster than diplomatic or military intervention. AI then transforms the very ontology of conflict and makes war simultaneously more precise and more unpredictable.

At the same time, the rise of AI in conflict foregrounds information asymmetries. The side with superior data processing capabilities, access to open-weight AI models or advanced cyber infrastructure gains an unprecedented advantage. Yet these asymmetries are mutable: open-source AI lowers the barrier for state and non-state actors alike, enabling sophisticated operations from unexpected quarters. This returns us to Boyd’s OODA loop: the side that can process, act and adapt fastest wins, not necessarily the one with superior numbers or traditional firepower.

Information science illuminates the mechanics and the implications: AI war systems are not neutral tools but cognitive amplifiers, magnifying human capacities and errors alike. They redistribute agency across networks, compress decision times to near-instantaneous scales and challenge ethical, legal and strategic orthodoxies simultaneously. As Floridi (2010) notes, in the infosphere, knowledge and power are inseparable and in the modern war room, knowledge is lethal.

Yet there is an odd beauty in the midst of these convolutions. AI does not tire, complain or panic. It executes the OODA loop with the precision of a mathematician dancing across matrices of probabilities. Its decisions, however morally fraught, arrive faster than any human general or analyst could ever hope to match. But therein lies the rub: speed without reflection, efficiency without empathy and autonomy without accountability may yield victories that are strategically brilliant but ethically bankrupt.

The integration of AI into modern warfare is no longer a future scenario; it is now. From Iran to Ukraine, from autonomous drones to AI-assisted cyberattacks, the theatre of war has been transformed into a network of human and machine cognition. Information science offers a lens to understand this transformation and highlights the velocity, volume and volatility of data-driven decisions. Yet it also underscores the risks: errors multiply, accountability blurs and the ethical stakes are immense.

As we move forward, scholars and policymakers alike must deal with a chilling realisation: in the era of AI and modern wars, machines may well decide who lives and dies. And our challenge as humans is to ensure that ethical safeguards prevent AI from causing unintended chaos.

Cite this article in APA as: Peng, S-H. L. (2026, March 25). AI and modern wars: How the 2026 US‑Israeli strikes rewired the kill chain. Information Matters. https://informationmatters.org/2026/03/ai-and-modern-wars-how-the-2026-usisraeli-strikes-rewired-the-kill-chain/

Author

  • Sheng-Hsiang Lance Peng

    Dr Peng is a Cornwall-based researcher (Falmouth/Exeter). His research explores a phantasmagoria of marginalised experiences through eerie and unsettling lenses including hauntology (Derrida), monster culture (Cohen) and mnemohistory (Assmann) to reflect on the cultural and social conditions shaping them.
