May 29, 2024

Raven Sentry: Employing AI for Indications and Warnings in Afghanistan

Thomas W. Spahr
©2024 Thomas W. Spahr

ABSTRACT: This article examines Raven Sentry, a project that employed artificial intelligence to provide advance warning of insurgent attacks in Afghanistan. During 2019 and 2020, the Resolute Support Deputy Chief of Staff for Intelligence (J2) benefited from a command culture open to innovation, the urgency created by the US drawdown, and a uniquely talented group of personnel that, aided by commercial sector experts, built an AI system that helped predict attacks. The war’s end cut Raven Sentry short, but the experience provides important lessons on AI and the conditions necessary for successful innovation.

Keywords: artificial intelligence, Afghanistan, military intelligence, innovation, culture


Historian A. J. P. Taylor argued that “war has always been the mother of invention.” This statement is commonly associated with the advent of the tank during World War I or the atomic bomb in World War II but is no less true of the wars in Afghanistan and Iraq in the twenty-first century. Soldiers, sailors, airmen, and marines innovated throughout these conflicts, including with artificial intelligence (AI). As US and NATO forces began to draw down in Afghanistan, the Deputy Chief of Staff for Intelligence (J2) sought ways to maintain awareness and advance notice of enemy attacks. The command culture throughout the Resolute Support headquarters in Kabul was particularly open to testing emerging concepts, and the intelligence team consisted of a unique group of personnel who understood the promise of AI and had a network of contacts throughout the Department of Defense (DoD) and in the commercial sector who could help.1

Under pressure to solve the growing challenge of maintaining awareness with fewer intelligence resources, the Resolute Support team developed an AI model called Raven Sentry using only unclassified data sources to predict future attacks on Afghan district and provincial centers. Raven Sentry began operating in late 2020, but the US withdrawal from Afghanistan in 2021 cut the experiment short. In that brief time, the project demonstrated how AI could benefit military analysts working in a coalition environment with access to large volumes of sensor data. As an active participant in the project, I witnessed several valuable lessons and believe the case study presented here can help leaders understand the potential value and challenges of employing AI and the organizational conditions necessary for successful innovation during future conflicts.

The Problem

During 2019 and 2020, US and coalition forces decreased the number of military personnel in Afghanistan as part of their exit strategy. Over the previous 18 years, the coalition developed a robust human intelligence (HUMINT) network throughout Afghanistan that would be nearly impossible to maintain without ground forces. Further, intelligence units could “soak” areas at risk of attack with aircraft-mounted collection platforms stationed in Afghanistan and warn local forces of pending attacks. As the drawdown accelerated, touch points with the population decreased, intelligence-gathering aircraft relocated to higher-priority regions of the world, and fewer analysts were available to process information. Consequently, maintaining awareness of events in many regions became more difficult. Insurgents exploited the degraded intelligence collection and analytical capabilities to attack government centers, generating press attention that undermined the Government of the Islamic Republic of Afghanistan’s credibility. Except for the seven-day reduction in violence related to the peace agreement signed in February 2020, insurgent-initiated violence exceeded the norms during late 2019 and throughout 2020.2

During late summer and fall of 2019, as the United States neared a withdrawal agreement with the Taliban, intelligence officers at the Resolute Support headquarters and the Special Operations Joint Task Force-Afghanistan (SOJTF-A) sought ways to maintain situational awareness with fewer analysts and collectors. Around the same time, members of the Intelligence Community (IC) contacted Resolute Support and informed the intelligence leadership that the IC was making progress in developing AI-enabled warning models that could create efficiencies in the US Forces Afghanistan analytical processes.

Shortly thereafter, the intelligence team assessed that a well-designed and trained AI model could recognize insurgent patterns and predict future attacks by processing open-source intelligence (OSINT) surrounding these events. The Resolute Support J2 (intelligence) leadership sensed the emerging challenge of maintaining awareness and directed the analytical team to explore how to develop this AI-enabled capability.

AI, the Military, and Intelligence

Artificial intelligence is rapidly changing the world and could revolutionize warfare. General Mark A. Milley, former Chairman of the Joint Chiefs of Staff, recently argued, “Today, we are witnessing another seismic change in the character of war, largely driven by technology.” He went on to cite “[l]ow-cost automation platforms, coupled with commercial imagery and behavior tracking data augmented by artificial intelligence (AI) and analysis tools,” as central to this change. Although narrow in scope (the AI focused on high-profile attacks on district and provincial centers), Raven Sentry provided important groundwork for the type of AI development Milley referenced.3

Much of the military’s current research focuses on increasing the speed of the sensor-to-shooter link, or the period from when US forces collect intelligence on a target to the arrival of lethal effects. The 2023 DoD Data, Analytics, and Artificial Intelligence Adoption Strategy identifies “battlespace awareness and understanding” and “[f]ast, precise, and resilient kill chains” as two of its five decision advantage outcomes. Raven Sentry was an early attempt to achieve these goals by increasing intelligence analysts’ speed and efficiency at processing large volumes of information by employing an AI algorithm that could predict future attack locations.4

The team developing Raven Sentry was aware of senior military and political leaders’ concerns about proper oversight and the relationship between humans and algorithms in combat systems. Experts continue to debate the necessity and degree to which humans must be “in the loop” when making decisions on managing tasks, allocating resources, or, most importantly, releasing weapons. Early AI prototypes for intelligence, such as the Algorithmic Warfare Cross-Functional Team (Project Maven) established in 2017, were narrow in scope, meaning they solved a specific problem. For Maven, innovators enhanced analysts’ ability to process large volumes of imagery data using object-recognition software. Humans remained central to the process. Raven Sentry used environmental factors, open-source imagery, news reports, and social media posts to predict areas at risk of insurgent attack, which would then focus analysts’ attention on that region. Like Maven, it focused on increasing the efficiency of intelligence analysts trying to solve a specific problem. It was human-machine teaming with humans making decisions.5

A 2018 Center for Strategic and International Studies report identified a friendly organizational “ecosystem” as necessary for successful AI innovation. As intelligence leaders contemplated investing in an AI system in Afghanistan, they were concerned that the ecosystem in military units was unconducive to this type of experiment. A healthy ecosystem includes the digital infrastructure to support the processes, a culture committed to building trust between humans and technology, and a skilled workforce that understands AI. If the right talent is not present, individuals are closed to the idea that an algorithm can increase their efficiency, and leaders are unwilling to tolerate experimentation and change, then AI tests are doomed to fail. NATO’s Resolute Support intelligence leaders questioned whether the culture would tolerate early failures and whether they could assemble the necessary talent. As such, they cast a wide net for talent across the task force and looked to the commercial sector for help. Finally, intelligence leaders sought and found an environment conducive to experimentation within SOJTF-A.6

Besides organizational culture and technological talent, the team encountered other obstacles common to AI experiments. The data curation challenge throughout Raven Sentry’s development was overcome only by limiting the algorithm’s geographic focus and dedicating considerable time to data curation early on. Difficulty with data formats, particularly when attempting to ingest a variety of information, is a regular theme of AI application studies. In 2018, Cortney Weinbaum and John N. T. Shanahan argued, “Future intelligence tradecraft will depend on accessing data, molding the right enterprise architecture around data, developing AI-based capabilities to dramatically accelerate contextual understanding of data through human-machine and machine-machine teaming.” Weinbaum and Shanahan also predicted OSINT would become the prevalent form of intelligence in the future. In early 2020, the innovation team in Afghanistan witnessed these predictions come to fruition.7

Developing Raven Sentry

Assembling a skilled workforce was a top priority as the Resolute Support team explored an AI solution to mitigate the drawdown’s effects. Intelligence leaders decided early on to consolidate efforts and searched the task force for data-savvy personnel. In late 2019, the intelligence leadership assembled an innovation team at the special operations headquarters, where the culture seemed friendliest to experimentation, and the unit seemed willing to tolerate early failures. The SOJTF-A commander and senior intelligence officer were deeply interested in artificial intelligence and willing to expend resources to experiment. After relocating several analysts to the SOJTF-A headquarters, the team affectionately dubbed the talented innovation office the “nerd locker.” The SOJTF-A leaders required that these team members pull shifts on the operations floor. This integration attuned the analysts to operational needs and built trust with those who eventually executed missions using Raven Sentry’s reports. As the experiment gained momentum and pressure increased from the pending drawdown, senior SOJTF-A leadership recognized the AI experiment’s potential and directed resources and prioritization of manpower to its development.

This new model required a deep understanding of insurgent behavior. The first step was to develop a detailed event matrix for district and provincial center attacks. Most team members had served repeated tours in Afghanistan over the 18-year conflict and were aware of patterns that could help predict insurgent attacks. For example, one analyst built attack templates with Lester Grau at the US Army’s Foreign Military Studies Office, which then helped train units deploying to Afghanistan. They based these templates on recurring patterns of attacks dating back to experiences with Russia during the 1980s. The team found it could reliably predict when insurgent activity would occur based on static or repeating factors (such as weather patterns, calendar events, increased activity around mosques or madrassas, and activity around historic staging areas) and influencing factors (such as friendly forces’ behavior, activity at Afghan National Police bases, and civilians closing markets early or avoiding mosques). In some cases, modern attacks occurred in the exact locations, with similar insurgent composition, during the same calendar period, and with identical weapons to their 1980s Russian counterparts.

During this process, the Foreign Military Studies team observed that they could predict larger-level attacks (for example, attacks on a district center) by tracking a series of events happening close together in time. The challenge was that these warning signals were widely distributed, faint, and typically imperceptible to current sensors and analytic tools.

By 2019, the digital ecosystem’s infrastructure had progressed, and advances in sensors and prototype AI tools could detect and rapidly organize these dispersed indicators of insurgent attacks. The identification process for additional indicators included discussions with Afghan military personnel who provided cultural context and warning signatures not always evident to non-Afghans. Further, there was 18 years’ worth of historical OSINT data in the national databases to conduct initial training and testing of the model.

Even with the expertise gathered at the Special Operations Joint Task Force, the team quickly determined it would need commercial-sector support to design and deploy an AI-enabled warning system and curate data into a usable format. Technological advances in business and university experiments simply outpaced military expertise. Using their professional networks in the commercial sector, and with help from the Defense Innovation Unit in Silicon Valley, the innovation team identified an industry partner capable of developing the model. In late 2019, US Forces Afghanistan leadership agreed to fund the AI experiment, including the cost of engineers from the commercial sector.

The Defense Innovation Unit helped contract a team of engineers. Convincing the intelligence community in Washington to support the project was a larger challenge. Some critics questioned the value and technical approach as the Afghanistan conflict drew to a close. Others cited bureaucratic reasons, including the rapid-contracting approach and classification concerns of working with uncleared civilian engineers. Finally, familiar concerns about using AI in combat systems emerged, and there were questions over who would control the development, how units would use the outputs, and who was authorized to approve the model’s deployment. Support from top commanders in the Resolute Support headquarters and senior SOJTF leaders eventually overcame these objections, but not without several briefings and high-level phone calls late at night in Afghanistan.

From the start, the analysts in Afghanistan decided to use only unclassified inputs so the uncleared engineers could work with the data, and so the team could share all its findings with the Afghans. They briefly experimented with classified databases like the Combined Information Data Network Exchange (CIDNE) that was foundational to much of the trend analysis conducted by intelligence analysts in Afghanistan but found the process of moving this information to an unclassified network too onerous and slow.8

Open-source press reports acted as a gateway for historical attacks that could train the model. If the press reported an attack on a provincial or district center, it was likely significant enough for commercial sensors to notice. Press reporting from commercial databases proved foundational to identifying historic provincial and district center attacks. If an attack hit these databases, the team could go back and gather commercial imagery and social media posts and convert them to data to train the algorithm.
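The labeling pipeline described above can be sketched as a small helper that turns each press-reported attack into a collection window for historical imagery and social media. The window length, function name, and date are illustrative assumptions, not the project's actual parameters:

```python
from datetime import date, timedelta

# Illustrative lookback: "several weeks" of pre-attack activity.
LOOKBACK_DAYS = 42

def training_window(attack_date: date, lookback_days: int = LOOKBACK_DAYS):
    """Return (start, end) dates bounding the commercial imagery and
    social media to pull for one press-reported historical attack."""
    return attack_date - timedelta(days=lookback_days), attack_date

# Each attack found in a press database seeds one training example;
# collection queries for its grid square are restricted to this window.
start, end = training_window(date(2019, 7, 12))
```

A query for satellite imagery or social media posts over the attack's location would then be bounded by `start` and `end` before the results are converted to labeled training data.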

Commercial imagery included electro-optical (visible) and synthetic aperture radar. The satellites with higher refresh rates (how often images are captured and available) could better detect changes in activity. Social media reports came from popular platforms and group messaging applications. While social media seemed promising and occasionally contributed, its inconsistent quality and lack of precision made it less helpful than the imagery sources.

Making open-source intelligence data usable was foundational to Raven Sentry’s success. While data formatting is a normal challenge for AI experiments, the variety of formats from disparate commercial sources made this process even more difficult. Further, the analysts had to deconstruct many historical events and label individual parts for the machine to read them. Using reporting of historical attacks, these analysts could then go back several weeks from the event and focus on activities at associated locations (such as mosques, madrassas, insurgent routes, and known meeting places) where they could gather and format more indicators. Understanding insurgent tactics and techniques, including insights from Afghan partners, and limiting the geographic scope around district and provincial center attacks made this task manageable. In time, the engineers developed software that could translate open-source reporting into data the algorithm could read. Even so, data curation and adapting to new report formats was a continual process.

The team also created “influence data sets,” which included factors like weather and political instability that analysts knew were relevant based on templates of previous attacks. For example, attacks were more likely when the temperature was above 40 degrees Fahrenheit, lunar illumination was below 30 percent, and it was not raining. The algorithm used the influence data sets to increase or lower the attack risk, but these sets did not contain direct signatures of pending attacks.
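The influence-data-set logic can be sketched as a multiplier on a baseline risk score. The thresholds (above 40 degrees Fahrenheit, lunar illumination below 30 percent, no rain) come from the article; the multiplicative form and the weights are illustrative assumptions, not the fielded values:

```python
def influence_multiplier(temp_f: float, lunar_illum_pct: float, raining: bool) -> float:
    """Scale a baseline attack-risk score using influence factors.
    Attacks were more likely on warm, dark, dry nights; the weights
    here are invented for illustration."""
    m = 1.0
    if temp_f > 40:            # warm enough for insurgent movement
        m *= 1.3
    if lunar_illum_pct < 30:   # dark night favors infiltration
        m *= 1.2
    if not raining:            # dry weather favors attacks
        m *= 1.1
    return m
```

Note that the influence factors only raise or lower a risk score; as the article stresses, they carry no direct signature of a pending attack on their own.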

Leaders of the innovation cell prioritized standardizing event details, such as codes for provinces and standard naming conventions for provincial and district centers. Analysts used Military Grid Reference System (MGRS) grid squares (one kilometer by one kilometer) as the base unit for location indicators for attacks (as demonstrated in figures 1 and 3), then focused data pulls on these regions, limiting the historical data analysts needed to break down. The data were curated manually into Excel spreadsheets and then into data files for mapping applications (comma-separated values [CSV] and Keyhole Markup Language [KML]), which the engineers could then input into the system. Meanwhile, the engineers perfected the software that could process new commercial imagery or social media messages into data that fed the AI workbooks.
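The manual curation step might look something like the following, writing standardized event rows keyed to one-kilometer MGRS squares into CSV for the engineers to ingest. The field names, codes, and sample grid reference are invented for illustration and are not the project's actual schema:

```python
import csv
import io

# Hypothetical row schema for manually curated indicator events.
FIELDS = ["province_code", "center_name", "mgrs_1km", "event_date", "indicator_type"]

rows = [
    {"province_code": "NAN", "center_name": "Jalalabad",
     "mgrs_1km": "42SXD8999", "event_date": "2019-07-12",
     "indicator_type": "staging_area_activity"},
]

buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=FIELDS)
writer.writeheader()
writer.writerows(rows)
# buf.getvalue() holds the CSV text, which could also be converted
# to KML for display in a mapping application.
```

Standardized province codes and grid keys are what let data from imagery, press reporting, and social media be joined onto the same one-kilometer squares.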

Figure 1. Warning named areas of interest (WNAIs)
(Source: “Artificial Intelligence Enabled Support to Afghanistan Warning” [PowerPoint presentation, NATO Special Operations Component Command—Afghanistan and Special Operations Joint Task Force – Afghanistan, Kabul, October 22, 2020])

Early on, the nerd locker team and Silicon Valley engineers had to curate much of the data manually. The analytical team in Afghanistan regularly led development meetings with stakeholders in Washington, US Central Command Headquarters, and Silicon Valley to make decisions on data standardization as new reports flowed into the system. Restricting data inputs to only unclassified sources facilitated the exchange between different entities involved in the curation. They exchanged files using DoD-SAFE (Secure Access File Exchange) and stored curated data in a DoD cloud service.

Once built, analysts and engineers trained the prototype Raven Sentry warning system using three unclassified databases of historical attacks, then set it to monitor 17 commercial unclassified geospatial data sources, OSINT reporting, and geographic information system (GIS) data sets. Neutral, friendly, and enemy activity anomalies triggered a warning. For example, reports of political or Afghan military gatherings that might be terrorist targets would focus the system’s attention. The model learned to detect movement activity from one place to another along historic insurgent infiltration routes, which triggered warning signatures for a region. Likewise, actions of a local population anticipating an attack could trigger a warning. Usually, several anomalies, often combined with influence data sets, were necessary to push the risk above the warning threshold, as demonstrated in figures 2 and 3.
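The way several faint anomalies combine to cross a warning threshold can be sketched with a noisy-OR style combination. The threshold value, the 0-to-1 scoring scale, and the combination rule are all illustrative assumptions; the fielded model's internals were not published:

```python
WARNING_THRESHOLD = 0.7  # illustrative, not the actual threshold

def risk_score(anomaly_scores, influence_mult=1.0):
    """Combine per-source anomaly scores (each 0-1) for one grid square.
    Noisy-OR: the chance that at least one genuine signal is present,
    scaled by the influence-factor multiplier."""
    none_fire = 1.0
    for s in anomaly_scores:
        none_fire *= (1.0 - s)
    return (1.0 - none_fire) * influence_mult

def triggers_warning(anomaly_scores, influence_mult=1.0):
    return risk_score(anomaly_scores, influence_mult) >= WARNING_THRESHOLD
```

Under this sketch, three weak anomalies of 0.3, 0.4, and 0.35 combine to about 0.73 and cross the threshold, while any one of them alone would not, mirroring the article's point that several anomalies were usually necessary.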

Figure 2. Warning thresholds
(Source: “Artificial Intelligence Enabled Support to Afghanistan Warning” [PowerPoint presentation, NATO Special Operations Component Command—Afghanistan and Special Operations Joint Task Force – Afghanistan, Kabul, October 22, 2020])

The AI warning agent continued to learn from real-world events to improve accuracy. Further, analysts improved the AI tool by identifying key warning inputs of insurgent aggression and highlighting them for the system, comparable to how a listener “likes” a song in the Pandora music application, triggering Pandora to feed the listener more music from that genre. The analysts and engineers constantly tuned the algorithm and curated the data to improve performance. The team could have moved the AI to a classified system and fed it information from more sensitive sources, but the system did not require secret reports to achieve good performance, and using classified information would have excluded the uncleared engineers and delayed sharing with Afghan partners.

In October 2020, analysts determined that Raven Sentry had reached approximately 70 percent accuracy and believed reports could add value to the analytical effort. The analysts monitoring the AI system’s results built weekly reports predicting windows of time when specific government centers were at increased risk. For example, Raven Sentry predicted insurgents would likely attack the Jalalabad provincial center between July 1 and July 12 (see figure 1). The report also predicted the number of casualties with a confidence level based on historic attacks with similar indicators. For example, the warning from July 1 to July 12 might predict 41 fatalities, with a 95 percent confidence interval of 27 to 55. The system would also highlight the grid square where the sensor detected abnormal activity. The designers called these grid squares warning named areas of interest (WNAIs) and, more precise locations, warning risk activity anomaly points (WRAAPs), as demonstrated in figures 1 and 3. The analysts who created the weekly reports then compared the results with other available intelligence to corroborate the model’s output.
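The casualty estimate with its 95 percent confidence interval can be illustrated with a plain normal approximation over historical attacks that showed similar indicators. This is a hypothetical reconstruction for illustration; the fielded model's statistical method was not published:

```python
import statistics as stats

def casualty_estimate(historical_casualties, z=1.96):
    """Point estimate and ~95% confidence interval for expected
    casualties, computed from casualty counts of historical attacks
    with similar indicators (normal approximation of the mean)."""
    mean = stats.mean(historical_casualties)
    sd = stats.stdev(historical_casualties)
    half_width = z * sd / len(historical_casualties) ** 0.5
    return mean, (mean - half_width, mean + half_width)
```

Fed a set of comparable historical attacks, a helper like this yields a report of the form the article describes: a predicted casualty figure bracketed by a confidence interval such as 27 to 55.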

Figure 3. Thresholds linked to warning named areas of interest
(Source: “Artificial Intelligence Enabled Support to Afghanistan Warning” [PowerPoint presentation, NATO Special Operations Component Command—Afghanistan and Special Operations Joint Task Force – Afghanistan, Kabul, October 22, 2020])

Along with the developers, intelligence officers would continually monitor the data health and review the results before distributing them. They tuned the model much as cancer screenings are tuned: to catch a wide array of possible incidents, even at the cost of some false positives. They treated warning summaries as raw reporting intended to focus an analyst’s attention. The warning model said, “I have been trained to look for regions at risk for aggression, and you should check here.”
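The screening-style tuning described above can be sketched as a threshold sweep that trades recall against false alarms. This helper is illustrative, not the team's actual evaluation code:

```python
def recall_and_false_alarms(scores_labels, threshold):
    """At a given warning threshold, measure how many real attacks are
    caught (recall) and how many false alarms fire. Screening-style
    tuning picks a threshold that keeps recall near 1.0 and simply
    accepts the resulting false positives."""
    tp = sum(1 for score, attack in scores_labels if attack and score >= threshold)
    fp = sum(1 for score, attack in scores_labels if not attack and score >= threshold)
    actual = sum(1 for _, attack in scores_labels if attack)
    return (tp / actual if actual else 0.0), fp
```

Sweeping the threshold over held-out historical events shows the trade the article describes: lowering it catches every attack but raises the false-alarm count that human analysts must screen out.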

The AI-enabled model used old-school warning methodologies enhanced by new technology, making the analyst more efficient at processing indicators. Intelligence analysts deconstructed warning events for historical attacks on district and provincial centers to identify indicators of attacks, then taught the machine to identify these indicators independently and highlight the locations at risk. The AI model would learn over time and improve its predictions.

Once running, the system identified likely regions for insurgent attacks and assisted operators in focusing collection assets and strike platforms. The goal was to provide at least 48 hours of warning for insurgent attacks on district and provincial centers. During testing, the model demonstrated sensitivity and alignment to more than 41 insurgent aggression events in five historically violent provinces, providing more than 48 hours of warning in most cases. The model began operating full-time in October 2020. Although the war’s abrupt end in August 2021 ended the experiment, the lessons learned contributed to future analytical tools.

Lessons Learned

The Raven Sentry operational model, likely the first of its kind, increased analysts’ efficiency in predicting insurgent events. While the up-front cost was high, a well-tuned algorithm can significantly reduce the number of analysts required to overwatch enemy activity. In this case, the model could rapidly review terabytes of data and make warning predictions, increasing the analysts’ efficiency. Further, the team learned valuable lessons about developing and deploying artificial intelligence for military use. Among these lessons are the importance of command culture to successful innovation, techniques for building trust in AI models, and the feasibility of using only unclassified information from commercial systems to produce valuable intelligence, a lesson that foreshadowed the role of commercially produced, open-source intelligence in the Russia-Ukraine War.9

Raven Sentry demonstrated that an organizational culture committed to experimentation and tolerant of risk and failure is critical for successful innovation. Locating the nerd locker inside the special operations unit, where the culture was roughly analogous to a start-up business, proved crucial. Moving these analysts from other positions across the task force required sacrifice elsewhere in the intelligence mission. The uncertainty of the pending drawdown provided urgency that convinced leaders to assume risk in other missions to run the experiment. Throughout this process, entities inside the national intelligence community and DoD bureaucracy objected to investing large sums of money to employ the Silicon Valley engineering team for an experimental military project. Moving the funding forward took multiple briefings, phone calls, and senior leader interventions that could only have happened in an organization committed to innovation.

Military leaders must trust the system to employ AI models successfully in combat. Developing Raven Sentry revealed several methods to build that trust. First, military personnel must know enough about data, machine learning, and AI to provide focus to commercial engineers involved in development. Further, military analysts must have the communication skills to explain the system’s outputs to operators and leaders. Pulling shifts on the operations floor helped Raven Sentry’s developers understand mission requirements and build relationships with the operators responsible for directing reconnaissance platforms against Raven Sentry’s predictions and possibly ordering combat missions. Trust in the people running the system led to trust in the system’s output.

This experiment validated that commercially produced, unclassified information can yield predictive intelligence, which is helpful when working closely with foreign partners and the commercial sector. The Raven Sentry team used databases of unclassified news reporting to train the algorithm on attacks that commercial satellites likely covered and that would generate social media posts. Analysts refined attack templates by working closely with embedded Afghan partners who had better awareness of local customs and often better knowledge of the enemy. Afghan partners identified indicators the US analysts could not recognize. Further, by limiting data inputs to unclassified, commercially produced information (in this example, imagery, press reporting, and social media), Raven Sentry produced intelligence in a format shareable with Afghan partners and the commercial sector. Finally, building and employing AI-based methods takes a team of engineers and operational analysts—neither could have developed these systems alone. The engineering team connected Raven Sentry to the latest algorithms emerging from academia and business, but the engineers were not cleared to access classified information. Relying exclusively on open-source data was critical to Raven Sentry’s success.

The final lesson involves the maintenance of AI models, which is important for leaders who allocate resources to this type of technology to understand. The upkeep and improvement of an AI model is a continual process that requires dedicated personnel and time. As the environment evolves in combat and competition, sources of information emerge and change, and it takes analysts and engineers to recognize changes and update the algorithm and data inputs continually. AI models are not fire-and-forget: the military cannot purchase an AI algorithm and expect it to work without constant maintenance.

A Word of Caution

As with all AI systems, there is a delicate balance between the desire for efficiency and maintaining human oversight. In this narrow case, human-machine teaming worked best. Raven Sentry made the analysts more efficient but could not replace them. As the speed of warfare increases and adversaries adopt AI, the US military may be forced to move to an on-the-loop position, monitoring and checking outputs but allowing the machine to make predictions and perhaps order action.

Regardless of the level of supervision, humans must be aware of AI’s weaknesses. There are numerous commercial and military examples of AI systems making mistakes. Several studies have found that facial recognition software is less effective on people with darker skin. AI-enabled GPS navigation occasionally provides routes that do not account for emerging traffic or weather, and self-driving vehicles have caused fatalities. Especially in the early testing phases, Raven Sentry’s predictions were hard to understand and occasionally wrong. If properly employed, however, AI will reduce human error. Still, operators must understand the weaknesses and remain involved enough to detect errors.10

As Raven Sentry improved, the system’s analysts had to be aware of automation bias. As they become accustomed to using an AI system, humans may stop critically examining a system’s outputs and blindly trust it, especially in the time-sensitive situations common in combat. An investigation of Patriot missile friendly-fire incidents in 2003 found that operators were trained to trust the auto-fire software; that trust would be necessary during high-volume missile attacks, but it was unnecessary for low-volume incidents and contributed to misfires on friendly aircraft. The same effect exists in the medical field. Medical researchers ran several experiments that found radiologists using AI were biased toward the AI’s recommendations—which were intentionally incorrect for the experiments—and often produced incorrect diagnoses.11

Raven Sentry’s creators were aware of the system’s weaknesses, especially in its nascent form, and thus treated results as just one input requiring corroboration from traditional intelligence disciplines, such as classified imagery or signals intelligence. As new analysts rotated into Afghanistan, the team deliberately educated them on Raven Sentry’s vulnerabilities so the new personnel would not blindly trust outputs. Basing decisions on multiple sources remains paramount to military intelligence, and an AI-produced report should be cross-checked whenever possible.

For all these reasons, leaders employing artificial intelligence must understand essential system functions. Since the innovation team developed Raven Sentry in a unit engaged in active combat, most of its leaders learned about the system as it developed. In peacetime, or as personnel rotate, growing an AI system alongside the leaders employing it might not be possible. Military leaders and analysts should train on how these tools work to understand their limitations and should read case studies of past successes and failures to mitigate this learning curve. Finally, they must remember that war is ultimately human, and the adversary will adapt to the most advanced technology, often with simple, common-sense solutions. Just as Iraqi insurgents learned that burning tires in the streets degraded US aircraft optics or as Vietnamese guerrillas dug tunnels to avoid overhead observation, America’s adversaries will learn to trick AI systems and corrupt data inputs. The Taliban, after all, prevailed against the United States and NATO’s advanced technology in Afghanistan.


The Resolute Support team took advantage of a culture open to innovation, the urgency created by the drawdown, and a unique set of resident capabilities and contracted skills to experiment with promising technology, but this progress was only the beginning. Further Army studies on intelligence processing and on speeding the sensor-to-shooter loop have built upon this initial experiment. Advances in generative AI and large language models are increasing AI capabilities, and the ongoing wars in Ukraine and the Middle East are demonstrating those advances in practice. To remain competitive, the Joint Force must educate its leaders on AI, balance the tension between computer speed and human intuition, and create ecosystems within its organizations to enable this technology.12


Acknowledgments: I would like to thank Lieutenant General Robert Ashley (US Army, retired) and the Resolute Support commanders and intelligence leaders involved in this experiment. Thank you especially to the selfless analysts interviewed for this article who did the hard work developing Raven Sentry.


Thomas W. Spahr
Colonel Thomas Spahr (US Army), PhD, is the chair of the Department of Military Strategy, Planning, and Operations at the US Army War College. His research expertise is in military history, intelligence, and the military application of artificial intelligence. He was the chief of staff of the Resolute Support J2 (intelligence) in Afghanistan from July 2019 to July 2020.


  1. Quote from A. J. P. Taylor, The First World War: An Illustrated History (New York: Putnam, 1964), 9. Return to text.
  2. Thomas Spahr, “Adapting Intelligence to the New Afghanistan,” War on the Rocks (website), September 30, 2021, https://warontherocks.com/2021/09/adapting-intelligence-to-the-new-afghanistan; and Department of Defense (DoD), Enhancing Security and Stability in Afghanistan (Washington, DC: DoD, June 2020), 18, https://media.defense.gov/2020/Jul/01/2002348001/-1/-1/1/ENHANCING_SECURITY_AND_STABILITY_IN_AFGHANISTAN.PDF. Return to text.
  3. Quote from Mark A. Milley, “Strategic Inflection Point,” Joint Force Quarterly 110, no. 3 (2023): 8, https://ndupress.ndu.edu/JFQ/Joint-Force-Quarterly-110/Article/article/3447159/strategic-inflection-point-the-most-historically-significant-and-fundamental-ch/. Return to text.
  4. Quote from DoD, Data, Analytics, and Artificial Intelligence Adoption Strategy: Accelerating Decision Advantage (Washington, DC: DoD, June 27, 2023), 5, https://media.defense.gov/2023/Nov/02/2003333300/-1/-1/1/DOD_DATA_ANALYTICS_AI_ADOPTION_STRATEGY.PDF. Return to text.
  5. Marcus Weisgerber, “The Pentagon’s New Algorithmic Warfare Cell Gets Its First Mission: Hunt ISIS,” Defense One (website), May 14, 2017, https://www.defenseone.com/technology/2017/05/pentagons-new-algorithmic-warfare-cell-gets-its-first-mission-hunt-isis/137833. Return to text.
  6. Lindsey R. Sheppard et al., Artificial Intelligence and National Security: The Importance of the AI Ecosystem (Washington, DC: Center for Strategic & International Studies, November 2018), 6, https://csis-website-prod.s3.amazonaws.com/s3fs-public/publication/181102_AI_interior.pdf. Return to text.
  7. Quote from Cortney Weinbaum and John N. T. Shanahan, “Intelligence in a Data-Driven Age,” Joint Force Quarterly 90, no. 3 (2018): 5, https://ndupress.ndu.edu/Portals/68/Documents/jfq/jfq-90/jfq-90_4-9_Weinbaum-Shanahan.pdf. Return to text.
  8. Brandie Woodard, “Data Exchange Becomes ‘Go-To’ Software for Theater Information,” Wright-Patterson AFB (website), April 2, 2012, https://www.wpafb.af.mil/News/Article-Display/Article/399614/data-exchange-becomes-go-to-software-for-theater-information/. Return to text.
  9. Jim Hockenhull, “Speech: How Open-Source Intelligence Has Shaped the Russia-Ukraine War,” GOV.UK (website), December 9, 2022, https://www.gov.uk/government/speeches/how-open-source-intelligence-has-shaped-the-russia-ukraine-war. Return to text.
  10. “Racial Bias in Facial Recognition,” Amnesty International (website), March 21, 2023, https://web.archive.org/web/20230629152319/https://www.amnesty.ca/surveillance/racial-bias-in-facial-recognition-algorithms/; Faiz Siddiqui and Jeremy B. Merrill, “17 Fatalities, 736 Crashes: The Shocking Toll of Tesla’s Autopilot,” Washington Post (website), June 10, 2023, https://www.washingtonpost.com/technology/2023/06/10/tesla-autopilot-crashes-elon-musk; and Ruben Stewart and Georgia Hinds, “Algorithms of War: The Use of Artificial Intelligence in Decision Making in Armed Conflict,” Humanitarian Law & Policy (blog), International Committee of the Red Cross (website), October 24, 2023, https://blogs.icrc.org/law-and-policy/2023/10/24/algorithms-of-war-use-of-artificial-intelligence-decision-making-armed-conflict. Return to text.
  11. DoD, “Report of the Defense Science Board Task Force on Patriot System Performance: Report Summary” (Washington, DC: DoD, January 2005), 2, https://dsb.cto.mil/reports/2000s/ADA435837.pdf; and Thomas Dratsch et al., “Automation Bias in Mammography: The Impact of Artificial Intelligence BI-RADS Suggestions on Reader Performance,” Radiology 307, no. 4 (May 2023), under “Original Research: Computer Applications,” https://pubs.rsna.org/doi/epdf/10.1148/radiol.222176. Return to text.
  12. Samuel Bendett, “Roles and Implications of AI in the Russian-Ukrainian Conflict,” Center for a New American Security (website), July 20, 2023, https://www.cnas.org/publications/commentary/roles-and-implications-of-ai-in-the-russian-ukrainian-conflict. Return to text.


Disclaimer: Articles, reviews and replies, and book reviews published in Parameters are unofficial expressions of opinion. The views and opinions expressed in Parameters are those of the authors and are not necessarily those of the Department of Defense, the Department of the Army, the US Army War College, or any other agency of the US government. The appearance of external hyperlinks does not constitute endorsement by the Department of Defense of the linked websites or the information, products, or services contained therein. The Department of Defense does not exercise any editorial, security, or other control over the information you may find at these locations.