Communicating Uncertainty in Intelligence Analysis
Steven Rieber (Office of the Director of National Intelligence)
The current approach to conveying uncertainty in intelligence analysis is to qualify judgments with verbal expressions such as probably, unlikely, may, and could. No common standards govern the use of these terms within the Intelligence Community (IC). Although this approach gives analysts and managers maximal latitude, it risks serious and frequent miscommunication. Numerous studies have demonstrated wide variations in how people understand these terms. These diverse understandings mean that every time one of the terms is used, intelligence consumers might interpret it differently from how it was intended.
The potential miscommunication is not limited to interactions between analysts and consumers. When intelligence products are coordinated, analysts, offices, and agencies agree on common language. However, if the language itself is ambiguous in ways that are not evident to the coordinating parties, an illusion of agreement may mask very different understandings. In addition, intelligence products are typically read by several policy-makers, who may interpret terms such as possible differently from one another. Consequently, ambiguity in these terms can also lead to misunderstandings among the policy-makers themselves.
The problem is not new. Sherman Kent recalled writing a 1951 National Intelligence Estimate (NIE) which stated that a Soviet attack on Yugoslavia should be considered a serious possibility. Shortly after the NIE was disseminated, the chairman of the State Department's Policy Planning Staff asked Kent what he meant by serious possibility. Kent writes:
I told him that my personal estimate was on the dark side, namely that the odds were around 65 to 35 in favor of an attack. He was somewhat jolted by this; he and his colleagues had read serious possibility to mean odds very considerably lower. Understandably troubled by this want of communication, I began asking my own colleagues on the Board of National Estimates what odds they had had in mind when they agreed to that wording. It was another jolt to find that each Board member had had somewhat different odds in mind and the low man was thinking of about 20 to 80, the high of 80 to 20. The rest ranged in between.
This type of miscommunication may be far more common than generally believed. Rarely do analysts follow Kent's example in asking consumers and other analysts how they understand probabilistic terms. Consequently, no one knows how often or how severely such terms are misinterpreted in practice.
Kent's proposed remedy was to standardize the meanings of a small number of qualitative expressions (almost certain, probable, chances about even, probably not, and almost certainly not) by stipulating that each should express a particular range of probabilities. For example, probable was to mean 75%, give or take about 12%. Despite sporadic attempts at standardization, Kent's proposal was never widely adopted, and the dilemma of how to minimize or eliminate miscommunication in intelligence products remains unsolved.
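Kent's scheme amounts to a simple mapping from verbal terms to probability bands. The sketch below is purely illustrative: only the band for probable (75%, give or take 12%) comes from the text above; the bands shown for the other terms are hypothetical placeholders, not Kent's published values.

```python
# Illustrative Kent-style standardization: each verbal term maps to a band
# of probabilities. Only the "probable" band (75% +/- 12%) is taken from
# the text; the other bands are hypothetical placeholders.
KENT_BANDS = {
    "almost certain":       (0.87, 1.00),  # hypothetical
    "probable":             (0.63, 0.87),  # 75% give or take about 12%
    "chances about even":   (0.40, 0.60),  # hypothetical
    "probably not":         (0.13, 0.37),  # hypothetical
    "almost certainly not": (0.00, 0.13),  # hypothetical
}

def term_for(p: float) -> str:
    """Return the first standardized term whose band contains p."""
    for term, (lo, hi) in KENT_BANDS.items():
        if lo <= p <= hi:
            return term
    return "no standardized term"  # gaps between bands are possible

# Kent's personal 65-to-35 estimate falls inside the "probable" band:
print(term_for(0.65))  # -> probable
```

Under such a scheme, Kent's 65-to-35 estimate and his colleagues' readings of serious possibility (from 20-to-80 up to 80-to-20) would have mapped to visibly different terms, exposing the disagreement that the shared wording concealed.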
Recently the issue has gained renewed attention. One recommendation of the WMD Commission Report states: A structured Community program must be developed to teach rigorous tradecraft and to inculcate common standards for analysis so that, for instance, it means the same thing when two agencies say they assess something with a high degree of certainty. In addition, the 2004 Intelligence Reform and Terrorism Prevention Act states that the Director of National Intelligence shall review intelligence products to ensure, among other things, that they properly caveat and express uncertainties or confidence in analytic judgments.
Kent's proposal to use numerical probabilities with fairly wide ranges (for example, 75% plus or minus 12%) shows that there is a difference between ambiguity and lack of precision; consequently, eliminating ambiguity does not require adopting a precise vocabulary. Kent's proposal was an attempt to eliminate the ambiguity inherent in the ordinary use of probabilistic terms without thereby introducing unrealistic precision. His method is one of several options that will be considered in this paper. But first the research findings will be reported.
Prior research has found that:
- Some probabilistic terms are much vaguer than others.
- People use a wide range of terms.
- People have different understandings of what the terms mean.
- Between-subject differences are larger when the terms occur in context.
- Audiences understand the terms differently from speakers.
- People understand the terms consistently over time.