Quantitative Methods in Defense and National Security 2007

Performance Assessment of ISR Enterprises
Michael B. Hurley, (MIT Lincoln Laboratory), hurley@ll.mit.edu, and
Peter Jones, (MIT Lincoln Laboratory), jonep@ll.mit.edu


The Joint Services of the Department of Defense (DoD) are in the process of transitioning legacy computing applications for multi-intelligence (multi-INT) intelligence, surveillance, and reconnaissance (ISR) tasks to standards-based information enterprises. This transition is occurring with very little theoretical or practical understanding of the appropriate figures of merit needed to assess the performance of these large information systems. The danger of developing systems without clearly stated and understood figures of merit is that important, but difficult to measure, system characteristics will be ignored in favor of less important, but easier to measure, ones. The result will be performance analyses that are at best inaccurate and at worst misleading. Operational users, system developers, and program managers who rely on such analyses will assume considerable risk in their decisions on how best to use, improve, and deploy these systems.

A study was conducted to develop an end-to-end assessment framework that identifies the fundamental figures of merit for multi-INT ISR enterprises to significantly reduce this risk. The study was divided into three phases: a search of the technical literature to review the state of the art for distributed enterprise frameworks similar to the ISR enterprise, the development of a conceptual framework with an analytical foundation to support performance assessment, and the construction of a simulation to demonstrate the analytical assessment of a simple multi-INT ISR enterprise.

The literature search uncovered many conceptual models and frameworks analogous to the ISR enterprise, including the Office of Force Transformation's Network Centric Operations Conceptual Framework, Endsley's situation awareness model, Boyd's OODA loop, and Moffat's work on network-centric warfare. A few models identified quantitative metrics for assessing systems, but none had an analytical foundation tightly integrated into the conceptual framework to define fundamental figures of merit.

To fill this need, a conceptual model with analytical foundations was developed, with the results of the literature search strongly influencing the model development. The conceptual model was developed as a sequence of three inter-related models. The first model is a black-box decision system in an environment that it attempts to modify to its advantage. The second model contains details about the black-box decision system, which consists of a sensor, a series of three information processors, and an actuator. The chain of information processors converts sensor measurements to features, then to an estimate of the state of the environment, and finally to a decision that commands the actuator to change the environment. The final model is a set of interacting decision systems that are attempting to change the world to their individual and collective advantage. The organization of the components in this final model is designed such that each individual component adopts one of the roles in the decision system of the second-level model. This final model has sufficient detail to describe the ISR enterprise.
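The processing chain of the second-level model can be sketched as a pipeline of simple functions. This is an illustrative sketch only, not the study's implementation; all names are hypothetical, and a one-bit environment state is assumed for brevity:

```python
# Hypothetical sketch of the second-level model's chain:
# sensor measurement -> features -> state estimate -> decision
# -> actuator command acting back on the environment.

def sense(environment):
    # Sensor: produce a raw measurement of the environment
    return {"raw": environment["state"]}

def extract_features(measurement):
    # First processor: convert the measurement to features
    return {"feature": measurement["raw"] > 0}

def estimate_state(features):
    # Second processor: features -> estimate of the environment state
    return "target_present" if features["feature"] else "clear"

def decide(estimate):
    # Third processor: estimate -> command for the actuator
    return "engage" if estimate == "target_present" else "hold"

def actuate(environment, command):
    # Actuator: change the environment to the system's advantage
    if command == "engage":
        environment["state"] = 0
    return environment

env = {"state": 1}
cmd = decide(estimate_state(extract_features(sense(env))))
env = actuate(env, cmd)
print(cmd, env)   # engage {'state': 0}
```

Each function stands in for one role in the decision system; in the third-level model, separate components of the enterprise would each adopt one of these roles.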

Analytical foundations were evaluated and selected as model development progressed from a decision system that was little more than a black box to a distributed collection of decision systems. Utility theory, probability theory, and information theory were ultimately selected as the analytical foundations for the series of models. The models and their analytical foundation led to an important insight: the ISR enterprise can only be fully evaluated if the sensors, command and control (C2) enterprise, actuators, and environment are all included in the evaluation.

Three measures were identified by the study as sufficient to analyze the ISR enterprise: Shannon information, the Kullback-Leibler distance, and a probability integral function. Shannon information measures the uncertainty in the information that a decision system possesses; the Kullback-Leibler distance measures how far the information in two different decision systems (or components of a distributed decision system) diverges; and the probability integral measures the accuracy of the information a decision system possesses when ground truth is available for the evaluation. The fundamental value of these measures is that they consolidate the contributions of disparate components of the ISR enterprise, including communications systems, processors, algorithms, and organizational structure, primarily by their impact on the quality of information that the enterprise can collect.
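For discrete beliefs over cell states, such as those used in the simulation below, the first two measures can be sketched directly. The probability integral is approximated here as the probability mass a belief assigns to the true state; this is an assumption for illustration, since the abstract does not give its exact form:

```python
import math

def shannon_entropy(p):
    """Uncertainty (bits) in a discrete belief distribution p."""
    return -sum(pi * math.log2(pi) for pi in p if pi > 0)

def kl_distance(p, q):
    """Kullback-Leibler distance from belief p to belief q (bits);
    zero when the beliefs are identical, growing as they diverge."""
    return sum(pi * math.log2(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def accuracy(p, truth_index):
    """Probability the belief assigns to the true state -- a simple
    stand-in for the study's probability integral measure."""
    return p[truth_index]

# Beliefs over four cell states: empty, square, circle, triangle
node_a = [0.70, 0.10, 0.10, 0.10]   # fairly certain the cell is empty
node_b = [0.25, 0.25, 0.25, 0.25]   # completely uncertain

print(shannon_entropy(node_b))      # maximal uncertainty: 2.0 bits
print(kl_distance(node_a, node_b))  # positive: the beliefs differ
print(accuracy(node_a, 0))          # 0.7, if the cell is truly empty
```

The same three functions apply unchanged whether the belief belongs to a single sensor node or to the fused picture of the whole enterprise, which is what lets them consolidate disparate components under one yardstick.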

To demonstrate the application of these measures to a multi-INT ISR enterprise, a simple simulation was constructed of an enterprise formulating a common operational picture (COP). The simulation examined the impact that different communications architectures had on the quality of the COP by applying the selected measures to the simulated enterprise. The environment for the simulation was a small five-by-five-cell world, with each cell either empty or containing one of three target classes: a square, a circle, or a triangle. The enterprise consisted of five sensors with different detection capabilities, ranging from a synoptic sensor that could sense the presence or absence of targets in all cells simultaneously, to myopic sensors that could sense only the presence or absence of one type of target in one cell at a time. The sensors moved through the world to improve their knowledge of it. The simulation was run with five communications architectures: no communications, unlimited communications, push, blind pull, and informed pull. The three information measures were calculated as a function of time for multiple runs of each architecture and used to quantify relative performance. The results agree with what communications experts would predict: unlimited communication performs best, no communication performs worst, and informed pull outperforms both push and blind pull, despite allocating part of the overall bandwidth to the metadata transmissions that support informed pull.
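One way COP quality can be tracked over time in such a world is to sum the per-cell Shannon entropies of the enterprise's beliefs, with each observation reducing the total. This is a hedged sketch under assumed structure and names, not the study's simulation:

```python
import math

# Each cell of the 5x5 world carries a belief over four states:
# empty, square, circle, triangle. Total COP uncertainty is the
# sum of per-cell Shannon entropies (bits).

def cell_entropy(p):
    return -sum(pi * math.log2(pi) for pi in p if pi > 0)

def cop_entropy(cop):
    return sum(cell_entropy(p) for row in cop for p in row)

# Start with a maximally uncertain COP: 2 bits per cell, 50 in total.
cop = [[[0.25] * 4 for _ in range(5)] for _ in range(5)]
print(cop_entropy(cop))   # 50.0 bits

# A myopic sensor observation fully resolves one cell ...
cop[2][3] = [0.0, 0.0, 1.0, 0.0]   # circle observed at cell (2, 3)
print(cop_entropy(cop))   # 48.0 bits
```

Under this bookkeeping, comparing architectures amounts to plotting the total entropy curve against time for each one: architectures that share observations more effectively drive the curve down faster.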

[This work was supported by the Department of the Navy, Office of Naval Research (ONR) under Air Force Contract FA8721-05-C-0002. Opinions, interpretations, conclusions, and recommendations are those of the authors and are not necessarily endorsed by the United States Navy or the United States Air Force.]
