Psychology of Intelligence Analysis

The following are my notes from Psychology of Intelligence Analysis by Richards J. Heuer, Jr.:

Part I - Our Mental Machinery

  • Perceptions are quick to form, but difficult to change.
    • In experiments where a fuzzy picture is slowly brought into focus, subjects who started with the picture at its most blurred needed it brought furthest into focus before they could recognise it.
  • A good analysis system should have the following characteristics:
    • Encourages results that outline not only conclusions, but also assumptions, dependencies ("chains of inference"), and the degrees of certainty thereof
    • Is friendly to re-examination from basics
    • Uses procedures that force the analyst to consider and elaborate multiple points of view
    • Educates consumers on the strengths/weaknesses of the result
  • Short-term memory (STM) is finite, whereas long-term memory (LTM) is effectively limitless
  • STM only retains the interpretation of sensory memory
  • LTM loses details that were present in STM; it is here that selective perception can occur
  • In one model of LTM, the ease of retrieval of a particular memory is based partly on the strength of the pathways to that memory. A pathway is strengthened by visiting it.
  • A schema is a set of memories where the connections between the memories are so strong that the entire group is retrieved essentially as one. This is similar to the 'script' concept from the Philosophy of Artificial Intelligence. Schemata exert a powerful influence over forming perceptions.
  • Schemata allow chess masters to remember the positions of pieces on a chess board far more accurately than non-players, but only if the positions are taken from an actual game rather than being random.
  • Transfer from STM to LTM depends on how closely the information relates to already existing schemata and on the depth of processing of the information while in STM, rather than on repetition.
  • Judgment = Available Information + Background/Contextual Information in Memory + Schemata that pertain to the Available Information
  • Because we can only keep 7 (±2) things in working memory at one time, we need to use external memory aids to accurately grasp complex problems.
  • Another mechanism is a model of the problem that starts as a mnemonic device but quickly becomes a schema of its own, allowing the user to synthesize new information.
  • Because the mind will not perceive and/or store information for which no appropriate category exists, it can be blinded when its categories are not drawn correctly initially. This is an analytical weakness known as "hardening of the categories".
  • Factors that influence how well a fact is remembered:
    • being the first-stored fact on a given topic
    • the amount of attention I focus on the fact
    • the credibility of the fact
    • the importance I attribute to the fact at the time of storage

Part II - Tools for Thinking

Strategies for Deriving Hypotheses

  • Situational logic treats each situation as unique, so that it must be understood in terms of its own logic, starting from the known facts and developing a plausible narrative. Its strength is that it can work in any situation. Its weakness is that it fails to exploit theoretical knowledge.
  • Applying theory tries to match the current set of conditions against the set of assumptions of a known theory; if the conditions match all the assumptions, it assumes that the outcomes predicted by the theory will hold in the current situation. It allows you to forecast events for which the hard evidence has yet to develop. Weaknesses include the inability to infer time-frames for the predicted outcomes, and the fact that applying a theory may blind one to hard evidence to the contrary if such evidence were to develop.
  • Applying analogy tries to find one or more corresponding situations to fill in the gaps in understanding the current situation. It is useful when neither data nor theory are present (precluding the use of the previous two tools). Its strength is that it reduces the unfamiliar to the familiar. Its weaknesses are that it depends on how close the current situation actually is to the analogous one, and that the tendency is to make decisions that would have been good for the analogous situation, not the current one. This can be mitigated by doing an in-depth analysis of both the current and the analogous situations. Also, one tends to seize upon the first analogy that springs to mind, which can be mitigated by possessing a wide range of potential analogues. Its best use is to suggest hypotheses and to highlight differences rather than to draw conclusions.
  • Data immersion tries to gather the data without fitting it into any preconceived pattern, waiting for an apparent pattern to emerge. This is a misnomer, since the significance of each piece of data is only determined by one's assumptions and expectations. Data immersion is really for absorption, not analysis. Since assumptions are inevitable, the strength of the analysis is determined by the expression of the assumptions rather than leaving them implicit. "Objectivity is gained by making assumptions explicit so that they may be examined and challenged, not by vain efforts to eliminate them from analysis."

Strategies for Choosing From Among Hypotheses

  • Satisficing takes the first hypothesis that is "good enough". Weaknesses:
    • Selective perception: Satisficing requires the analyst to sort data based on whether or not it supports the hypothesis. The natural bias is to see what one is looking for, overlooking other data. If the hypothesis turns out to be incorrect, information may be lost that would lead to other hypotheses.
    • Failure to Generate Appropriate Hypotheses
    • Failure to Consider Diagnosticity of Evidence: Diagnosticity is the "extent to which any item of evidence helps the analyst determine the relative likelihood of alternative hypotheses." Evidence that fits all hypotheses has low diagnosticity; evidence that helps pick from among hypotheses has high diagnosticity (see the toy sketch after this list).
  • Incrementalism takes the hypothesis that represents only a marginal change from the existing position.
  • Consensus takes the hypothesis that will make the most people happy
  • Reasoning by analogy chooses the hypothesis that is most likely to either avoid a previous mistake or repeat a previous success.
  • Relying on a set of principles to distinguish between good and bad hypotheses.
  • Simply passing the alternatives up the ladder.
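
A minimal sketch of the diagnosticity idea (my own illustration, not from the book; the hypothesis and evidence names are hypothetical): evidence that is consistent with every hypothesis cannot help choose among them.

```python
# Diagnosticity as the fraction of hypotheses a piece of evidence rules out.

def diagnosticity(consistency: dict[str, bool]) -> float:
    """Fraction of hypotheses this piece of evidence is inconsistent with."""
    ruled_out = sum(1 for fits in consistency.values() if not fits)
    return ruled_out / len(consistency)

# Three hypothetical hypotheses, two pieces of evidence.
e1 = {"H1": True, "H2": True, "H3": True}    # fits everything: low diagnosticity
e2 = {"H1": True, "H2": False, "H3": False}  # rules out H2 and H3: high diagnosticity

print(diagnosticity(e1))  # 0.0
print(diagnosticity(e2))  # 0.666...
```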

Failure to Reject Hypotheses

  • Seeking to prove an hypothesis will tend to prove it; seeking to discredit it will tend to discredit it.

Do You Really Need More Data?

  • Studies show that adding information tends to increase an analyst's confidence in their conclusion, but not their accuracy, and that analysts who focus on generating and testing hypotheses tend to perform better in such studies.
  • Analysts are largely unaware of which data they actually use to make judgments, and of which data is actually most important to those judgments.
  • Categories of new information:
    • Additional details about variables already included in the analysis: tends to increase analyst's confidence
    • Identification of additional variables not included in the analysis so far: may change the analyst's mental model
    • Information concerning the value attributed to variables already included in the analysis: may change the conclusion, but not the analyst's mental model
    • Information concerning which variables are most important and how they relate to each other: this kind of information changes the analyst's mental model
  • Types of analysis:
    • Data driven analysis
      • The mental model (which variables are important) is broadly agreed upon
      • Each element of the model is explicit so that other analysts can be taught to follow the same procedures to arrive at similar conclusions
      • The accuracy of the conclusion depends entirely on the accuracy and completeness of the available data
    • Conceptually Driven Analysis
      • No existing, agreed-upon model; the relationships among the potentially relevant variables are complex and imperfectly understood
      • The accuracy of the conclusion depends primarily on the accuracy of the analyst's model
      • The ability to improve the model depends on systematic feedback on the accuracy of previous judgments; however, even a correct conclusion may not indicate a correct model.

Mental Ruts

  • One strategy for breaking out of a rut is to talk the problem through out loud

Tools for questioning assumptions

  • Sensitivity analysis: Am I using a broad enough set of inputs? How sensitive is the final result to changes in any major variable? For example, if a change in a single input swings the decision one way or the other (assuming a binary output), the model is really only using one variable. (A toy sketch follows this list.)
  • Identify alternative models: Seek out individuals who disagree or hold a different set of assumptions.
  • Be wary of mirror images: Results based on "if I was in their shoes" type thinking are prone to errors.
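
A minimal sketch of the sensitivity check described above (my own illustration; the decision rule, weights, and input names are all hypothetical): perturb each input in turn and see whether a binary decision flips.

```python
# Perturb each input and check whether the decision flips. If a single
# input alone can swing the result, the model effectively hinges on it.

def decision(inputs: dict[str, float]) -> bool:
    # Hypothetical decision rule: weighted sum against a threshold.
    weights = {"a": 0.7, "b": 0.2, "c": 0.1}
    return sum(weights[k] * v for k, v in inputs.items()) > 0.5

def sensitive_inputs(inputs: dict[str, float], delta: float = 0.2) -> list[str]:
    baseline = decision(inputs)
    flips = []
    for name in inputs:
        for sign in (1, -1):
            perturbed = dict(inputs)
            perturbed[name] += sign * delta
            if decision(perturbed) != baseline:
                flips.append(name)
                break
    return flips

print(sensitive_inputs({"a": 0.6, "b": 0.5, "c": 0.4}))  # ['a']
```

Here the decision rests almost entirely on input "a", suggesting the input set is not broad enough.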

Tools for seeing different perspectives

  • Thinking backwards: Pretend that some event you do not expect has actually occurred. Put yourself in the future and look back at what web of events could have led to that event happening.
  • Crystal ball: Imagine a "perfect" intelligence source has told you a certain assumption is wrong. Develop a scenario to explain how this could be true. If a plausible scenario exists, then the assumption may not be as solid as previously thought.
  • Role playing: More than just trying to imagine how another person may react; "living" the role breaks an analyst's normal mind set.
  • Devil's advocate: Assign someone to advocate a minority point of view as strenuously as possible.

Recognizing when to change your mind

  • Learning from surprise: When a fact does not fit my prior understanding, take note and inquire into it. Does it support some alternate hypothesis?

Stimulating creative thinking

  • Deferred judgment: Creativity and evaluation do not mix well. Defer evaluation until after idea generation.
  • Quantity leads to quality: The first ideas that come to mind will be the usual ones. Generating a larger quantity of ideas means that some creative ideas will surface.
  • No self-imposed constraints: Encourage thinking to range as freely as possible.
  • Cross-fertilization of ideas: Ideas combined with one another tend to form more and even better ideas.

Impact of organizational environment on creativity

  • A study on creativity showed that it was not the creativity or intelligence of individual researchers, nor the presence of any particular organizational factor, that produced creative work, but rather the combination of all the factors that unleashed creativity in the organization:
    • Researcher perceived himself or herself as responsible for initiating new activities (ie. the opportunity and freedom to innovate)
    • Researcher had considerable control over decision making concerning his or her project
    • Researcher felt secure in his or her role to advance new ideas
    • Administrators stayed out of the way and acted as support
    • The project was relatively small with respect to the number of people involved (smallness promotes flexibility)
    • Researchers engaged in other activities such as teaching or administration

Two main tools for structuring analysis problems

  • Decomposition: breaking down a problem into its component parts
  • Externalization: getting the problem out of the mind and into some visible form that can be worked with

Step-by-Step Outline of Analysis of Competing Hypotheses

  1. Identify the possible hypotheses to be considered. Use a group of analysts with different perspectives to brainstorm the possibilities.
    • This is the idea generation stage
    • Be careful not to eliminate unproven hypotheses by treating them as disproved
  2. Make a list of significant evidence and arguments for and against each hypothesis.
    • Start with the list of general evidence that applies to all hypotheses
    • Then, for each hypothesis, ask, "If this hypothesis is true, what should I expect to be seeing or not seeing?"
  3. Prepare a matrix with hypotheses across the top and evidence down the side. Analyze the "diagnosticity" of the evidence and arguments—that is, identify which items are most helpful in judging the relative likelihood of the hypotheses.
    • Work across a row of evidence, not down a hypothesis column, asking for each hypothesis: is this evidence consistent or inconsistent with the hypothesis (or perhaps how consistent or inconsistent)?
    • The diagnosticity of a piece of evidence is its ability to rule out hypotheses (a toy sketch of steps 3 to 5 follows this outline)
  4. Refine the matrix. Reconsider the hypotheses and delete evidence and arguments that have no diagnostic value.
    • Drop rows that have low diagnosticity from the matrix, saving them to show they were considered.
    • Are there two hypotheses that the current evidence cannot distinguish between? Should they be combined? Are there hypotheses that are missing? Do the existing hypotheses need to be refined?
  5. Draw tentative conclusions about the relative likelihood of each hypothesis. Proceed by trying to disprove the hypotheses rather than prove them.
    • Now work down the columns and let the hypotheses compete for your favour.
    • It only takes one piece of evidence to disprove a hypothesis
    • The hypothesis with the fewest pieces of evidence against it (or the lowest weighted pieces of evidence) is likely the correct one
    • If you disagree with the "most correct" hypothesis, it is because you omitted from the matrix one or more factors that have an important influence on your thinking (or because the weights do not represent your true thinking); adjust the matrix appropriately
    • The matrix does not make the decision, you do; the matrix reflects your judgment of what is important and how these important factors relate to the probability of each hypothesis
    • If there is a disagreement about conclusions now or after publishing the results, this matrix can be used to pinpoint the area of disagreement and the subsequent discussion can then focus productively on the ultimate source of the differences.
  6. Analyze how sensitive your conclusion is to a few critical items of evidence. Consider the consequences for your analysis if that evidence were wrong, misleading, or subject to a different interpretation.
    • What if one of the key pieces of evidence were wrong? Which hypothesis would be the most likely then? It may appear unlikely, but what scenarios could be imagined in which the evidence is in fact incorrect? Can these pieces of evidence be verified?
    • When filing the conclusions, also list the key assumptions that the conclusion is based on, noting that the outcome depends on the assumptions.
  7. Report conclusions. Discuss the relative likelihood of all the hypotheses, not just the most likely one.
    • This allows decision makers to develop contingency plans if one of the less likely hypotheses proves true.
  8. Identify milestones for future observation that may indicate events are taking a different course than expected.
    • Detail what new information or change in situation would cause you to change the relative probabilities of the hypotheses.
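
A minimal sketch of steps 3 to 5 (my own illustration; the hypotheses, evidence, weights, and ratings are all hypothetical): rate each piece of evidence against each hypothesis, drop rows with no diagnostic value, then rank hypotheses by the weighted evidence against them.

```python
# Toy ACH matrix: hypotheses across the top, evidence down the side.
# "C" = consistent, "I" = inconsistent, "N" = neutral/ambiguous.

matrix = {
    # evidence: (weight, {hypothesis: rating})
    "E1": (1.0, {"H1": "C", "H2": "C", "H3": "C"}),  # fits all: no diagnostic value
    "E2": (2.0, {"H1": "C", "H2": "I", "H3": "I"}),
    "E3": (1.0, {"H1": "I", "H2": "C", "H3": "N"}),
    "E4": (1.5, {"H1": "C", "H2": "C", "H3": "I"}),
}

# Step 4: drop rows that do not distinguish between hypotheses.
diagnostic = {e: (w, r) for e, (w, r) in matrix.items()
              if len(set(r.values())) > 1}

# Step 5: work down the columns; rank by weighted evidence *against* each
# hypothesis, since the aim is to disprove rather than confirm.
scores = {h: sum(w for w, r in diagnostic.values() if r[h] == "I")
          for h in ["H1", "H2", "H3"]}

for h, s in sorted(scores.items(), key=lambda kv: kv[1]):
    print(f"{h}: weighted evidence against = {s}")
# H1: 1.0, H2: 2.0, H3: 3.5 -- H1 survives best.
```

The matrix does not make the decision; it only records the judgments (ratings and weights) that the conclusion rests on, so any disagreement can be traced back to specific cells.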

Part III - Cognitive Biases

  • Awareness of biases does not lessen their impact

Biases in the Evaluation of Evidence

  • The Vividness Criterion
    • Information received directly is likely to have a greater impact than secondhand information that may have greater evidential value.
    • Case histories and anecdotes will have a greater impact than abstract but aggregate or statistical data. (Eg. "Despite all the evidence linking smoking to lung cancer, I know a man who smoked 3 packs a day and lived to 99.")
  • Absence of Evidence
    • Even experts in a knowledge area have trouble recognizing and incorporating into their judgments the fact that not all causes are necessarily present and obvious.
  • Oversensitivity to Consistency
    • Making decisions based on consistent data is the logical choice in many cases; however, if the data is highly correlated or redundant, many related reports may be no more informative than a single report.
    • Intuitively, we make the mistake of treating small samples as though they were large ones. Most people don't have an intuitive sense of how large a sample has to be to be considered representative.
    • If only small amounts of data are available, conclusions must be drawn, but the confidence of the conclusion must be low regardless of the consistency.
  • Coping with Evidence of Uncertain Accuracy
    • Analysts have difficulty propagating uncertainty through chains of inference. For example, if a source judged 75% reliable reports that Y will happen if Z happens, and Z itself seems 75% likely, then Y is only 0.75 × 0.75 ≈ 56% likely.
    • However, not all sources can be so neatly categorized. Eg. if a source's motivation cannot be determined, the evidence has to stand on its own merit.
  • Persistence of Impressions Based on Discredited Evidence
    • Once linkages have been formed from information and used in mental schemata, the conclusions, impressions, and linkages remain even if that information is discredited.
    • The effect can be so strong that the analyst doubts the report that the information has been discredited.

Biases in Perception of Cause and Effect

  • "When we observe one billiard ball striking another and then watch the previously stationary ball begin to move, we are not perceiving cause and effect. The conclusion that one ball caused the other to move results only from a complex process of inference, not from direct sensory perception. That inference is based on the juxtaposition of events in time and space plus some theory or logical explanation as to why this happens."
  • "Recognizing that the historical or narrative mode of analysis involves telling a coherent story helps explain the many disagreements among analysts, inasmuch as coherence is a subjective concept. It assumes some prior beliefs or mental model about what goes with what. More relevant to this discussion, the use of coherence rather than scientific observation as the criterion for judging truth leads to biases that presumably influence all analysts to some degree… If analysts tend to favor certain types of explanations as more coherent than others, they will be biased in favor of those explanations."
  • Bias in Favour of Causal Explanations:
    • "one can find an apparent pattern in almost any set of data or create a coherent narrative from any set of events."
    • "When experimental results deviated from expectations, [a group of psychologists being studies] rarely attributed the deviation to variance in the sample. They were always able to come up with a more persuasive causal explanation for the discrepancy."
  • Bias Favouring Perception of Centralized Direction
    • "most people are slow to perceive accidents, unintended consequences, coincidences, and small causes leading to large effects. Instead, coordinated actions, plans and conspiracies are seen."
    • The effect of this bias is incorrect conclusions about an organization's actions and an overestimation of the value of isolated events in determining an organization's policy.
  • Similarity of Cause and Effect
    • The bias that prefers to believe that large consequences can only arise from large causes and that large causes must produce large consequences.
  • Internal vs. External Causes of Behaviour
    • The bias that prefers to attribute negative behaviour in others to their internal disposition while preferring to attribute positive behaviour in others to external factors.
    • The opposite is true as well; we prefer to attribute positive behaviour in ourselves (or "our side") to our internal disposition while preferring to attribute negative behaviour in ourselves to external factors.
  • Overestimating Our Own Importance
    • "Many surveys and laboratory experiments have shown that people generally perceive their own actions as the cause of their successes but not of their failures." We instead attribute failure to achieve our goals to external, uncontrollable, or unforeseen causes.
    • "People sometimes fail to recognize that actions that seem directed at them are actually the unintended consequence of decisions made for other reasons."
  • Illusory Correlation: perceiving a correlation between events that are not in fact correlated.

Biases in Estimating Probabilities

  • Availability Rule
    • Rule of thumb whereby one gauges the likelihood of an event by how easily instances of similar events come to mind. Events that are vivid or personal come to mind more easily and so take on greater weight.
    • Likelihood can also be gauged by how many scenarios one can imagine that would lead to the event (again, vividness lends greater weight).
  • Anchoring
    • "Typically, however, [whatever figure one starts with] serves as an anchor or drag that reduces the amount of adjustment, so the final estimate remains closer to the starting point than it ought to be."
  • Expression of Uncertainty
    • Using ambiguous terms like "probably" or "(un)likely" or even "extremely (un)likely" will tend to cause the reader to fit the fact into his/her preconceived opinion about the likelihood of an event. Using actual percentages or ranges of percentages decreases this effect.
  • Assessing Probability of a Scenario
    • People tend to base the likelihood of a scenario on its detail or vividness, or to take the average probability of the events in the scenario.
    • The probability of a scenario is the mathematical product of the probabilities of each event in the scenario. (Eg. a five-step scenario whose steps are each 80% likely has probability 0.8^5 ≈ 33%, far below the 80% "average".)
  • Base-Rate Fallacy
    • Failure to take into account existing data (eg. known base rates) when presented with data regarding the case at hand (a worked sketch follows this list).
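
A worked sketch of the base-rate fallacy (the numbers are hypothetical, echoing the classic cab problem): a report from a source who is right 80% of the time, about an event with a 15% base rate, is far weaker than intuition suggests.

```python
# Bayes' theorem: combine the base rate with the source's reliability.

def posterior(base_rate: float, true_pos: float, false_pos: float) -> float:
    """P(event | report): probability the event is real given the report."""
    p_report = true_pos * base_rate + false_pos * (1 - base_rate)
    return true_pos * base_rate / p_report

# A source who correctly reports the event 80% of the time and falsely
# reports it 20% of the time, for an event with a 15% base rate.
print(posterior(base_rate=0.15, true_pos=0.80, false_pos=0.20))  # ~0.41
```

Ignoring the base rate, one would put the probability near 80%; taking it into account, the report is true only about 41% of the time.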

Hindsight Biases in Evaluation of Intelligence Reporting

Hindsight biases are not the result of lack of objectivity or self-interest, but are real biases that experimentation shows cannot easily be compensated for even when aware of them.

  • The Analyst's Perspective
    • Analysts tend to be less surprised by events after the fact than they should have been given their estimates prior to the event.
  • The Consumer's Perspective
    • Consumers tend, in hindsight, to believe that they already knew the new information given in an analyst's report.
  • The Overseer's Perspective
    • Judging the likelihood of possible outcomes after the outcome is known inflates the perceived likelihood of the event that actually did occur, by as much as a factor of two.

Part IV - Conclusions
