Strategic Decision Making

The human brain is spectacularly bad at strategic reasoning, and we should be honest about why. It is not a matter of intelligence. Brilliant people make terrible strategic decisions routinely. The problem is structural: strategic decisions require you to reason about futures you cannot observe, account for competitors whose intentions you cannot read, question investments you have already made, and consider possibilities that threaten your identity, your career, or your organization’s self-image. Your brain was not designed for any of this. It was designed to keep you alive on a savanna where the relevant time horizon was about fifteen minutes.

Every cognitive bias catalogued in the literature shows up at the strategic level, but with amplified consequences: confirmation bias makes you seek evidence for the strategy you have already chosen, sunk cost fallacy makes you cling to failing initiatives, anchoring makes you negotiate from arbitrary starting points, availability bias makes you over-index on recent events, and the planning fallacy makes your timelines fictional. These are not bugs in otherwise rational actors. They are the default operating mode of human cognition applied to problems it was never evolved to handle.

This chapter is about using the techniques from Part III to mitigate — not eliminate, mitigate — these structural disadvantages. AI is useful for strategic thinking not because it is strategically brilliant (it is not) but because it is differently broken. It does not have career anxiety. It does not have sunk cost attachment. It does not care about organizational politics. It does not flinch from unpleasant conclusions. These are precisely the failure modes that make human strategic reasoning unreliable, and their absence in AI makes it a useful counterweight to your own cognitive weaknesses — provided you know how to use it.

Pre-Mortem Analysis: The AI Coroner

The pre-mortem is a well-established technique: before making a decision, you imagine that the decision has already been made and has failed catastrophically, and then you write the story of why it failed. Gary Klein developed the technique in the 1980s, and it works because it gives people psychological permission to voice concerns they would suppress in a normal planning discussion. Saying “this might fail because…” feels disloyal. Saying “in the scenario where this failed, the cause was…” is just analysis.

AI supercharges the pre-mortem because it has no loyalty to suppress. It will write the failure story with genuine enthusiasm, exploring failure modes that the people in the room cannot afford to articulate.

It is one year from now. The decision to [specific decision] has failed catastrophically. The board/leadership/team is conducting a post-mortem. Write that post-mortem report. Include:

  1. The specific chain of events that led to failure
  2. The warning signs that were visible in retrospect but ignored at the time
  3. The assumptions that turned out to be wrong
  4. The alternative decisions that, in retrospect, would have been better
  5. The organizational or cognitive factors that led the team to make this decision despite the risks

Be brutally specific. Use concrete scenarios, not vague generalities. Do not hedge. Write this as a team that is genuinely trying to understand what went wrong.

The fifth point — the organizational and cognitive factors — is where this technique diverges most sharply from a standard risk assessment. A risk assessment lists things that might go wrong. A pre-mortem post-mortem explains why the team did not see it coming, and that explanation is almost always about human factors: groupthink, deference to the highest-paid person in the room, the availability bias that made everyone focus on the most recent competitive threat while ignoring the structural one, the sunk cost attachment to a technology investment that should have been written off.
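
To make the prompt reusable, you can treat it as exactly that: a template. The sketch below is a minimal illustration in Python; the `complete` function is a placeholder for whatever LLM client you use, not a real API, and the helper name is invented.

```python
# A minimal sketch of the pre-mortem prompt as a reusable template.
# `complete` is a placeholder: wire it to the LLM client of your choice.

PREMORTEM_TEMPLATE = """\
It is one year from now. The decision to {decision} has failed
catastrophically. The {audience} is conducting a post-mortem.
Write that post-mortem report. Include:

1. The specific chain of events that led to failure
2. The warning signs that were visible in retrospect but ignored at the time
3. The assumptions that turned out to be wrong
4. The alternative decisions that, in retrospect, would have been better
5. The organizational or cognitive factors that led the team to make this
   decision despite the risks

Be brutally specific. Use concrete scenarios, not vague generalities.
Do not hedge. Write this as a team that is genuinely trying to understand
what went wrong."""

def complete(prompt: str) -> str:
    """Placeholder: send `prompt` to your LLM and return the response text."""
    raise NotImplementedError("wire this to your LLM client")

def run_premortem(decision: str, audience: str = "leadership team") -> str:
    # Fill the template with the specific decision under consideration.
    return complete(PREMORTEM_TEMPLATE.format(decision=decision, audience=audience))
```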

A Real Pre-Mortem

A startup was considering pivoting from a B2B SaaS product to a platform model. The leadership team was enthusiastic — the platform opportunity looked enormous, and several large customers had expressed interest in building on top of the product. They asked the AI to write the failure post-mortem.

The post-mortem was bracing. It identified several failure chains:

Chain 1: The pivot required rebuilding the product’s architecture to support third-party extensions. The estimated timeline was six months. The actual timeline, as with all architectural rewrites, was fourteen months. During those fourteen months, the existing B2B product received minimal investment. Three key customers churned. Revenue declined 30%. By the time the platform launched, the company did not have the runway to acquire platform developers.

Chain 2: The “interest” from large customers was expressed by their innovation teams, who had no procurement authority. When the platform launched, the actual buyers — IT departments — had security and compliance concerns that took another eight months to address. The innovation teams had moved on to other projects.

Chain 3: The team assumed that building a platform was primarily a technical challenge. In fact, it was primarily a go-to-market challenge: building a developer ecosystem requires community management, documentation, developer relations, and a fundamentally different sales motion. The team had none of these capabilities and did not budget for them.

The cognitive factors: The team was anchored on the platform opportunity’s total addressable market without rigorously assessing their probability of capturing meaningful share. They were influenced by survivorship bias — they could name five successful platform pivots (Slack, Shopify, Stripe) but could not name fifty failed ones, because failed pivots do not get written about. The CEO had publicly stated the platform vision at a conference, creating commitment escalation pressure.

The startup did not cancel the pivot. But they restructured it: they maintained investment in the core B2B product during the transition, they validated buyer (not innovator) interest before committing to the full rebuild, and they hired a developer relations lead before writing the first line of platform code. The pre-mortem did not change the decision. It changed the implementation of the decision in ways that addressed the specific failure modes the team had been unable to articulate.

Adversarial Strategy Testing

Pre-mortems address internal failure modes. Adversarial strategy testing addresses external threats — competitors, market shifts, regulatory changes — that are hard to reason about because they require you to think from someone else’s perspective.

The alien minds technique from Chapter 11 is directly applicable:

You are the CEO of [specific competitor]. You have just learned about our strategy to [specific strategy]. Describe:

  1. Your immediate competitive response. What can you do in the next 90 days to counter this?
  2. Your medium-term response. How do you adjust your 12-month roadmap?
  3. What advantages do you have that make you well-positioned to respond?
  4. What would you do to make our strategy actively backfire — not just fail, but leave us worse off than if we had done nothing?

Question 4 is the one that most teams never ask. It is one thing to consider that a competitor might match your move. It is another to consider that a competitor might use your move against you — that your strategy might create an opening for them that would not have existed otherwise.

For example, a B2B company considering a price reduction to gain market share might find, through this exercise, that their main competitor — with deeper pockets and lower costs — would welcome a price war because it would exhaust the smaller company’s margins while barely affecting the larger one. The price reduction would not just fail to gain share; it would accelerate the smaller company’s cash depletion while funding the larger company’s customer acquisition. The strategy would make the competitor’s life easier, not harder.

This is obvious in retrospect. It is not obvious in the planning meeting where everyone is excited about the growth projections from the price reduction model.
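
The asymmetry is easy to verify with back-of-envelope arithmetic. The sketch below uses invented numbers; the point is the shape of the outcome, not the specific figures.

```python
# Back-of-envelope sketch of the price-war asymmetry. All numbers are
# invented for illustration.

def months_of_runway(cash: float, monthly_revenue: float,
                     gross_margin: float, monthly_fixed_costs: float) -> float:
    """Months until cash runs out at the new price point."""
    monthly_burn = monthly_fixed_costs - monthly_revenue * gross_margin
    return float("inf") if monthly_burn <= 0 else cash / monthly_burn

# The same 15-point price cut comes straight out of gross margin for both.
cut = 0.15
small = months_of_runway(cash=1e6, monthly_revenue=500e3,
                         gross_margin=0.60 - cut, monthly_fixed_costs=300e3)
large = months_of_runway(cash=50e6, monthly_revenue=5e6,
                         gross_margin=0.75 - cut, monthly_fixed_costs=2.8e6)

print(f"smaller firm: {small:.0f} months of runway")  # ~13 months
print(f"larger firm:  {large} months of runway")      # inf: still cash-flow positive
```

The same price cut starts a thirteen-month countdown for the smaller firm while leaving the larger one cash-flow positive. That is what "welcoming a price war" means in concrete terms.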

Multi-Competitor Scenario Mapping

For complex competitive landscapes, you can scale the adversarial perspective exercise:

Here is our market and the five major competitors. For each competitor, write a one-page strategic memo as if you were their head of strategy, responding to our planned move. Each memo should reflect that competitor’s specific strengths, weaknesses, culture, and likely priorities. Then write a synthesis: given all five likely responses, what is the actual competitive landscape 12 months after we execute this strategy?

The synthesis is the crucial step. Individual competitive responses are useful but incomplete — the real strategic landscape emerges from the interaction of multiple actors’ responses. Competitor A’s response might create an opportunity for Competitor C that would not exist otherwise, which in turn affects your position in ways that no single-competitor analysis would reveal.

AI is not going to be right about any specific competitor’s response. It lacks inside information, and competitive strategy depends heavily on personalities and internal dynamics that are not publicly available. But the exercise of thinking through multiple interacting responses is valuable regardless of accuracy, because it forces you to see the strategic landscape as a dynamic system rather than a series of bilateral relationships.
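
As a sketch of the mechanics, assuming the same placeholder `complete` function from the pre-mortem example: one memo pass per competitor, then a synthesis pass that sees all the memos at once. The templates and the helper are illustrative, not a prescribed interface.

```python
# Sketch of the memo-then-synthesis loop. `complete` is the same LLM
# placeholder used in the pre-mortem sketch; competitor profiles are
# whatever public context you can assemble.

MEMO_TEMPLATE = """\
You are the head of strategy at {name}. Context on your company:
{profile}

Our planned move: {our_move}

Write a one-page strategic memo responding to this move, reflecting your
company's specific strengths, weaknesses, culture, and likely priorities."""

SYNTHESIS_TEMPLATE = """\
Below are strategic memos, one from each major competitor, responding to
our planned move: {our_move}

{memos}

Synthesis: given all of these likely responses, and the ways they interact
with each other, what is the actual competitive landscape 12 months after
we execute this strategy?"""

def map_competitive_landscape(our_move: str, competitors: dict[str, str]) -> str:
    # One memo per competitor, each written from that competitor's seat.
    memos = {name: complete(MEMO_TEMPLATE.format(name=name, profile=profile,
                                                 our_move=our_move))
             for name, profile in competitors.items()}
    # The synthesis pass sees every memo at once, so interactions between
    # competitors' responses can surface.
    memo_block = "\n\n".join(f"--- {name} ---\n{memo}"
                             for name, memo in memos.items())
    return complete(SYNTHESIS_TEMPLATE.format(our_move=our_move, memos=memo_block))
```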

Hypothesis Generation for Decision Spaces

Strategic decisions are often framed too narrowly. “Should we enter market X?” has two answers. “What are all the ways we could approach market X, and what are the conditions under which each would be the right choice?” has a much richer answer set.

The hypothesis generation technique from Chapter 15 maps directly:

We are considering [strategic question]. Before we evaluate options, I want to map the full decision space. Generate a comprehensive list of strategic options, including:

  1. The obvious options we have probably already considered
  2. Options that combine elements of the obvious options in non-obvious ways
  3. Options that a team in our position would typically not consider because of industry convention, cognitive bias, or organizational constraints
  4. The “do nothing” option, articulated honestly (not as a straw man but as a genuine strategic choice with its own logic)
  5. Options that would require capabilities we do not currently have but could develop or acquire

For each option, state the key assumption that must be true for it to be the best choice.

The last instruction — stating the key assumption — transforms a list of options into a set of testable hypotheses. Instead of debating which option is best (which quickly becomes a contest of rhetoric and authority), the team can debate which assumptions are most likely to be true (which is an empirical question that can often be at least partially tested).
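
One way to hold the team to that discipline is to record each option in a structure whose central field is its key assumption. The schema and the example options below are illustrative, not prescriptive:

```python
# Sketch: each strategic option as a testable hypothesis rather than a
# position to argue for. Field names and example options are illustrative.
from dataclasses import dataclass

@dataclass
class StrategicOption:
    name: str
    description: str
    key_assumption: str           # what must be true for this to be the best choice
    testable_before_commit: bool  # can the assumption be checked cheaply first?
    proposed_test: str | None = None

options = [
    StrategicOption(
        name="enter the adjacent market directly",
        description="Launch the existing product into the neighboring segment.",
        key_assumption="Our current distribution channel reaches those buyers.",
        testable_before_commit=True,
        proposed_test="One-quarter pilot through two existing channel partners.",
    ),
    StrategicOption(
        name="do nothing",
        description="Hold position and keep investing in the core.",
        key_assumption="The market shift is slower than the loudest forecasts claim.",
        testable_before_commit=False,
    ),
]

# The debate is now over assumptions, not rhetoric:
for opt in options:
    status = ("testable before committing" if opt.testable_before_commit
              else "judgment call")
    print(f"{opt.name} -> assumes: {opt.key_assumption} ({status})")
```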

A consumer products company used this approach when considering how to respond to a new direct-to-consumer competitor. The obvious options were: launch their own DTC channel, acquire the competitor, reduce prices in retail, or increase marketing spend. The AI-generated option list included several they had not considered:

  • Strategic partnership with the competitor (rather than competing or acquiring, use their DTC capability while they use your supply chain — a complementary arrangement that neither side could achieve alone)
  • Selective retreat (deliberately cede the low-margin DTC segment and concentrate on the premium retail segment where the competitor has no brand equity)
  • Platform play (offer their supply chain and logistics as a service to multiple DTC brands, turning a competitive threat into a new revenue line)

These were not brilliant ideas that no human could have generated. They were ideas that the team’s framing — “how do we beat this competitor?” — had excluded. The competitive framing made partnership and retreat psychologically unavailable. The AI, which has no competitive ego, generated them without difficulty.

The team ultimately pursued a version of the selective retreat combined with a premium repositioning, which was the option their own framing would never have produced because it felt like losing. It was not losing. It was choosing the battlefield.

Scenario Planning at Scale

Traditional scenario planning, as developed by Shell in the 1970s, involves constructing a small number of contrasting future scenarios (typically two to four) and developing strategies that are robust across all of them. The bottleneck is always scenario construction: it requires a diverse group of thoughtful people working for days to construct scenarios that are genuinely different from each other and from the consensus forecast.

AI can compress the scenario construction phase dramatically. Not because AI scenarios are better than human-constructed ones — they tend to be less rich in detail and less grounded in industry-specific knowledge — but because you can generate a much larger initial set and then use human judgment to select and refine the most interesting ones.

Generate twelve distinct scenarios for [industry/market] in [time horizon]. Each scenario should:

  1. Be internally consistent — the elements of the scenario should reinforce each other
  2. Be plausible — not science fiction, but things that could actually happen given current trends and uncertainties
  3. Be different from the consensus forecast in at least one significant way
  4. Include a brief narrative of how the world got from here to there — the causal chain
  5. Identify which current assumptions it violates

After generating all twelve, categorize them by which key uncertainty they explore (e.g., regulatory, technological, demographic, competitive, macroeconomic) and identify any important uncertainty dimensions that are not represented.

The categorization step is important. It reveals the dimensions of uncertainty, not just the scenarios themselves. If eight of twelve scenarios explore technological uncertainty and none explore regulatory uncertainty, that tells you something about the AI’s (and probably your) attention allocation. The missing dimensions are often the most strategically important, precisely because they are the ones nobody is thinking about.

A healthcare company used this approach and discovered that none of their initial scenarios — human or AI-generated — addressed the possibility of a major pharmaceutical patent cliff occurring simultaneously with a shift in payer reimbursement models. Each of these was considered separately in their planning, but the combination created a scenario where their entire pricing strategy became untenable. It was the interaction between two known uncertainties that had never been considered together.
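
Both the categorization tally and the hunt for unexamined interactions are mechanical enough to sketch. Everything below, from the dimension labels to the assignments, is invented for illustration:

```python
# Sketch: tally which uncertainty dimension each scenario explores, flag
# the dimensions nothing covers, and enumerate pairwise combinations of
# dimensions as candidate "interaction" scenarios (the patent-cliff-meets-
# reimbursement-shift failure above). All labels are illustrative.
from collections import Counter
from itertools import combinations

DIMENSIONS = {"regulatory", "technological", "demographic",
              "competitive", "macroeconomic"}

# Dimension labels for the twelve generated scenarios, assigned by a human
# (or a second AI pass) after generation.
scenario_dimensions = ["technological", "technological", "competitive",
                       "technological", "macroeconomic", "competitive",
                       "technological", "demographic", "technological",
                       "competitive", "technological", "technological"]

coverage = Counter(scenario_dimensions)
missing = DIMENSIONS - set(coverage)
print("coverage:", dict(coverage))
print("unexplored dimensions:", missing)  # e.g. {'regulatory'}

# Every pair of known uncertainties is a candidate combined scenario.
# Ask which pairs no existing scenario addresses together.
for a, b in combinations(sorted(DIMENSIONS), 2):
    print(f"combined scenario to consider: {a} x {b}")
```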

Career Decisions: The Personal Strategic Case

The techniques in this chapter are not limited to organizational strategy. They apply with equal force to personal strategic decisions — career changes, geographic moves, educational investments — where the same cognitive biases operate but with the added intensity of personal identity and anxiety.

Career decisions are especially vulnerable to several biases:

  • Status quo bias: The known career feels safe even when it is not
  • Loss aversion: The potential losses from a change loom larger than the potential gains
  • Identity attachment: “I am a [job title]” makes changing feel like losing part of yourself
  • Social proof: You do what people like you do, which keeps you in a predictable trajectory
  • Narrative bias: You construct a story about your career that makes the next step feel inevitable, when in fact the decision space is much wider

The pre-mortem technique is particularly powerful for career decisions because it forces you to articulate failure modes you are avoiding:

It is three years from now. I took the [new job/career change/risk]. It has gone badly. Write the story of what happened. Be specific about:

  1. What I underestimated about the transition
  2. What skills or relationships I lost that turned out to be more valuable than I realized
  3. What assumptions about the new role/industry/city turned out to be wrong
  4. What personal factors (not just professional) contributed to the failure

And equally important, the reverse:

It is three years from now. I stayed in my current role. It has gone badly. Write the story of what happened. Be specific about:

  1. What opportunities I missed by not moving
  2. What happened to my motivation and growth
  3. What external changes made my “safe” choice less safe than it appeared
  4. What I told myself to justify staying, and how those justifications look in retrospect

Running both pre-mortems side by side is clarifying because it reveals that there is no risk-free option. Staying is not safe — it has its own failure modes, which status quo bias makes invisible. Leaving is not reckless — it has specific, identifiable risks that can be mitigated. When both paths are equally risky, the question shifts from “should I take the risk?” to “which risks am I better equipped to manage?” — which is a much more tractable question.
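
Running them side by side can be as literal as issuing both prompts in one pass, again assuming the placeholder `complete` function from earlier; the helper below is illustrative:

```python
# Sketch: the two career pre-mortems run as a pair, using the same LLM
# placeholder `complete` as the earlier sketches.

LEAVE_PROMPT = """\
It is three years from now. I took the {change}. It has gone badly.
Write the story of what happened. Be specific about:
1. What I underestimated about the transition
2. What skills or relationships I lost that turned out to be more valuable
   than I realized
3. What assumptions about the new role/industry/city turned out to be wrong
4. What personal factors (not just professional) contributed to the failure"""

STAY_PROMPT = """\
It is three years from now. I stayed in my current role. It has gone badly.
Write the story of what happened. Be specific about:
1. What opportunities I missed by not moving
2. What happened to my motivation and growth
3. What external changes made my "safe" choice less safe than it appeared
4. What I told myself to justify staying, and how those justifications look
   in retrospect"""

def dual_premortem(change: str) -> tuple[str, str]:
    """Return (failure story if I leave, failure story if I stay)."""
    return complete(LEAVE_PROMPT.format(change=change)), complete(STAY_PROMPT)
```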

The Structural Advantage of AI in Strategic Thinking

I want to be explicit about why AI is particularly well-suited to strategic thinking, beyond the general cognitive perturbation value we have discussed throughout this book.

Strategic reasoning fails in humans for specific, identifiable reasons. AI does not share most of them:

Organizational politics. In any organization, strategy is intertwined with power. Suggesting that the CEO’s pet project should be cancelled is career-limiting regardless of its strategic merits. AI has no career. It will cheerfully explain why the pet project is strategically indefensible. This does not mean you can use the AI’s analysis directly — you still need to navigate the politics. But knowing the unvarnished strategic truth gives you a foundation that internal analysis cannot provide.

Sunk cost attachment. Humans are terrible at abandoning investments they have already made. The $50 million already spent on a project creates a gravitational pull that distorts all future analysis of that project. AI processes sunk costs as what they are: spent money that is irrelevant to future decisions. When you ask an AI “given everything we have invested, should we continue?” the AI reads “should we continue?” and evaluates forward-looking factors only. This is what economists say we should do. It is not what humans actually do.

Career anxiety. Many strategic recommendations are shaped by the recommender’s career risk rather than the organization’s strategic interest. The safe recommendation is to do something — anything — rather than nothing, because inaction is harder to defend than action, even when inaction is correct. AI does not have a career and will recommend inaction when inaction is the strategically sound choice, which is more often than most organizations are willing to admit.

Identity protection. Organizations develop identities — “we are an innovation company” or “we are a premium brand” — that constrain strategic thinking. Strategies that are inconsistent with the organization’s self-image are literally unthinkable. AI does not share the organization’s self-image and can generate strategies that the organization would consider heretical. Whether to pursue those strategies is a judgment call, but at least they are on the table.

Consensus pressure. Strategic planning in groups converges toward the least objectionable option rather than the best option. AI is not subject to consensus pressure and will maintain a heterodox position if the analysis supports it. This makes it a useful check on group dynamics: if the team’s consensus strategy differs significantly from the AI’s analysis, that difference is worth exploring — not because the AI is right, but because the difference might indicate that consensus pressure has distorted the team’s reasoning.

None of this means AI is good at strategy. It means AI is differently bad at strategy — bad in ways that are complementary to human cognitive weaknesses rather than identical to them. The combination of human strategic judgment (which understands context, relationships, and implementation in ways AI cannot) and AI strategic perturbation (which is immune to the political and psychological factors that corrupt human judgment) is more reliable than either alone.

A Framework for AI-Augmented Strategic Decision Making

Bringing the techniques together into a practical workflow:

1. Frame the decision. Before involving AI, write down the decision as you currently understand it. Include the options you are considering, the criteria you are using, and the timeline. This is your starting mental model.

2. Expand the option space. Use hypothesis generation to identify options you have not considered. Pay particular attention to options that violate your assumptions or your organization’s identity.

3. Stress-test each option. For the top three to five options, run adversarial analysis: competitive response simulation, pre-mortem analysis, and assumption identification.

4. Map the scenarios. Generate a diverse set of future scenarios and evaluate each option against them. Identify which options are robust (perform acceptably across most scenarios) versus which are fragile (perform brilliantly in one scenario and terribly in others).

5. Identify the key assumptions. For each viable option, state the assumption that must be true for it to work. Then assess: can you test this assumption before committing? If so, design the test. If not, assess your confidence honestly.

6. Make the decision. This step is yours. The AI has expanded your option space, stress-tested your assumptions, and revealed your blind spots. The decision itself requires judgment about context, timing, relationships, and implementation that AI cannot provide. Decide.

7. Define the reversal triggers. Before implementing, state the conditions under which you will reconsider. “If customer acquisition cost exceeds $X by month six, we revisit.” This is your pre-commitment to rationality in the face of sunk cost pressure.
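
A sketch of what that pre-commitment can look like once it is written down as checkable conditions rather than good intentions; the metrics, thresholds, and deadlines below are invented:

```python
# Sketch: reversal triggers as pre-committed, checkable conditions.
# Metric names, thresholds, and deadlines are invented for illustration.
from dataclasses import dataclass

@dataclass
class ReversalTrigger:
    metric: str
    threshold: float
    deadline_month: int   # the trigger is evaluated at or before this month
    action: str

    def fires(self, observed: float, month: int) -> bool:
        """True if the metric breached its threshold within the window."""
        return month <= self.deadline_month and observed > self.threshold

triggers = [
    ReversalTrigger("customer acquisition cost ($)", 500.0, 6,
                    "revisit the channel strategy"),
    ReversalTrigger("monthly churn (%)", 4.0, 3,
                    "pause expansion and re-run the pre-mortem"),
]

# At the month-6 review, compare observed metrics to the pre-committed list.
observed = {"customer acquisition cost ($)": 620.0, "monthly churn (%)": 2.1}
for t in triggers:
    if t.fires(observed[t.metric], month=6):
        print(f"TRIGGER: {t.metric} = {observed[t.metric]} -> {t.action}")
```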

The entire process can be completed in a day for most strategic decisions. It does not replace deep industry expertise, market research, or stakeholder analysis. It supplements the strategic reasoning that happens after all of that data has been gathered — the reasoning that is most vulnerable to cognitive bias because it happens in the mind of a human who has preferences, fears, and a career to protect.

The Uncomfortable Conclusion

If you have used these techniques honestly, you will occasionally arrive at strategic conclusions you do not like. The pre-mortem will reveal that the strategy you are emotionally committed to has serious vulnerabilities. The adversarial analysis will show that a competitor is better positioned than you want to believe. The hypothesis generation will surface an option — retreat, pivot, sell — that feels like failure.

This is the technique working, not failing. The purpose of AI-augmented strategic thinking is not to confirm your existing plans. It is to see the strategic landscape as it is, not as you wish it were. What you do with that clear-eyed view is a matter of judgment, courage, and circumstance. But you cannot make a good decision about a reality you refuse to perceive.

The AI does not care about your feelings. In strategic reasoning, that is its most valuable feature.