Conceptual Blending Across Domains

In 1928, Alexander Fleming went on vacation and left a petri dish uncovered. A mold spore drifted in and killed the bacteria around it. Fleming noticed and, crucially, made the connection: the mold’s bactericidal properties could be the basis of a medicine. The discovery of penicillin is usually told as a story about luck, but it’s really a story about conceptual blending — the ability to see a structural connection between two things that don’t obviously belong together (a contaminant in a laboratory and a treatment for disease).

Most breakthrough ideas have this structure. They’re not generated from within a single domain by thinking harder about that domain’s existing concepts. They emerge from the collision of concepts from different domains — when someone notices that a pattern in one field maps onto an unsolved problem in another. The history of ideas is, to a striking degree, the history of these cross-domain connections.

The problem is that humans are terrible at making them systematically. We can make them accidentally (Fleming’s petri dish) or through the slow accumulation of multidisciplinary expertise (a career spent working across fields). But we can’t reliably generate cross-domain connections on demand. Our knowledge is organized into silos — what we know about immunology is stored separately from what we know about organizational design, even though the structural parallels are deep.

AI doesn’t have this problem. Its knowledge isn’t siloed. Concepts from every domain exist in the same representational space, and the model can traverse between them freely. This makes AI a natural engine for conceptual blending — arguably the most powerful creative application of large language models, and the one least explored by people who use AI primarily for writing and coding.

Fauconnier and Turner’s Blending Theory

Before we get to prompts and techniques, it’s worth understanding the theoretical foundation. Gilles Fauconnier and Mark Turner’s conceptual blending theory, developed in the 1990s and laid out in their 2002 book The Way We Think, provides the most rigorous framework for understanding how cross-domain connections work.

The theory identifies four mental spaces in a conceptual blend:

Input Space 1: The concepts and structure from the first domain. For example, the immune system — with its concepts of self/non-self distinction, adaptive response, memory, and distributed defense.

Input Space 2: The concepts and structure from the second domain. For example, cybersecurity — with its concepts of authentication, intrusion detection, incident response, and defense in depth.

Generic Space: The abstract structure that the two input spaces share. In our example: both are systems that must distinguish between legitimate and illegitimate actors, both must respond to threats that are constantly evolving, both must balance sensitivity (catching threats) against specificity (not disrupting legitimate activity).

Blended Space: The new conceptual structure that emerges from the blend — ideas, approaches, and insights that exist in neither input space individually but arise from their combination.

The blended space is where the magic happens. It’s not just a metaphor or analogy — it’s a new conceptual structure that can generate ideas that neither domain would produce on its own.

Here’s what makes AI useful for this: identifying the generic space — the abstract structural similarities between two domains — is the hardest part of conceptual blending. It requires holding both domains in mind simultaneously and finding the mapping between them. Humans can do this, but it’s cognitively expensive and unreliable. AI can do it systematically, rapidly, and with access to a much larger set of domain knowledge than any individual human possesses.

The Core Blending Prompt

Here’s the prompt pattern I use for systematic conceptual blending:

I want to perform a structured conceptual blend between two domains.

DOMAIN A: [First domain, described in detail]
Key concepts in Domain A:
- [Concept 1]
- [Concept 2]
- [Concept 3]
- [Characteristic problems and solutions]

DOMAIN B: [Second domain, described in detail]
Key concepts in Domain B:
- [Concept 1]
- [Concept 2]
- [Concept 3]
- [Characteristic problems and solutions]

Please perform the following analysis:

1. STRUCTURAL MAPPING: What are the deep structural parallels between
   these two domains? Not surface similarities — structural ones. What
   roles, relationships, and dynamics in Domain A correspond to roles,
   relationships, and dynamics in Domain B?

2. GENERIC SPACE: What is the abstract structure that both domains share?
   Describe it in domain-neutral terms.

3. NOVEL INSIGHTS: Based on the structural mapping, what insights from
   Domain A could generate new approaches in Domain B? And vice versa?
   Be specific — don't just note similarities, generate actionable ideas.

4. DISANALOGIES: Where does the mapping break down? What's importantly
   different between the domains that limits the usefulness of the blend?
   This is as important as the similarities.

The fourth step — disanalogies — is crucial and often omitted. Without it, conceptual blending degenerates into loose metaphor. “Organizations are like organisms!” Sure, in some ways. In other ways, they’re nothing alike, and treating them as alike in those dimensions produces bad thinking. The disanalogies tell you where the blend is informative and where it’s misleading.
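The prompt pattern above is easy to reuse programmatically. Here is a minimal sketch of it as a fill-in template; `build_blend_prompt` and its parameter names are illustrative, and nothing here depends on any particular AI client library.

```python
# The core blending prompt as a reusable template. The analysis steps are
# condensed from the pattern above; keeping all four in the template means
# the disanalogies step can't be silently dropped.

BLEND_TEMPLATE = """I want to perform a structured conceptual blend between two domains.

DOMAIN A: {domain_a}
Key concepts in Domain A:
{concepts_a}

DOMAIN B: {domain_b}
Key concepts in Domain B:
{concepts_b}

Please perform the following analysis:

1. STRUCTURAL MAPPING: What are the deep structural parallels between these
   two domains? Not surface similarities -- structural ones.
2. GENERIC SPACE: What abstract structure do both domains share? Describe it
   in domain-neutral terms.
3. NOVEL INSIGHTS: What insights from Domain A could generate new approaches
   in Domain B, and vice versa? Be specific and actionable.
4. DISANALOGIES: Where does the mapping break down? This is as important as
   the similarities."""


def build_blend_prompt(domain_a, concepts_a, domain_b, concepts_b):
    """Fill the template; concepts are given as plain lists of strings."""
    def bullets(items):
        return "\n".join(f"- {item}" for item in items)
    return BLEND_TEMPLATE.format(
        domain_a=domain_a, concepts_a=bullets(concepts_a),
        domain_b=domain_b, concepts_b=bullets(concepts_b),
    )
```

Send the returned string to whichever model you use.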

Worked Example 1: Immune System + Cybersecurity

Let me walk through a full blend to show what this produces.

Domain A: The Immune System

  • Innate immunity (fast, general, first line of defense)
  • Adaptive immunity (slow, specific, learned from exposure)
  • Self/non-self distinction via molecular markers
  • Immune memory (faster response to previously encountered threats)
  • Autoimmune disorders (system attacks itself)
  • Tolerance mechanisms (preventing response to benign non-self)
  • Distributed architecture (no central command)

Domain B: Cybersecurity

  • Perimeter defense (firewalls, network boundaries)
  • Intrusion detection systems
  • Authentication and authorization
  • Threat intelligence and signature matching
  • False positives and alert fatigue
  • Incident response and recovery
  • Security operations center (centralized monitoring)

Structural mapping (AI output, condensed):

The deepest structural parallel is that both systems face an adversary that evolves to circumvent defenses. This creates an arms race dynamic that makes any static defense eventually obsolete.

Mapping:

  • Innate immunity maps to perimeter defense and basic security controls. Both are fast, general, and catch common threats but miss novel ones.
  • Adaptive immunity maps to… actually, this is where cybersecurity has a significant gap. Most cybersecurity systems don’t have a true adaptive component that automatically develops specific defenses against new threats based on exposure. They have signature updates, which are more like vaccinations administered by an external party than true adaptive immunity.
  • Self/non-self distinction maps to authentication, but with a crucial difference: the immune system’s self-markers are inherent to every cell. In cybersecurity, authentication is a checkpoint you pass through, not an inherent property of every packet.
  • Immune memory maps to threat intelligence databases. Both enable faster recognition of previously encountered threats.
  • Autoimmune disorders map to false positives and overly aggressive security controls that block legitimate activity.
  • The immune system’s distributed, decentralized architecture maps poorly to cybersecurity’s typical centralized SOC model. This is a disanalogy that might be informative.

Novel insights generated:

  1. The missing adaptive layer. Cybersecurity has innate-equivalent defenses (firewalls, basic rules) and externally administered “vaccines” (signature updates), but it lacks a true adaptive immunity analogue — a system that automatically develops specific defenses to novel threats based on direct exposure. This suggests a design direction: what would a cybersecurity system look like that could encounter a new attack pattern, develop a specific detection/response mechanism for it, and then deploy that mechanism — all without human intervention?

  2. Inherent vs. checkpoint authentication. The immune system doesn’t have “checkpoints” where cells prove their identity. Instead, every cell carries molecular markers that continuously identify it as self. What if network packets carried inherent, continuous authentication markers rather than proving identity only at perimeter checkpoints? This maps roughly to zero-trust architecture principles but suggests an even more radical approach: cryptographic “self-markers” embedded in every packet at creation.

  3. Tolerance mechanisms. The immune system has sophisticated mechanisms for tolerating benign non-self (like gut bacteria). Cybersecurity systems are notably bad at this — they tend to treat all unknown activity as equally suspicious, leading to alert fatigue. An immune-inspired approach would develop explicit “tolerance” for known-benign anomalous patterns.

  4. Decentralized immune response. The immune system operates without central command — individual immune cells make local decisions based on local information. Cybersecurity’s reliance on centralized SOCs creates a single point of failure and a bottleneck. What would a decentralized security architecture look like, where individual endpoints have the autonomy and capability to mount local defensive responses?

Disanalogies:

  • The immune system evolved over hundreds of millions of years. You can’t evolve a cybersecurity system — you have to design it. Evolutionary optimization is not a viable approach for security architecture.
  • The immune system can tolerate significant collateral damage (killing infected cells). Cybersecurity systems generally cannot tolerate “killing” legitimate processes or data.
  • Biological adversaries (pathogens) don’t have intelligence or strategic planning. Cyber adversaries do. The arms race dynamic is fundamentally different when your adversary can observe your defenses and strategically engineer around them.

This is a genuinely useful blend. Insight #1 — the missing adaptive layer — points toward a real gap in current cybersecurity architecture. Insight #4 — decentralized response — challenges a fundamental assumption of most security operations. And the disanalogies prevent the blend from being taken too far.

Worked Example 2: Jazz Improvisation + Agile Software Development

Domain A: Jazz Improvisation

  • Shared musical vocabulary (scales, chord progressions, standard forms)
  • Real-time adaptation to what other musicians are playing
  • Tension between structure (the form) and freedom (the solo)
  • The “rhythm section” provides a stable foundation for experimentation
  • Trading fours/eights (musicians take turns soloing)
  • The head-solos-head structure (start with the melody, improvise, return)
  • “Listening” as the primary skill — responding to the emergent music

Domain B: Agile Software Development

  • Sprint cycles and iterative delivery
  • Cross-functional teams and collective ownership
  • User stories as shared vocabulary
  • Sprint reviews and retrospectives
  • Product backlog as evolving plan
  • The tension between planning and responding to change
  • Standup meetings as coordination mechanism

Structural mapping (AI output, condensed):

The core structural parallel is the management of structured improvisation — both are systems for producing creative work under real-time conditions where the outcome can’t be fully specified in advance.

Mapping:

  • Chord changes map to sprint goals — they provide the harmonic/directional framework within which improvisation happens.
  • The rhythm section (bass, drums, piano comping) maps to the platform/infrastructure team or the stable architectural foundation — it provides the steady base that makes experimentation by other players possible and safe.
  • Trading fours maps to pair programming or mob programming sessions where developers take turns driving.
  • “Listening” — the jazz musician’s primary skill — maps to… what, exactly? This is where the blend gets interesting. Agile talks about “responding to change” but doesn’t have a direct analogue for the intense, real-time listening that jazz musicians do. The closest agile practice is the standup, but a daily standup is a pale shadow of the continuous, real-time awareness that a jazz ensemble maintains.

Novel insights:

  1. The listening deficit. Jazz ensembles achieve coordination through continuous mutual awareness — every musician is simultaneously performing and listening to everyone else. Agile teams achieve coordination through periodic synchronization points (standups, reviews). This suggests that agile teams might be dramatically underinvesting in continuous awareness mechanisms. What would “always-on listening” look like for a software team? Shared IDE sessions? Continuous integration dashboards that everyone watches? Open audio channels?

  2. The rhythm section principle. In jazz, the quality of improvisation is directly proportional to the reliability of the rhythm section. A great rhythm section makes everyone sound better; a poor one makes everyone sound worse. Translated: the quality of feature development is directly proportional to the reliability of the underlying platform and infrastructure. Teams that underinvest in their “rhythm section” (CI/CD, testing infrastructure, developer experience) will see degraded “improvisation” (feature development) no matter how talented the “soloists” (developers) are.

  3. Head-solos-head structure. Jazz performances start with a clear statement of the theme (the head), then explore variations (solos), then return to the theme. This structure ensures that the audience (and the musicians) never lose sight of what the piece is about, even during extended improvisation. Agile sprints could adopt this more explicitly: start with a clear statement of the sprint’s “theme” (not just goals — the underlying intent), allow exploratory work in the middle, and end by explicitly reconnecting to the theme. This is subtly different from current sprint review practice, which evaluates whether goals were met rather than whether the sprint’s work cohered around a theme.

  4. Shared vocabulary depth. Jazz musicians spend years internalizing scales, chord voicings, and standard forms before they can improvise effectively. The depth of shared vocabulary directly determines the sophistication of the improvisation. What’s the equivalent for agile teams? Shared design patterns? Shared architectural principles? Shared understanding of the codebase? This suggests that the “onboarding” period for new team members — the period before they can productively “improvise” — is determined not by their individual skill but by the depth of shared vocabulary they’ve internalized.

Worked Example 3: Evolutionary Biology + Market Strategy

I’ll present this one more briefly to show the technique applied to a business context.

Key mapping: Both evolution and markets are selection environments where agents (organisms/companies) compete for resources, and successful strategies are retained and amplified while unsuccessful ones are eliminated.

The blend’s most valuable insight:

Evolution doesn’t optimize for “best” — it optimizes for “fit enough to survive in this specific environment.” When the environment changes, previously optimal organisms may go extinct while previously marginal organisms thrive. The equivalent in market strategy: optimizing for the current competitive environment is dangerous because it produces companies that are maximally adapted to conditions that may change. The organisms that survive environmental shifts are the ones with slack — unexploited capabilities that aren’t useful in the current environment but become critical when conditions change.

Actionable output: The blend suggests that companies should deliberately maintain capabilities that are currently unprofitable — not as charity, but as option value against environmental change. This is the biological equivalent of maintaining genetic diversity. A portfolio of “unfit” capabilities is an insurance policy against a future you can’t predict.

Key disanalogy: Organisms can’t choose to evolve; companies can choose to change. This means companies can adopt strategies that are unavailable to biological organisms — like deliberately self-disrupting, or investing in capabilities that natural selection would eliminate. The strategic implication is that companies should do things that “evolution” (competitive market pressure) would punish, precisely because the ability to do so is their advantage over pure evolutionary dynamics.

The Blend Selection Problem

Not all blends are useful. Connecting any two domains will produce some mapping, but many of those mappings are superficial — they note surface similarities that don’t generate useful insights. The challenge is selecting domain pairs that will produce generative blends.

Here are the criteria I use:

Structural depth. The two domains should share deep structural features, not just surface similarities. “Companies are like families” has some surface similarity (hierarchy, roles, conflicts) but limited structural depth. “Immune systems are like cybersecurity systems” has deep structural similarity (adversarial dynamics, evolving threats, detection/response cycles).

Asymmetric maturity. The most productive blends involve one domain that’s more theoretically mature or better understood than the other. The mature domain provides well-developed concepts and frameworks that can be imported into the less mature domain. Immunology is more theoretically mature than cybersecurity, which is why the blend direction (immunology → cybersecurity) is more productive than the reverse.

Different optimization histories. Domains that have been optimized by different forces (evolution vs. engineering, individual practice vs. organizational process, physical constraints vs. information constraints) tend to produce richer blends because they’ve developed different solutions to structurally similar problems.

Sufficient distance. Domains that are too close (e.g., marketing and sales) produce trivially obvious mappings. Domains that are too far apart (e.g., quantum physics and cooking) produce mappings that are mostly metaphorical. The sweet spot is domains that are different enough to be surprising but similar enough to be informative.
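The four criteria can be treated as a rough screening rubric before you invest in a full blend. A sketch follows; the 0–3 scale and the equal weighting are assumptions for illustration, not calibrated values.

```python
# The four selection criteria as a screening rubric. Scores are your own
# judgments on a 0-3 scale; the class and field names are illustrative.

from dataclasses import dataclass


@dataclass
class BlendPair:
    domain_a: str
    domain_b: str
    structural_depth: int       # 0 = surface only, 3 = deep shared dynamics
    asymmetric_maturity: int    # 0 = equally mature, 3 = strong asymmetry
    optimization_contrast: int  # 0 = same forces shaped both, 3 = very different
    distance: int               # 0 = too close or too far, 3 = the sweet spot

    def score(self) -> int:
        # Sufficient distance acts as a gate: a pair that is too close or
        # too far yields trivial or purely metaphorical mappings regardless
        # of its other merits.
        if self.distance == 0:
            return 0
        return (self.structural_depth + self.asymmetric_maturity
                + self.optimization_contrast + self.distance)
```

For example, an immunology/cybersecurity pair scores high on every axis, while a marketing/sales pair is gated out by insufficient distance.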

You can use the AI to help select productive blend pairs:

I'm working on a problem in [YOUR DOMAIN]. I want to find conceptual
blends with other domains that might generate novel approaches.

Suggest 5 domains that share deep structural features with [YOUR DOMAIN]
but come from very different contexts. For each, explain:
1. What structural features they share
2. What the source domain has figured out that the target domain hasn't
3. A specific concept from the source domain that might transfer productively

Prioritize domains that are surprising — I already know about the
obvious analogues.

Evaluating Blend Quality

How do you know if a blend has produced genuine insight or just a clever-sounding metaphor? Here’s my assessment framework:

The Specificity Test. Does the blend generate specific ideas or just vague parallels? “Organizations are like organisms” is vague. “Organizations should maintain unprofitable capabilities as option value against environmental change, analogous to genetic diversity” is specific. If you can’t derive a specific action or design decision from the blend, it’s a metaphor, not an insight.

The Novelty Test. Does the blend tell you something you didn’t already know? If the insight from the blend is something you could have arrived at through straightforward thinking about your own domain, the blend isn’t adding value — it’s just providing a fancy way of stating the obvious.

The Robustness Test. Does the insight survive the disanalogies? Once you identify where the mapping breaks down, does the insight still hold? If the insight depends on a structural feature that’s present in the source domain but absent in the target domain, it doesn’t transfer.

The Mechanism Test. Can you identify a causal mechanism for why the insight from the source domain would work in the target domain? “This works in biology because of X; X is present in my domain because of Y; therefore it should work here” is a mechanism argument. “This works in biology so maybe it works here” is wishful thinking.

The So-What Test. The simplest and most brutal test. If the insight is true, what would you do differently? If the answer is “nothing,” the insight isn’t actionable and therefore isn’t useful for practical purposes, however intellectually interesting it might be.
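The five tests above can be run as an explicit checklist. In this sketch the test names come straight from the framework, but the yes/no answers are your own judgments — the code only keeps you honest about answering all five.

```python
# The five quality tests as a checklist. An unanswered test counts as a
# failure, so a blend can't pass by omission.

QUALITY_TESTS = {
    "specificity": "Can you derive a specific action or design decision?",
    "novelty": "Does it tell you something you couldn't reach from within your domain?",
    "robustness": "Does the insight survive the identified disanalogies?",
    "mechanism": "Can you state causally why the transfer should work?",
    "so_what": "Would you actually do something differently if it's true?",
}


def failed_tests(judgments: dict) -> list:
    """Return the names of tests the blend fails."""
    return [name for name in QUALITY_TESTS if not judgments.get(name, False)]
```

A blend that passes all five is worth developing further; anything that fails the specificity or so-what test is entertainment, not insight.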

Advanced Technique: Iterative Blending

Single-round blending produces useful results, but iterative blending — where you take the output of one blend and use it as input for further exploration — can go deeper.

Round 1: Perform the initial blend using the core prompt pattern.

Round 2: Take the most promising insight from Round 1 and drill into it:

In the previous blend, you identified [INSIGHT]. I want to develop this
further.

In [SOURCE DOMAIN], this concept has been developed extensively. What
are the detailed mechanisms, the known failure modes, and the edge cases?

Now map each of those details onto [TARGET DOMAIN]. Where does the
detailed mapping hold up, and where does it break down? What specific
design decisions or strategies does the detailed mapping suggest?

Round 3: Stress-test the developed insight:

We've developed [DETAILED INSIGHT] by blending concepts from [SOURCE]
and [TARGET]. Now I want to stress-test this.

1. What would someone deeply expert in [TARGET DOMAIN] object to about
   this insight? What domain-specific factors might make it inapplicable?
2. Is there evidence from [TARGET DOMAIN] that this approach has been
   tried and failed? If so, why?
3. What would a minimal experiment look like to test whether this insight
   actually works in [TARGET DOMAIN]?

This three-round process takes a conceptual blend from “interesting metaphor” to “testable hypothesis.” That’s the difference between intellectual entertainment and practical creativity.
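The three rounds chain naturally into a small driver loop. In this sketch, `ask` is a placeholder for whatever model call you use and `pick` stands in for the human judgment between rounds — both are assumptions, not a real API, and the round prompts are condensed from the patterns above.

```python
# Blend -> develop -> stress-test, with a human picking the insight to carry
# forward after each of the first two rounds.

ROUND2 = ("In the previous blend, you identified: {insight}\n"
          "In {source}, what are the detailed mechanisms, known failure modes, "
          "and edge cases? Map each onto {target}: where does the detailed "
          "mapping hold up, and where does it break down?")

ROUND3 = ("We've developed this insight by blending {source} and {target}: "
          "{insight}\n"
          "1. What would someone deeply expert in {target} object to?\n"
          "2. Has this approach been tried and failed in {target}? Why?\n"
          "3. What minimal experiment would test it?")


def iterative_blend(ask, pick, core_prompt, source, target):
    """Run the three rounds; `ask` sends a prompt to the model,
    `pick` selects the most promising insight from a response."""
    r1 = ask(core_prompt)
    r2 = ask(ROUND2.format(insight=pick(r1), source=source, target=target))
    r3 = ask(ROUND3.format(insight=pick(r2), source=source, target=target))
    return r1, r2, r3
```

`ask` could wrap any chat API; `pick` can be as simple as pasting in the paragraph you judged most promising.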

The Role of AI in Blending

Let me be explicit about what the AI is doing here and what you’re doing.

The AI is good at: Identifying structural mappings between domains. It has absorbed concepts from essentially every field, and those concepts exist in the same representational space. The AI can traverse between domains in ways that would require years of multidisciplinary study for a human.

The AI is bad at: Evaluating whether a mapping is genuinely useful or merely clever. It will produce blends that sound impressive but don’t survive scrutiny. It will sometimes force a mapping where one doesn’t really exist, because it’s optimizing for a coherent response rather than for truth.

You are good at: Evaluating whether the AI’s mappings are relevant to your actual problem. You know your domain, your constraints, your goals. You can tell whether an insight from evolutionary biology is actually applicable to your market strategy or whether it just sounds applicable.

You are bad at: Generating the mappings in the first place. Your knowledge is siloed, your attention is limited, and you can’t hold two complex domains in mind simultaneously with enough fidelity to identify structural parallels.

The division of labor is clear: the AI generates, you evaluate. The AI proposes blends, you test them. The AI identifies mappings, you decide which ones matter. This is the fundamental pattern of effective human-AI collaboration for creative thinking, and it’s particularly pronounced in conceptual blending.

Common Mistakes in Conceptual Blending

Stopping at metaphor. “Business is like war” is a metaphor. It’s not a conceptual blend. A blend requires identifying specific structural mappings and using them to generate specific insights. If your blend doesn’t produce a concrete idea you didn’t have before, you haven’t blended — you’ve just analogized.

Ignoring disanalogies. Every blend has limits. If you don’t explicitly identify where the mapping breaks down, you’ll overapply the blend and make mistakes. The disanalogies are as informative as the analogies.

Blending domains you already know are connected. “Software development is like building construction” is a well-trodden mapping. You won’t find novel insights there because everyone in software has already mined that analogy for what it’s worth. The most productive blends connect domains that haven’t been previously connected — or at least haven’t been connected at a structural level.

Using the blend to confirm what you already believe. If you choose your source domain because you know it will support your existing approach, you’re not blending — you’re constructing a justification. Choose source domains that might challenge your approach as well as support it.

Treating the blend as proof. A blend can generate hypotheses. It cannot prove them. The fact that something works in immunology doesn’t mean it will work in cybersecurity. It means it’s worth testing in cybersecurity. The blend is a hypothesis generator, not an evidence generator.

A Library of Productive Blend Pairs

Based on extensive experimentation, here are domain pairs that consistently produce rich, generative blends. Use these as starting points.

  • Ecology + Organizational Design: Resource competition, niche construction, keystone species, ecosystem resilience.
  • Epidemiology + Information Spread: Transmission networks, superspreaders, herd immunity, vaccination (inoculation against misinformation).
  • Jazz + Team Coordination: Structured improvisation, shared vocabulary, listening, rhythm section.
  • Immunology + Security Architecture: Adaptive defense, self/non-self, tolerance, distributed response.
  • Evolutionary Biology + Product Strategy: Selection pressure, fitness landscapes, genetic drift, speciation.
  • Urban Planning + Software Architecture: Zoning, infrastructure, traffic flow, emergent vs. planned structure.
  • Cognitive Psychology + UX Design: Mental models, cognitive load, attention, habit formation.
  • Military Logistics + Supply Chain: Force projection, supply lines, fog of war, decentralized command.
  • Fermentation/Brewing + Culture Change: Starter cultures, environmental conditions, patience, irreversibility.
  • Mycology (fungal networks) + Communication Networks: Underground networks, resource sharing, resilience, decomposition.

Each of these pairs has structural depth — not just surface similarities but genuine parallels in dynamics, failure modes, and optimization strategies. They’re starting points for blends that can produce insights you wouldn’t reach by thinking within a single domain.

The prompts are in this chapter. The theory is sound. The technique is straightforward. The only thing it requires from you is a willingness to take seriously the idea that the best solution to your problem might already exist — in a domain you’ve never studied.