The Filter Bubble Nobody Intended
In 2011, Eli Pariser published The Filter Bubble: What the Internet Is Hiding from You, and introduced a concept that has become so embedded in our cultural vocabulary that most people can deploy the phrase “filter bubble” without ever having read the book.
This is itself a minor example of the headline illusion we discussed in the last chapter, because what Pariser actually argued is more nuanced and more interesting than the popular version suggests.
The popular version goes something like this: algorithms show you content you agree with, which traps you in an echo chamber, which makes you more extreme, which is destroying democracy.
The actual argument is more complex, and — more importantly for our purposes — the actual problem is bigger than politics and more difficult to solve than “just read diverse sources.”
Understanding the real dynamics of filter bubbles is essential for building an effective information triage system, because any system that helps you consume information more efficiently risks making the bubble problem worse, not better.
So let’s start by getting the facts right.
What Pariser Actually Argued
Pariser’s core observation was that the internet had undergone a fundamental shift from a broadcast medium (everyone sees the same thing) to a personalized medium (everyone sees something different).
Google’s search results, he noted, were different for different people searching the same term. Facebook’s news feed showed different content to different users. Amazon’s recommendations varied based on purchase history.
This personalization, Pariser argued, was invisible to the user. You didn’t know what you weren’t seeing. You couldn’t compare your information environment to someone else’s, because you didn’t have access to their information environment.
The result was a kind of informational solitary confinement: you were locked in a world built from your own past behavior, unable to see beyond it and unable to even perceive the walls.
What He Got Right
Personalization is real and pervasive. This was true in 2011 and is dramatically more true now. Every major information platform personalizes its content delivery based on user behavior. The degree of personalization varies — some platforms are more aggressive than others — but the trend is universal and accelerating.
The personalization is largely invisible. Most users don’t understand how much their information environment has been shaped by algorithms, and platforms have little incentive to make this transparent. You see your feed, your results, your recommendations, and they feel like “the internet.”
They’re not. They’re your internet, curated specifically for you based on your past behavior.
There are real consequences to living in a personalized information environment. When different people see different facts, different stories, and different perspectives, they lose the shared informational foundation that makes productive disagreement possible. This is a genuine problem for democratic society, and Pariser was prescient in identifying it.
What the Evidence Hasn’t Fully Supported
The degree of algorithmic filtering. Several studies since 2011 have found that algorithmic filter bubbles are less hermetic than Pariser suggested.
A 2015 study by Facebook researchers (take the source with appropriate skepticism) found that algorithmic filtering reduced exposure to cross-cutting news by about 5-8%, while users’ own choices reduced it by about 15-20%.
In other words, your own behavior filters more than the algorithm does.
The novelty of the problem. Pariser wrote as though personalized information environments were a new phenomenon created by algorithms. In reality, people have always lived in information bubbles.
Your choice of newspaper, your social circle, your neighborhood, your profession, and your education all shaped what information you encountered long before algorithms existed. Algorithms made the filtering more efficient and more invisible, but they didn’t create the underlying dynamic.
The implied solution. Pariser’s framing suggested that the problem could be solved by making algorithms more transparent or by requiring platforms to show users more diverse content.
Subsequent research has shown that exposure to diverse content doesn’t reliably change people’s minds and can sometimes backfire, making people more entrenched in their existing views.
The problem is more deeply rooted than algorithmic tweaks can address.
Optimization, Not Conspiracy
One of the most important things to understand about filter bubbles is that they’re emergent, not designed.
No one at Google, Facebook, or any other platform sat down and said, “Let’s trap people in ideological echo chambers.”
What happened was more mundane and more difficult to fix.
The platforms optimized for engagement.
Engagement — clicks, time on page, shares, comments, return visits — is the metric that drives revenue. So the platforms built algorithms to maximize engagement. And these algorithms discovered, through billions of iterations of testing and optimization, that people engage more with content that:
- Confirms their existing beliefs
- Aligns with their interests
- Matches the emotional register of content they’ve previously engaged with
- Comes from sources they’ve previously trusted or interacted with
- Is similar to content their social connections have engaged with
None of these findings are surprising. They’re basic human psychology, rediscovered at scale by machine learning systems that were told to maximize a number and found the most effective way to do so.
The filter bubble is what happens when you optimize for engagement over a long enough period. Each recommendation makes your next set of recommendations slightly more like what you’ve already consumed. Over months and years, this iterative narrowing creates an information environment that feels comprehensive (there’s always plenty to read) but is actually quite constrained.
It’s worth pausing on the “feels comprehensive” part, because it’s the key to why filter bubbles are so insidious.
If your information environment felt narrow, you’d notice and take corrective action. But it doesn’t feel narrow. It feels like the whole world. There’s always something new in your feed, always something interesting in your recommendations, always something to read or watch or listen to.
The bubble isn’t empty; it’s full.
It’s just full of a specific slice of reality that has been selected to match your existing patterns.
This is optimization, not conspiracy. The algorithm doesn’t have an agenda; it has a metric. And the metric, applied relentlessly at scale, produces filter bubbles as a natural byproduct.
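To make that feedback loop concrete, here is a toy simulation in Python. It is purely illustrative and bears no resemblance to any real platform's code: the "recommender" weights topics by past clicks, and the "user" is slightly more likely to click topics they have clicked before. All topic names and numbers are invented for the sketch.

```python
import random
from collections import Counter

# Toy simulation of engagement-driven narrowing (illustrative only).
# Ten topics, a recommender that over-weights previously clicked topics,
# and a user who prefers familiar topics when choosing what to click.

TOPICS = [f"topic_{i}" for i in range(10)]

def recommend(history, k=5):
    """Weight each topic by 1 + number of past clicks, then sample k items."""
    counts = Counter(history)
    weights = [1 + counts[t] for t in TOPICS]
    return random.choices(TOPICS, weights=weights, k=k)

def simulate(rounds=200):
    history = []
    for _ in range(rounds):
        shown = recommend(history)
        # The user clicks one of the shown items, preferring familiar topics.
        counts = Counter(history)
        click_weights = [1 + counts[t] for t in shown]
        clicked = random.choices(shown, weights=click_weights, k=1)[0]
        history.append(clicked)
    return Counter(history)

if __name__ == "__main__":
    random.seed(42)  # for reproducibility; any seed shows the same dynamic
    final = simulate()
    top = final.most_common(3)
    print(f"Distinct topics clicked: {len(final)} of {len(TOPICS)}")
    print("Top 3 topics account for "
          f"{sum(c for _, c in top) / sum(final.values()):.0%} of clicks: {top}")
```

In a typical run, a few topics come to dominate the click history even though neither the recommender nor the user ever decides to narrow anything. The narrowing falls out of the interaction between two mild preferences, applied repeatedly.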
Understanding this matters for two reasons.
First, it means you can’t solve the problem by switching platforms. Every engagement-optimized platform will produce the same effect, because the optimization itself is the cause.
Second, it means the problem won’t be solved by “ethical algorithms” or “responsible AI,” because the basic dynamic — that people engage more with familiar, confirmatory content — is a feature of human psychology, not a bug in the technology.
You Are Your Own Best Filter Bubble
Here’s the part that nobody likes to hear: the algorithm is the smaller filter. You are the bigger one.
Every time you choose to click on one article rather than another, you’re filtering.
Every time you follow someone on social media, you’re filtering.
Every time you subscribe to a newsletter, join a community, or attend a conference, you’re filtering.
And your filtering is more aggressive and more biased than any algorithm, because it’s driven by the full force of your identity, your social group, your professional training, and your emotional responses.
The research supports this.
A 2020 study by Nyhan and colleagues, published in Nature, analyzed the web browsing behavior of 1.2 million Americans and found that the vast majority of people’s information diets were shaped primarily by their own choices rather than by algorithmic filtering.
People self-selected into partisan news sources, sought out like-minded commentators, and avoided content that challenged their views — all without any algorithmic assistance.
This makes sense if you think about it. Before the internet, filter bubbles were built from newspaper subscriptions, television viewing habits, social circles, and professional communities.
People in the 1960s didn’t need Facebook to live in information bubbles; they had their neighborhood, their church, their union or country club, and their preferred newspaper, all of which reinforced a particular view of the world.
Algorithms didn’t create the human tendency to seek out confirmatory information. They accelerated it. They made the filtering faster, more complete, and more invisible. But the engine driving the filtering is human nature, and that’s a harder problem to solve than adjusting an algorithm.
This has a practical implication that is crucial for the rest of this book: any information triage system you build must account for your own filtering biases, not just algorithmic ones.
If you optimize your information consumption for efficiency — reading only what’s most relevant, filtering out what seems unimportant, relying on trusted sources — you’ll tighten your bubble, not loosen it.
Efficiency and diversity are, to some degree, in tension, and managing that tension is one of the central challenges of information triage.
Ideological Bubbles vs. Informational Bubbles
When people hear “filter bubble,” they usually think of political polarization: liberals seeing only liberal news, conservatives seeing only conservative news.
This is the most visible and most discussed form of the phenomenon, but it’s arguably not the most important one.
There’s a different kind of filter bubble that’s less dramatic but more pervasive and, for most people reading this book, more directly relevant to their professional lives: the informational bubble.
An informational bubble isn’t about ideology; it’s about what you’re aware of. It’s the set of facts, frameworks, tools, methods, and developments that you encounter in your normal information diet, contrasted with the vast set of potentially relevant things you don’t encounter because they’re outside your usual channels.
Here’s an example.
A software developer who primarily reads Hacker News, follows tech Twitter, and subscribes to a few engineering newsletters will have a detailed picture of the tech industry’s current concerns: programming languages, AI developments, startup culture, software architecture.
They will have a much patchier picture of adjacent fields that could be deeply relevant to their work:
- Cognitive science (how users actually think)
- Organizational behavior (why teams succeed or fail)
- Regulatory environments (what laws might affect their product)
- Domain expertise in whatever field their software serves
This isn’t because they’ve been algorithmically shielded from those fields. It’s because their information channels — the communities they belong to, the people they follow, the publications they read — are organized around a discipline rather than a problem.
They see the world through the lens of software development, and the things that don’t pass through that lens become invisible.
The same dynamic applies to every professional community.
Doctors see the world through a medical lens and may miss the social and economic factors that drive health outcomes. Economists see the world through economic models and may miss the psychological and cultural factors that economic models abstract away. Lawyers see the world through legal frameworks and may miss the practical realities that legal reasoning can’t capture.
These informational bubbles are, in many ways, more dangerous than ideological ones, because they’re invisible to the people inside them.
If you’re in a political echo chamber, someone will eventually tell you. Your uncle at Thanksgiving, your college roommate on Facebook, the comment section of any news article — the existence of other political perspectives is hard to avoid entirely.
But if you’re in a professional informational bubble, the things you’re not seeing don’t announce themselves. You don’t know what you don’t know, and nothing in your environment prompts you to find out.
Professional Communities as Invisible Bubbles
Let’s spend some time on the specific ways professional communities create filter bubbles, because these are the bubbles most likely to affect the people reading this book, and they’re the ones least likely to be recognized.
Shared vocabulary as a filter.
Every professional community develops its own vocabulary — jargon, acronyms, shorthand, terms of art. This vocabulary serves a legitimate purpose (precision, efficiency) but also functions as a filter.
Content that uses the community’s vocabulary gets through; content that doesn’t is marked as “not for us” and filtered out.
A paper on organizational behavior might be deeply relevant to a software team leader, but if it’s written in the language of management science rather than the language of software engineering, the team leader will never see it and wouldn’t recognize its relevance if they did.
Citation networks as bubbles.
Academic fields are organized around citation networks: papers cite other papers, which creates clusters of related work. If you follow the citations, you stay within the cluster.
Ideas from outside the cluster have to fight their way in, which means they need a champion within the field who recognizes their relevance and translates them into the field’s vocabulary.
Many potentially valuable cross-domain connections are never made, simply because nobody happened to be standing at the intersection of the two relevant citation networks.
Conference circuits as echo chambers.
Professional conferences gather people with similar backgrounds, interests, and perspectives. The talks confirm and extend the community’s existing knowledge rather than challenging its foundations. The networking reinforces existing connections rather than creating new ones.
Conferences feel intellectually stimulating because everyone is talking about the latest developments, but “the latest developments” are all within the same paradigm.
Paradigm-challenging ideas don’t get conference talks; they get rejected by the program committee.
Hiring patterns as bubble maintenance.
Companies and teams tend to hire people with similar backgrounds, which perpetuates the informational bubble at the organizational level.
A team of engineers will keep hiring engineers. An economics department will keep hiring economists. The homogeneity of training and perspective that results isn’t just a diversity issue; it’s an information issue.
The team literally cannot see things that would be visible to someone with a different background.
Tool-driven worldviews.
The tools you use shape what you can see and what you think is important.
If your primary analytical tool is a spreadsheet, everything looks like it should be quantified. If your primary tool is a programming language, everything looks like it should be automated. If your primary tool is a legal framework, everything looks like it should be regulated.
The tool becomes a lens, and the lens creates a bubble.
Abraham Maslow said it better than I can: “If the only tool you have is a hammer, everything looks like a nail.” He was describing professional filter bubbles before the concept existed.
Epistemic Closure and What It Costs
There’s a philosophical concept that’s useful here: epistemic closure. In its informal usage (distinct from the technical epistemological meaning), it refers to a state where a community’s information environment becomes so self-contained that it can explain away any external challenge to its beliefs.
Epistemic closure doesn’t require censorship or deliberate suppression of dissent. It just requires a sufficiently rich internal ecosystem of sources, experts, arguments, and evidence that the community can always find support for its existing views within its own bubble.
When a challenge arrives from outside, the community has ready-made counter-arguments, alternative experts, and internal sources that rebut the challenge — not through deliberate conspiracy, but through the accumulated weight of a self-referential information ecosystem.
You’ve seen this in political contexts: every political community has its own experts, its own think tanks, its own media outlets, its own set of “well-established facts” that are contested or unknown outside the community.
But the same dynamic operates in professional and intellectual communities.
The Replication Crisis as a Case Study
Consider the replication crisis in psychology.
For decades, the psychology community had an internal information ecosystem — journals, conferences, textbook narratives, teaching traditions — that supported a set of findings that turned out to be unreliable.
The community wasn’t lying; it was operating within an information environment where the unreliable findings were repeatedly cited, taught, and reinforced, while the warning signs (low statistical power, publication bias, failure to replicate) were marginalized or explained away using internal community norms.
It took an influx of outsiders — statisticians who applied more rigorous methods, early-career researchers who were less invested in the existing findings, and scientists from other fields who imported different methodological standards — to break the epistemic closure.
And even then, the process was slow, contentious, and bitterly resisted by many insiders who had built careers on the now-questionable findings.
The 2008 Financial Crisis as a Case Study
The 2008 financial crisis was, in part, a failure of informational bubbles: the quantitative risk modelers, the mortgage originators, the rating agencies, and the regulators were each operating in their own informational bubble, and none of them could see the full picture that would have revealed the systemic risk.
The economists had models that said the housing market was fine.
The bankers had incentives that said the securities were profitable.
The regulators had frameworks that said the system was solvent.
Each bubble contained true-enough information; the catastrophe lived in the gaps between them.
The Core Cost
The cost of epistemic closure is that you can be internally consistent and externally wrong.
Your information environment makes sense. Your sources agree with each other. Your experts confirm your understanding.
And you’re still wrong, because the truth is outside the bubble and nothing inside the bubble is pointing at it.
This is the deepest danger of filter bubbles, and it has nothing to do with algorithms or social media. It’s a property of any sufficiently self-contained information ecosystem, and human beings have been building these ecosystems for as long as there have been communities of thought.
The Bubbles That Aren’t Political
Let’s make this concrete with some examples of filter bubbles that don’t map to the familiar left-right political axis. These are the bubbles most likely to affect your professional life, and they’re the ones you’re least likely to notice.
The Tech Industry Bubble
The technology industry has one of the most powerful filter bubbles in any professional community.
If you work in tech and consume tech-focused media, you inhabit a world where:
- Technology is the primary driver of social change (rather than one factor among many)
- Disruption is generally positive (rather than often destructive)
- Scale is a virtue (rather than sometimes a liability)
- Data-driven decision making is superior to other forms of judgment (rather than appropriate in some contexts and misleading in others)
- The latest framework, language, or paradigm is probably better than what it replaces (rather than being a lateral move with different tradeoffs)
- Most problems are fundamentally engineering problems (rather than social, political, or economic problems with engineering components)
None of these beliefs is entirely wrong, but none is entirely right either. They’re the implicit assumptions of a community, and they become invisible to people inside the community because everyone around them shares the same assumptions.
The tech bubble is also remarkably insular in its sources. A relatively small number of publications, podcasts, social media accounts, and community forums dominate the information diet of most tech workers.
Ideas circulate rapidly within this ecosystem and rarely make contact with perspectives from outside it. When they do — when a social scientist critiques a tech company’s practices, or a regulator proposes constraints — the internal community often dismisses the critique as coming from someone who “doesn’t understand technology.”
This dismissal is itself a symptom of the bubble.
Academic Field Silos
Academic disciplines are among the most thoroughly bubbled communities in existence. The specialization that makes academic research productive also creates profound informational isolation.
A researcher in computational linguistics and a researcher in theoretical syntax are both studying language. They attend different conferences, publish in different journals, cite different literatures, use different methods, and may hold mutually contradictory beliefs about fundamental questions in their shared domain.
Neither is wrong in the straightforward sense, but each has a partial picture that they mistake for the whole.
The problem is worse across disciplinary boundaries.
An economist studying healthcare and a public health researcher studying healthcare costs will have almost completely non-overlapping information environments, despite studying closely related questions.
The economist reads economics journals, attends economics conferences, and frames the problem in terms of incentives, markets, and efficiency. The public health researcher reads public health journals, attends public health conferences, and frames the problem in terms of epidemiology, access, and equity.
Each has crucial insights the other is missing, and the institutional structure of academia provides almost no mechanism for combining them.
Industry-Specific Groupthink
Every industry develops a conventional wisdom — a set of shared beliefs about what works, what matters, and what’s true.
This conventional wisdom is transmitted through industry publications, conference talks, consulting frameworks, and the hiring and promotion practices that select for people who share it.
Finance has its efficient market hypothesis (or its behavioral finance critique, depending on which sub-community you’re in). Management consulting has its portfolio of frameworks. Healthcare has its evidence-based medicine hierarchy. Education has its pedagogical theories.
Each of these represents a productive tradition of thought, but each also creates blind spots that are invisible from the inside.
The classic example is the management literature’s decades-long romance with “best practices.” The concept — that successful organizations have identifiable practices that can be isolated and replicated — seems obvious from inside the management community.
But it’s been powerfully challenged by researchers outside the community, who’ve pointed out that the studies purporting to identify best practices are riddled with survivorship bias, reverse causation, and halo effects.
The challenge hasn’t penetrated the management community’s filter bubble, where “best practices” remains a largely unquestioned concept.
Why “Just Read Diverse Sources” Is Insufficient
The standard advice for dealing with filter bubbles is to diversify your information diet: read sources you disagree with, follow people outside your usual circle, seek out perspectives from other disciplines.
This advice is correct in the sense that diversifying your information diet is better than not doing it.
It’s insufficient in several important ways.
You don’t know what you don’t know.
The most damaging filter bubbles are the ones you can’t see. If you’re a software engineer who doesn’t know about the relevant research in organizational psychology, you can’t “just read” organizational psychology, because you don’t know it’s relevant.
The unknown unknowns are, by definition, invisible to you. Telling someone to diversify their information diet is like telling someone to look for their blind spots by looking harder.
The whole point of a blind spot is that you can’t see it by looking.
Reading without understanding isn’t diversifying.
Casually reading a source from outside your field or perspective doesn’t give you access to the knowledge that source’s community has. If you read one paper on organizational psychology without the context of the field — its methods, its debates, its accumulated findings — you’ll either misunderstand it or dismiss it.
True intellectual diversity requires enough depth to actually comprehend and evaluate perspectives that differ from your own, and that depth takes time and effort that most people don’t have.
Exposure doesn’t equal updating.
Research on political communication consistently shows that merely exposing people to opposing views doesn’t change their minds and can sometimes backfire, making people more entrenched in their existing views.
A 2018 study by Christopher Bail and colleagues at Duke found that Twitter users who were exposed to opposing political views for a month became more extreme in their own views, not less.
The mechanism appears to be identity threat: encountering opposing views activates defensive reasoning rather than open-minded evaluation.
There’s reason to think the same dynamic operates in professional and intellectual bubbles. A software engineer who reads a critique of technology-solutionism might dismiss it as coming from someone who doesn’t understand technology, rather than engaging with the substance of the argument.
The exposure happened, but the updating didn’t.
The diversity of your sources is limited by the diversity of your comprehension.
You can only benefit from diverse sources if you can understand and evaluate them. This requires some baseline familiarity with the vocabulary, methods, and norms of the communities those sources come from.
Without that baseline, diverse sources just look like noise, and you’ll filter them out — not algorithmically, but cognitively.
Time constraints create an impossible tradeoff.
Given finite time for information consumption (which, as we discussed in Chapter 2, is quite limited), every hour spent reading outside your primary domain is an hour not spent going deeper in your primary domain.
There’s a real cost to diversification, and the benefits are uncertain and long-term. This means that even well-intentioned efforts to diversify tend to be abandoned when deadlines loom and the immediate demands of one’s primary work reassert themselves.
What Actually Works
If “just read diverse sources” is insufficient, what does work?
The honest answer is that there’s no easy fix, but there are approaches that are more effective than others.
Build bridges, not breadth.
Rather than trying to read broadly across many fields, identify one or two adjacent fields that are most likely to contain insights relevant to your work. Then invest enough time to develop basic literacy in those fields — enough to understand the vocabulary, the methods, and the major debates.
This is a bigger investment than casual reading but produces much greater returns, because it gives you the context needed to actually benefit from cross-disciplinary exposure.
Seek out translators.
In every domain, there are people who specialize in translating insights across field boundaries. Popular science writers, cross-disciplinary researchers, consultants who work across industries — these people are doing the hard work of making ideas from one community accessible to another.
They’re imperfect filters (every translator introduces biases and simplifications), but they’re far more useful than raw exposure to unfamiliar sources.
Use disagreement productively.
Instead of just reading sources you disagree with, seek out the strongest possible version of views that differ from yours. This is the principle of charitable interpretation, sometimes called “steelmanning.”
Find the smartest, most thoughtful advocate of a position you disagree with and engage with their strongest arguments, not a strawman version.
This is hard. It’s cognitively expensive. And it’s far more valuable than reading a dozen weak versions of opposing views.
Conduct periodic audits of your information diet.
Once a quarter, take an honest inventory of where your information comes from:
- What publications do you read?
- What people do you follow?
- What communities do you belong to?
- What perspectives are overrepresented?
- What perspectives are absent?
This audit won’t automatically fix the problem, but awareness is a prerequisite for action.
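If you keep even a rough list of your regular sources, parts of the audit can be mechanized. The sketch below is one way to do it in Python; the CSV file, its column names, and the "expected fields" set are hypothetical stand-ins for whatever inventory you actually keep.

```python
import csv
from collections import Counter

# A minimal audit sketch. It assumes a hand-maintained CSV of your regular
# sources with hypothetical columns: name, type, field, lean. Example rows:
#   Hacker News,forum,software,industry
#   Morning policy newsletter,newsletter,economics,centrist

def audit(path: str = "sources.csv") -> None:
    with open(path, newline="", encoding="utf-8") as f:
        rows = list(csv.DictReader(f, fieldnames=["name", "type", "field", "lean"]))

    # Show how your sources are distributed along each dimension.
    for column in ("type", "field", "lean"):
        counts = Counter(row[column] for row in rows)
        total = sum(counts.values())
        print(f"\n{column}:")
        for value, n in counts.most_common():
            print(f"  {value:<24} {n:>3}  ({n / total:.0%})")

    # The counts show what is overrepresented; the gaps are what they cannot show.
    # List the fields you think should matter and check which have no source at all.
    expected = {"software", "organizational behavior", "regulation", "design"}
    covered = {row["field"] for row in rows}
    missing = expected - covered
    print("\nFields with no sources at all:", ", ".join(sorted(missing)) or "none")

if __name__ == "__main__":
    audit()
```

The numbers it prints won't tell you what to read; they just make the skew visible, which is the point of the audit.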
Deliberately cultivate relationships outside your professional community.
This isn’t information consumption advice; it’s life advice that happens to have information benefits.
Having genuine friendships and professional relationships with people who have different backgrounds, different training, and different perspectives gives you access to their information environment in a way that no amount of reading can replicate.
When a friend who’s a nurse tells you about how healthcare actually works, you learn things you’d never find in the publications you normally read. When a neighbor who’s an electrician explains building codes, you understand regulation differently than you would from a policy paper.
The most valuable information channel is often a person, not a publication.
Use AI as a bubble-detection tool.
This is one of the more promising applications of AI for information triage, and we’ll discuss it in detail later in the book.
AI systems can analyze your information diet and identify gaps, suggest sources you wouldn’t normally encounter, translate concepts between fields, and flag when your understanding of a topic might be incomplete or skewed.
They’re not perfect at this — AI systems have their own biases — but they can see patterns in your consumption that are invisible to you.
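As a concrete example of what "AI as a bubble-detection tool" can look like today, here is a minimal sketch using the OpenAI Python client. Any chat-capable LLM API would work the same way; the model name, prompt, and source list are placeholders, and the suggestions it returns are leads to evaluate, not an authority on what you're missing.

```python
from openai import OpenAI  # assumes the openai>=1.0 Python client is installed

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def suggest_gaps(sources: list[str]) -> str:
    """Ask a model to flag fields and perspectives missing from a source list."""
    prompt = (
        "Here is a list of publications, feeds, and communities I rely on:\n"
        + "\n".join(f"- {s}" for s in sources)
        + "\n\nWhat adjacent fields, perspectives, or kinds of sources appear "
        "to be missing? List specific gaps and, for each, one or two concrete "
        "starting points I could evaluate myself."
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; use whatever model you have access to
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(suggest_gaps(["Hacker News", "a few engineering newsletters",
                        "tech Twitter/X", "an AI research podcast"]))
```

The same pattern extends naturally: feed it the source inventory from your quarterly audit, or ask it to restate a paper from an unfamiliar field in the vocabulary of your own.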
The Uncomfortable Implication
There’s an uncomfortable implication lurking in this chapter that I should make explicit rather than leaving it submerged.
If filter bubbles are primarily created by your own behavior rather than by algorithms, then the solution has to involve changing your behavior, not just your tools.
And changing behavior — especially behavior driven by identity, social belonging, and cognitive comfort — is genuinely hard.
Most of us don’t want to read things that make us uncomfortable.
We don’t want to engage seriously with perspectives that threaten our professional identity or challenge the foundations of our expertise.
We don’t want to invest the time required to develop literacy in an adjacent field when we could be going deeper in our own.
These aren’t character flaws; they’re rational responses to real constraints.
But the cost of not doing these things is significant. The cost is that your understanding of the world — your ability to make good decisions, to anticipate consequences, to see opportunities that others miss — is bounded by the walls of a bubble you didn’t choose and can’t fully see.
The tools and systems we’ll build in Parts II and III of this book are designed to make this easier. They can’t eliminate the discomfort of engaging with unfamiliar perspectives, but they can:
- Reduce the time and effort required
- Automate some of the bubble-detection
- Create workflows that systematically introduce diversity into your information diet without requiring constant conscious effort
- Translate content from unfamiliar fields into terms you can evaluate
But the tools only work if you’re willing to be surprised.
If you approach information triage with the goal of finding more of what you already believe, you’ll build a more efficient bubble.
If you approach it with genuine curiosity about what you might be missing, you’ll build something much more valuable: a system that actively works against your natural tendency to filter, that shows you things you wouldn’t seek out on your own, and that helps you develop the kind of broad-but-deep understanding that filter bubbles make so difficult.
That’s the goal. It’s not easy. But it’s more achievable now than at any previous point in history, because the same AI technologies that can tighten bubbles can also be used to burst them — if you know how to use them.
The rest of this book is about how.
Key Takeaways
- Eli Pariser’s filter bubble concept is real and important, but the popular version overstates the role of algorithms and understates the role of human behavior in creating bubbles.
- Filter bubbles emerge from optimization for engagement, not from conspiracy. Any platform that optimizes for engagement will produce filter effects as a natural byproduct.
- Your own choices — what you click, who you follow, what communities you join — create a larger filter than any algorithm. This means the solution must involve behavior change, not just better technology.
- Informational bubbles (what you’re aware of) are often more consequential than ideological bubbles (what you believe) and are much harder to detect.
- Professional communities create powerful filter bubbles through shared vocabulary, citation networks, conference circuits, hiring patterns, and tool-driven worldviews.
- Epistemic closure allows communities to be internally consistent while externally wrong — a state that’s stable and self-reinforcing.
- “Just read diverse sources” is insufficient because you don’t know what you don’t know, reading without understanding isn’t diversifying, exposure doesn’t equal updating, and time constraints make broad diversification unsustainable.
- More effective approaches include building bridges to adjacent fields, seeking out translators, steelmanning opposing views, auditing your information diet, cultivating diverse relationships, and using AI as a bubble-detection tool.