
The Narrowing of Serendipity

In 1928, Alexander Fleming returned from vacation to find that a mold had contaminated one of his petri dishes and killed the bacteria around it. He had not been looking for an antibiotic. He was studying staphylococci.

The discovery of penicillin was serendipitous — an accident that changed the course of human history because someone was paying attention to something they had not planned to see.

In 1941, George de Mestral went for a walk with his dog and noticed burrs clinging to the dog’s fur. He looked at them under a microscope, saw tiny hooks, and spent the next decade developing Velcro. He had not been looking for a fastening system. He was walking his dog.

In 1965, Arno Penzias and Robert Wilson pointed a radio antenna at the sky and heard a persistent hiss they could not eliminate. It was not a malfunction. It was the cosmic microwave background — the afterglow of the Big Bang. They had been trying to detect radio signals, not prove the origin of the universe.

These stories are so familiar they feel like clichés. But they encode a deep truth about how knowledge advances: some of the most important discoveries happen when people encounter information they were not looking for, in contexts they did not expect, and connect it to problems they were not consciously trying to solve.

Now consider what happens when your information environment is curated by an algorithm whose entire purpose is to show you exactly what you are looking for.

What Serendipity Actually Is

Before we mourn serendipity’s decline, we should be precise about what we mean by it.

Serendipity is not randomness. If you randomly sample information from the universe of all possible information, you will mostly get noise. A random page from a random book in a random library is unlikely to be useful.

Serendipity is not the absence of curation — it is a different kind of curation, one that is loose enough to allow unexpected connections but structured enough to keep you in the neighborhood of useful information.

The word itself was coined by Horace Walpole in 1754, inspired by a Persian fairy tale called “The Three Princes of Serendip.” The princes were “always making discoveries, by accidents and sagacity, of things they were not in quest of.”

The important phrase is “accidents and sagacity” — both elements are required. The accident provides the unexpected encounter. The sagacity provides the ability to recognize its significance.

Serendipity sits in a sweet spot between two failure modes.

Too much curation, and you only see what you already expect — no accidents, no unexpected connections, no chance of stumbling onto something transformative.

Too little curation, and you drown in noise — the accidents are too random to be meaningful, and even sagacity cannot extract signal from pure chaos.

The physical world, for most of human history, occupied this sweet spot naturally. A library organized by the Dewey Decimal System put related books near each other, but you had to physically walk past unrelated sections to reach the one you wanted.

A newspaper editor curated the content, but the physical layout of the page meant your eyes would scan past stories you did not specifically seek. A conference was organized around a theme, but the hallway conversations introduced you to people and ideas outside your specific session.

These environments were serendipity engines. Not by design (the Dewey Decimal System was not trying to promote accidental discovery) but by the structural properties of physical information spaces.

When information has physical form, accessing the thing you want requires passing through the space where unexpected things live.

Digital information has no such requirement. When you search for exactly what you want and get exactly what you searched for, there is no space between your intention and its fulfillment.

The digital environment can be perfectly curated, perfectly responsive, perfectly optimized — and in being perfect, it eliminates the imperfections where serendipity lives.

The Physical World’s Serendipity Infrastructure

It is worth cataloging the serendipity infrastructure that the physical world provided, because we lost it so gradually that most people do not realize it existed.

Bookstores and libraries. The physical act of browsing shelves exposed you to books you did not know existed. You walked into a bookstore looking for a specific title and walked out with three unrelated books that caught your eye from an adjacent shelf.

The spatial organization of knowledge meant that related-but-unexpected content was physically adjacent to the content you sought. I cannot count the number of important books I have read because they happened to be shelved next to the book I was actually looking for.

Online bookstores eliminated this. Amazon’s “customers who bought this also bought” feature is the algorithmic replacement for the adjacent shelf, but it operates on similarity rather than spatial proximity.

It shows you books that are like the book you want, not books that happen to be nearby in a classification system. The adjacent shelf might have had a book from a completely different field that happened to be classified nearby. Amazon’s algorithm will never show you that.

Newspapers. The physical newspaper was a masterpiece of serendipitous design, though no one thought of it that way.

You opened the paper to read about politics and your eyes scanned past a science story, a business feature, and an obituary that mentioned someone who lived a remarkable life. The physical layout forced broad exposure. You could not read the front page without also seeing the sidebar. You could not turn to the sports section without passing through the international news.

Digital news gives you exactly the section you want. Apple News learns that you like technology and politics and stops showing you arts, science, and business.

Your news consumption becomes a narrow channel where the physical newspaper was a broad river. The news you do not know you need never appears on your screen, because the algorithm has learned that you do not engage with it.

Academic conferences. The most valuable part of any conference has always been the hallway track — the informal conversations between sessions, the random encounters at the coffee station, the dinner with people from different sessions.

The formal program brings together people who share a specific interest. The hallway introduces those people to each other’s other interests, to questions from adjacent fields, to perspectives they would never encounter in their curated reading.

Virtual conferences during the pandemic eliminated the hallway track, and the loss was devastating. Zoom sessions were efficient — no travel, no jet lag, no overpriced hotel coffee — but the serendipitous encounters disappeared entirely.

People attended exactly the sessions on their calendar and spoke to exactly the people they already knew. The cross-pollination stopped.

Physical workplaces. The water cooler conversation is a cliché because it is real. Bumping into a colleague from a different department and learning what they are working on is how organizational knowledge spreads laterally.

The physical office, with its shared spaces and forced encounters, provided a serendipity infrastructure that remote work largely eliminates. (This is not an argument against remote work, which has enormous benefits. It is an argument for deliberately replacing the serendipity that remote work removes.)

City streets. Jane Jacobs wrote about the serendipity of urban sidewalks in the 1960s — the way that walking through a diverse neighborhood exposes you to businesses, people, activities, and ideas you did not seek out.

The suburban car culture that replaced walkable urbanism also replaced this serendipity with point-to-point transportation: you drive from your house to your destination and see nothing in between except other cars.

The pattern across all these examples is the same: physical environments impose a cost of access that includes incidental exposure to unexpected information.

You cannot get to what you want without passing through the space where serendipity lives. Digital environments remove that cost, and serendipity disappears as a side effect of efficiency.

The “Adjacent Possible” and Why It Matters

Stuart Kauffman coined the term “adjacent possible” in the context of biological evolution. It describes the set of things that could exist next, given what exists now.

A single-celled organism cannot evolve into an elephant in one step, but it can evolve into a slightly different single-celled organism. The adjacent possible is the frontier of what is reachable from where you are.

Steven Johnson adapted the concept for innovation: new ideas emerge from the adjacent possible of existing ideas.

The printing press was in the adjacent possible of movable type, wine-press technology, and paper manufacturing — it combined elements that already existed. Television was in the adjacent possible of radio technology, cathode ray tubes, and film. The internet was in the adjacent possible of packet switching, time-sharing computers, and existing telecommunications infrastructure.

The critical insight is that the adjacent possible is larger than any one person can see. You know some of the elements that exist. I know some of them. The person in the next department knows others.

Innovation often happens when someone encounters an element they did not know existed and connects it to a problem they have been working on. The connection is only possible because of the accidental encounter.

Algorithmic curation narrows the adjacent possible by limiting your exposure to known interests. If the algorithm only shows you content related to what you have engaged with before, you see only the elements you already know about.

The novel elements — the ones that could combine with your existing knowledge to produce something new — are filtered away as irrelevant.

This is not a theoretical concern. The rate of interdisciplinary innovation depends on people from different fields encountering each other’s ideas. When information curation silos each field into its own algorithmic bubble, the cross-field encounters that drive innovation become less frequent.

Nobody notices, because you do not miss discoveries you never made.

But the cumulative effect is an innovation environment that is incrementally less creative, less surprising, and less capable of the paradigm-shifting breakthroughs that come from unexpected connections.

A researcher in computational biology might revolutionize their field by encountering a concept from network theory. But if their information diet is curated to show them computational biology papers, they will never encounter the network theory concept.

It is in their adjacent possible — it could combine with their existing knowledge to produce something new — but the algorithm has made it invisible.

The adjacent possible is the space of potential discoveries. Algorithmic curation is progressively closing that space, one personalization decision at a time.

The “You Might Also Like” Problem

Recommendation systems are designed to show you more of what you already like. This sounds helpful. It is helpful, for entertainment.

If you enjoyed a mystery novel, you will probably enjoy similar mystery novels, and Amazon’s recommendation helps you find them.

But for professional and intellectual growth, “you might also like” is a trap. Growth requires encountering things that are different from what you already know, not things that are similar to what you already know.

The recommendation system that perfectly predicts your preferences is the recommendation system that never expands them.

The mathematical reason is straightforward. Recommendation algorithms optimize for predicted engagement. Engagement prediction is based on similarity to past engagement. So the algorithm recommends content that is maximally similar to content you have engaged with before.

This creates a shrinking radius of recommendations: each round of recommendations is similar to the last round, which was similar to the round before that, and the center of the recommendation space is wherever your initial preferences happened to be.

Over time, your recommendations converge to a point — a single, narrow band of content that the algorithm has determined is “you.”
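The convergence is easy to demonstrate with a toy simulation. Everything here is illustrative, not any real platform's internals: a hypothetical catalog embedded in a two-dimensional taste space, a user profile that drifts toward whatever was just consumed, and a recommender that is nothing but a nearest-neighbor ranking.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 500 catalog items embedded in a 2-D taste space,
# plus a user profile vector that starts at a random point.
items = rng.uniform(-1, 1, size=(500, 2))
profile = rng.uniform(-1, 1, size=2)

def recommend(profile, items, k=10):
    """Pure similarity ranking: return the indices of the k items
    nearest the user's profile."""
    dists = np.linalg.norm(items - profile, axis=1)
    return np.argsort(dists)[:k]

seen = []
for _ in range(30):
    idx = recommend(profile, items)
    seen.append(set(idx))
    # The profile drifts toward whatever was just consumed, so the next
    # round's "nearest items" are drawn from an ever-tighter cluster.
    profile = 0.8 * profile + 0.2 * items[idx].mean(axis=0)

distinct = len(set().union(*seen))
print(f"{len(seen) * 10} recommendation slots, {distinct} distinct items")
```

Thirty rounds offer three hundred recommendation slots, but they end up filled from only a few dozen distinct items: the profile settles into a small neighborhood of the catalog and stays there. The other 450-odd items might as well not exist.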

Your Spotify Discover Weekly sounds increasingly like your existing playlists. Your YouTube recommendations become a hall of mirrors. Your Amazon suggestions cluster around the same narrow product categories. The algorithm has learned your preferences with exquisite precision and is now serving them back to you with exquisite fidelity.

What it is not doing is expanding those preferences.

It has no incentive to show you something you might not like, because “might not like” translates to “lower predicted engagement,” which translates to “worse algorithm performance.” Showing you something from outside your preference cluster is, from the algorithm’s perspective, a bad recommendation — even if encountering that content would expand your thinking, introduce you to a new field, or provide the unexpected connection that leads to your best idea.

The contrast with human recommenders is instructive. When a friend recommends a book, they might say, “This is not your usual thing, but I think you would find it fascinating.”

They are making a recommendation that an algorithm would never make — one based on a model of your intellectual capacity and growth potential, not just your past consumption.

The friend recommends for who you could become. The algorithm recommends for who you have been.

The Cost to Specialized Professionals

The serendipity problem is most acute for specialists — people whose professional value depends on deep expertise in a specific domain.

The better you are at your specialty, the more aggressively the algorithm optimizes your information diet for that specialty, and the more thoroughly it eliminates the cross-domain inputs that could transform your work.

Consider a machine learning engineer. Their search history, reading habits, and social media follows all signal deep interest in ML. The algorithm obliges: more ML papers, more ML blog posts, more ML conference talks.

The engineer’s information diet becomes a pure, uncut stream of machine learning content. Sounds ideal, right?

Except some of the most productive developments in ML have come from outside ML.

Attention mechanisms were inspired by cognitive science research on human visual attention. Generative adversarial networks borrowed the concept of adversarial dynamics from game theory. Reinforcement learning techniques drew on behavioral psychology. Graph neural networks borrowed from spectral graph theory.

The field’s most creative advances came from people who were steeped in ML but also exposed to ideas from outside it.

The specialist whose information diet has been algorithmically purified to contain only their specialty is cut off from these cross-pollination opportunities. They can still deepen their expertise within the field — and the algorithm ensures they will — but they lose the breadth that makes depth productive.

They know everything about the hammer but have never encountered a wrench.

This is the paradox of algorithmic curation for specialists: the better the curation, the worse the outcome.

A perfect ML-only feed makes you a perfect ML-only thinker. An imperfect feed — one that occasionally shows you ecology, economics, art history, or structural engineering — makes you a more creative ML thinker, because creativity depends on having diverse inputs to combine.

The same pattern applies across professions. A doctor who only reads medical content misses the operations research that could optimize their clinic’s scheduling. A lawyer who only reads legal content misses the behavioral economics that could improve their negotiation strategy. A product manager who only reads product content misses the supply chain research that could inform their operations.

The specialist needs their specialty — but they also need the random walk through adjacent fields that algorithmic curation has eliminated.

Designing for Serendipity

If serendipity is valuable and algorithmic curation is destroying it, the obvious question is: why not design algorithms that promote serendipity instead of eliminating it?

Some researchers and platforms have tried. The concept is called “serendipity-oriented recommendation” or “diversity-aware recommendation,” and the basic idea is to inject controlled randomness or intentional diversity into recommendation streams.

Instead of showing you the ten items most similar to your past behavior, show you eight similar items and two that are deliberately different — a random article from a different domain, a perspective you have not encountered, a source outside your usual network.
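That "eight similar, two different" rule can be sketched in a few lines. This is a minimal illustration, not any platform's actual algorithm; the scores, the half-catalog cutoff for "different," and the slot counts are all assumptions chosen for clarity.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical similarity scores for 200 candidate items: higher means
# "more like what this user has engaged with before."
scores = rng.random(200)

def diversified_top_k(scores, k=10, n_wild=2, rng=rng):
    """Fill most slots by similarity, then reserve a few for items
    drawn at random from the unfamiliar half of the ranking."""
    ranked = np.argsort(scores)[::-1]       # best-first by similarity
    similar = ranked[: k - n_wild]          # the familiar picks
    pool = ranked[len(ranked) // 2 :]       # bottom half: unfamiliar
    wild = rng.choice(pool, size=n_wild, replace=False)
    return np.concatenate([similar, wild])

recs = diversified_top_k(scores)
```

Even this toy version surfaces the design questions that make real implementations hard: how different should "deliberately different" be, and how many wild-card slots can a platform afford before users complain?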

The idea is sound. The implementation is fiendishly difficult, for several reasons.

Serendipity is hard to measure. If you show a user an unexpected item and they ignore it, was the recommendation bad, or did the user miss something valuable?

You cannot measure the counterfactual — what would have happened if the user had engaged with the unexpected item. Engagement metrics are easy to measure but, as we have discussed, do not capture value.

Serendipity creates value precisely because it leads to outcomes that were not predictable in advance, which makes it resistant to the kind of metric-driven optimization that recommendation systems excel at.

Users say they want serendipity but behave otherwise. In surveys, users express interest in discovering new things. In practice, they click on the familiar.

Optimizing for stated preferences (novelty, diversity) conflicts with optimizing for revealed preferences (similarity, comfort). Most platforms optimize for revealed preferences because those are what the engagement metrics capture.

The line between serendipity and noise is subjective. A recommendation that feels serendipitous to one user feels random and irrelevant to another.

The ML engineer who encounters a cognitive science paper and has an insight feels the thrill of serendipity. Their colleague who encounters the same paper and has no idea what to do with it feels that the algorithm is broken.

Serendipity requires the recipient’s sagacity — their ability to recognize and use the unexpected — and this varies enormously between individuals and between contexts.

Platforms have no incentive to optimize for serendipity. Serendipity, when it works, creates value for the user. But it does not reliably create value for the platform.

The platform monetizes engagement. Serendipitous recommendations reduce short-term engagement (because the user is less likely to click on something unfamiliar) even if they increase long-term value.

No publicly-traded company will sacrifice this quarter’s engagement metrics for the possibility that some users might have transformative insights next year.

So serendipity-oriented design remains a research topic rather than a product feature. The platforms that would benefit most from promoting serendipity — professional information tools, academic databases, news aggregators — have the least incentive to do so, because their business models reward engagement, not insight.

The Difference Between Noise and Productive Randomness

Not all randomness is serendipitous. A random page from a phone book is not going to give you a breakthrough insight.

The challenge of designing for serendipity is introducing randomness that is productive — unexpected but connectable, unfamiliar but relevant to something you care about.

Productive randomness has a few characteristics that distinguish it from noise.

It is from outside your domain but connected to your problems. The cognitive science paper that inspires the ML engineer is productive because it addresses the same underlying problem (how to allocate attention) from a different perspective.

A random article about cat grooming would not have the same effect (usually). The randomness needs to be cross-domain, not cross-universe.

It is from a credible source. Serendipitous discoveries work when the unexpected content is trustworthy and substantive. A random blog post by someone with no expertise is noise. A peer-reviewed paper from an adjacent field is potential serendipity.

The quality of the unexpected input matters.

It requires a prepared mind. Louis Pasteur’s observation that “chance favors the prepared mind” applies precisely here. Serendipity is not just encountering unexpected information — it is encountering it while having the background knowledge to recognize its relevance.

This means that serendipity is most productive for people who are already deeply knowledgeable in their own field, because they have the mental framework needed to connect the unexpected input to their existing problems.

It arrives in a context that supports reflection. A serendipitous encounter while scrolling Twitter at high speed is less likely to produce insight than the same encounter while browsing a bookshelf at leisure.

Serendipity requires cognitive space — the mental bandwidth to notice the unexpected thing, hold it in mind, and explore its connections. High-speed, algorithmically-optimized content feeds work against this by encouraging rapid consumption and immediate judgment.

Understanding these characteristics helps distinguish strategies that might actually increase productive serendipity from strategies that just add random noise to your information diet.

Strategies for Reintroducing Controlled Randomness

Since the platforms will not do it for you, you have to engineer your own serendipity.

Here are strategies that actual humans have used, with actual results, in actual professional contexts. None of them require giving up the efficiency of digital tools. They require supplementing that efficiency with deliberate encounters beyond the algorithm’s reach.

The random journal strategy. Once a month, go to a university library website, find a database of academic journals, and read the table of contents of a journal in a field unrelated to yours.

Not the full articles — just the titles and abstracts. You are scanning for problems and approaches that rhyme with your own.

A civil engineer might scan a neuroscience journal and notice that network connectivity patterns look similar to traffic flow patterns. A product designer might scan an epidemiology journal and notice that disease transmission models look similar to feature adoption models.

Most months, nothing comes of it. Occasionally, something transforms a project.

The controlled-follow strategy. On social media, deliberately follow three to five people who work in fields unrelated to yours but who seem thoughtful and interesting.

Not influencers — practitioners. A soil scientist, a stage lighting designer, a medieval historian, a logistics analyst.

Their posts will occasionally disrupt your algorithmically-curated feed with content you did not expect. Most of it will not be relevant. Some of it will create the unexpected connections that pure-domain feeds eliminate.

The bookstore strategy. Physical bookstores still exist, and they remain serendipity engines. Visit one with no specific purchase in mind.

Browse sections you would not visit online. Pick up books based on their covers, their titles, or their physical proximity to something else that caught your eye.

The bookstore’s spatial organization provides exactly the incidental exposure that digital browsing eliminates. Yes, I am recommending you leave the house. The algorithm cannot follow you to a bookshelf.

The cross-team conversation strategy. In organizations, serendipity often comes from talking to people outside your immediate team.

Have lunch with someone from a different department. Attend a meeting you were not invited to (ask first, obviously). Join a Slack channel for a different project.

The organizational equivalent of the hallway track has to be deliberately created in remote-work environments, because it does not happen naturally when everyone is in their own curated digital space.

The historical strategy. Read historical accounts of how ideas developed in your field. Not textbook histories that present the clean, linear narrative, but messy, detailed accounts that show the wrong turns, the accidents, and the unexpected influences.

You will discover that many of the foundational ideas in your field came from serendipitous cross-pollination, and the specific cross-pollination paths will suggest analogies you might pursue today.

The inverse search strategy. After completing a search for something specific, do a search for something tangentially related but in a different field.

If you just searched for “database indexing strategies,” follow it with a search for “library cataloging systems” or “warehouse inventory organization.”

The algorithmic connection between these topics is weak (different user populations, different engagement patterns), but the conceptual connection might be strong. You are manually creating the adjacent-possible exposure that the algorithm refuses to provide.

The deliberate subscription strategy. Subscribe to one newsletter, podcast, or blog that is genuinely outside your domain. Not adjacent to your domain — actually outside it.

If you work in tech, subscribe to something about architecture, agriculture, or astrophysics. Commit to reading or listening for at least a month before evaluating whether it is useful.

The first few weeks will feel like wasted time. That feeling is the discomfort of encountering genuinely new information, and it is exactly the sensation that algorithmic curation has trained you to avoid.

The question-first strategy. Instead of searching for answers, start by articulating questions.

“How do other fields solve the problem of scaling human review processes?” “What does resilience look like in biological systems?” “How did pre-digital organizations handle information overload?”

Starting with the question rather than the search forces you to think about the abstract shape of your problem, which makes it easier to recognize solutions from unexpected domains.

The Innovation Imperative

The narrowing of serendipity is not just a personal inconvenience. It is an innovation problem at the civilizational level.

Major innovations overwhelmingly come from the combination of ideas across domains.

The transistor combined solid-state physics with electrical engineering. CRISPR combined microbiology with genetics. The internet combined computer science with telecommunications.

In each case, someone had to encounter an idea from outside their primary domain and recognize its relevance to their work. That encounter was either serendipitous (they stumbled onto it) or facilitated by an environment that promoted cross-domain exposure (interdisciplinary conferences, university departments with shared hallways, general-interest scientific journals).

If algorithmic curation progressively eliminates cross-domain exposure, the rate of combinatorial innovation will decline.

Not suddenly — the people who already have broad knowledge will continue to produce innovative combinations. But the next generation, whose intellectual development happens entirely within algorithmically-curated environments, will have narrower inputs and therefore narrower outputs.

The innovation frontier will contract, not because anyone is less intelligent, but because the information infrastructure that feeds intelligence has become too efficient at giving people exactly what they want and too inefficient at giving them what they do not yet know they need.

This is difficult to measure, because you cannot count innovations that did not happen. You cannot survey people about connections they did not make.

The evidence is necessarily indirect: studies showing that interdisciplinary research produces higher-impact findings, studies showing that diverse teams outperform homogeneous ones, studies showing that exposure to diverse perspectives improves creative problem-solving.

All of these findings point in the same direction: the breadth of inputs matters, and algorithmic curation is narrowing those inputs.

The counter-argument is that the internet provides access to more information than any previous technology. Anyone can read anything. The diversity is there; the algorithm just helps you find what you need.

This is technically true and practically misleading. Access is not the same as exposure.

The information exists, but if the curation layer makes it invisible, its existence is academic. A book in a library you never visit provides no serendipity. A webpage in a search result you never see provides no serendipity.

The availability of information is a necessary condition for serendipity, not a sufficient one.

What We Are Optimizing Away

Let me put this in the starkest terms I can.

Every time a recommendation algorithm learns your preferences more precisely, it becomes slightly better at giving you what you want and slightly worse at giving you what you need but do not yet know you need.

Every time a search engine ranks results more relevantly, it becomes slightly better at answering your question and slightly worse at exposing you to the questions you should be asking.

Every time a news feed personalizes more accurately, it becomes slightly better at showing you the news you care about and slightly worse at showing you the news you should care about.

The trend line is clear: more personalization, more optimization, more precision. And with each increment of precision, a corresponding decrement of serendipity.

We are optimizing our information environments for efficiency, and the cost of that efficiency is the elimination of the productive inefficiency where discoveries live.

This is not a call to abandon personalization or return to the pre-internet information landscape. That landscape had its own problems — it was slow, expensive, geographically constrained, and deeply unequal in who had access to what.

The algorithmic curation that is narrowing serendipity has also democratized access to information in ways that are genuinely transformative and worth preserving.

The call is for balance. For recognizing that efficiency and serendipity are in tension, and that a fully-optimized information environment is not a fully-productive one.

For building personal practices that reintroduce the controlled randomness that algorithms have eliminated.

For designing organizations that create space for unexpected encounters.

For evaluating information tools not just on their precision but on their capacity for surprise.

The firehose of information that this book is about learning to manage is not just a volume problem. It is also a diversity problem.

You can drown in a river of the same water flowing past you over and over, each wave algorithmically selected to be maximally similar to the last. The solution is not less water — it is different water.

Water from tributaries you did not know existed, carrying sediment from landscapes you have never visited, eroding assumptions you did not know you held.

Serendipity is not a luxury. It is infrastructure — the infrastructure of insight, of creativity, of the prepared mind encountering the unexpected and recognizing it as important.

We dismantled that infrastructure without noticing, because the thing that replaced it was so much more efficient. Now the task is to rebuild it, deliberately, within the optimized environments we inhabit.

The algorithm will not help. It has other priorities.

You have to go looking for what you are not looking for. And yes, that is exactly as paradoxical as it sounds.