Mental Ruts, Fixation, and Einstellung
In 1942, Abraham Luchins sat down with a group of research participants and some imaginary water jars, and demonstrated something about the human mind that should, by rights, keep us all up at night.
The experiment was elegant in its simplicity. Participants were given problems that involved measuring out a specific quantity of water using three jars of known capacities. The first five problems all had the same solution: fill jar B, pour out enough to fill jar A once, then pour out enough to fill jar C twice. Mathematically: B - A - 2C. The problems were designed so that this method was the only efficient approach.
Then came the critical trials. Problems six and seven could be solved either by the now-familiar B - A - 2C method or by a dramatically simpler method — just A - C, or in some cases A + C. Two jars instead of three. One or two operations instead of four.
The results were striking. Among participants who had been trained on the first five problems, 83% used the complex B - A - 2C method on problem six, even though A - C was staring them in the face. Many participants, when presented with a problem that could only be solved by the simple method (B - A - 2C did not work), failed entirely. They could not see the simple solution because the complex solution occupied their entire mental field of vision.
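The arithmetic of the two trial types is easy to check directly. The sketch below uses jar capacities chosen to match the structure described above — a training-style problem where only B - A - 2C works, a critical trial where both methods work, and an extinction trial where only A - C works. The specific numbers are illustrative assumptions, not a claim about Luchins' exact stimuli.

```python
def complex_method(a, b, c):
    """The trained routine: fill B, pour off A once and C twice."""
    return b - a - 2 * c

def simple_method(a, c):
    """The direct routine: fill A, pour off C once."""
    return a - c

# Critical trial (illustrative capacities): both methods reach the target.
a, b, c, target = 23, 49, 3, 20
assert complex_method(a, b, c) == target  # 49 - 23 - 2*3 = 20
assert simple_method(a, c) == target      # 23 - 3 = 20

# Extinction trial (illustrative capacities): the trained routine fails,
# the simple one succeeds. This is the problem most trained subjects missed.
a, b, c, target = 28, 76, 3, 25
assert complex_method(a, b, c) != target  # 76 - 28 - 2*3 = 42, not 25
assert simple_method(a, c) == target      # 28 - 3 = 25
```

Nothing in the arithmetic is hard; that is the point. The failure is perceptual, not computational.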
Luchins called this Einstellung — a German word meaning “setting” or “attitude,” but carrying connotations of a fixed orientation, a mental posture that has locked into position. The term is precise. It is not that participants were confused about the mathematics. It is not that they lacked the ability to see the simple solution. It is that their minds had been set — oriented toward a particular approach with such thoroughness that alternatives were not merely unlikely but literally invisible.
Here is the detail that should trouble you most: when Luchins ran the experiment without the training problems — giving participants only the critical trials — almost everyone found the simple solution immediately. The solution was obvious. A child could see it. But adults who had been given five minutes of experience with a particular approach could not see it, because that experience had restructured their cognitive relationship to the problem space.
Five minutes. That is all it took to build a mental rut deep enough to trap an otherwise competent mind.
Now consider what twenty years of professional experience does.
Einstellung in the Wild
Luchins’ water jars are a laboratory demonstration, but the Einstellung effect is not a laboratory phenomenon. It is one of the most pervasive and consequential features of human cognition, and it operates in every domain where people develop expertise.
Chess
Merim Bilalic, Peter McLeod, and Fernand Gobet conducted a landmark study in 2008 that brought the Einstellung effect into sharp focus using expert chess players. They presented masters and grandmasters with chess positions that could be solved by a well-known tactical pattern (a smothered mate) or by a shorter, more efficient solution that did not involve the familiar pattern.
Using eye-tracking technology, Bilalic and colleagues showed that even when experts were explicitly told to look for the shorter solution, their eyes kept drifting back to the squares involved in the familiar pattern. Their visual attention — the physical movement of their eyes — was being pulled toward the known solution. They were not choosing to ignore the better solution; their perceptual system was literally not allowing them to see it. The familiar pattern was so strongly activated that it dominated their visual processing, filtering out information that did not conform to its template.
This was not happening to beginners who did not know any better. This was happening to chess masters — people who had spent thousands of hours training precisely to see multiple solutions to chess positions. Their expertise, the very thing that made them excellent chess players, was preventing them from seeing what was in front of them.
Medicine
The Einstellung effect in clinical medicine is well-documented and routinely lethal. Pat Croskerry, an emergency physician and leading researcher on diagnostic error, has spent decades cataloging the ways in which clinical expertise produces diagnostic fixation.
The pattern is consistent. A physician encounters a patient. Within seconds — often before the patient has finished describing their symptoms — the physician’s pattern-recognition system has generated a leading diagnosis. This diagnosis is usually correct. Emergency physicians, in particular, work in environments where rapid pattern recognition saves lives, and they are very, very good at it.
But when the initial diagnosis is wrong, something insidious happens. The physician begins to interpret all subsequent information through the lens of their initial diagnosis. Symptoms that confirm the diagnosis are noted and weighted heavily. Symptoms that disconfirm it are explained away, attributed to comorbidities, or simply not registered. Test results that are inconsistent with the diagnosis are flagged for repeat testing (“probably a lab error”). Test results that confirm it are accepted without scrutiny.
Croskerry calls this “anchoring and adjustment failure,” but it is fundamentally the Einstellung effect operating in a medical context. The physician’s mind has been set on a diagnosis, and that setting channels all subsequent cognitive processing. The physician is not being careless. They are often being extremely thorough — ordering additional tests, consulting colleagues, reviewing the literature — but all of this thoroughness is occurring within the frame established by their initial diagnostic anchor. They are being thorough in the wrong direction.
The research suggests that diagnostic error rates in medicine have remained stubbornly stable at around 10-15% for decades, despite enormous advances in medical technology, training, and evidence-based medicine. This stability makes sense if the primary source of error is not lack of knowledge or technology but the Einstellung effect — a feature of cognition that no amount of additional training or technology addresses, because the training and technology are processed through the same Einstellung-prone cognitive system.
Software Engineering
In software engineering, the Einstellung effect manifests most visibly in architectural decisions. Every experienced engineer has a repertoire of architectural patterns — microservices, event-driven architecture, CQRS, hexagonal architecture, monoliths with clear module boundaries, and so on. When faced with a new system to design, the engineer’s mind rapidly scans this repertoire and identifies the “right” pattern for the given requirements.
This pattern-matching is fast, confident, and wrong more often than the engineer realizes. The “right” pattern is typically the pattern the engineer has used most recently, most successfully, or most frequently — not the pattern that best fits the actual requirements. An engineer coming off a successful microservices project will see microservices everywhere. An engineer who just spent a painful year untangling a microservices architecture will see monoliths everywhere. The engineering equivalent of Luchins’ water jars is the architectural decision meeting where a senior engineer proposes an approach that is transparently shaped by their last three projects, defends it with arguments that sound technical but are actually autobiographical, and genuinely does not realize that this is what they are doing.
I have watched this happen dozens of times, and I have done it myself more times than I would like to admit. The experience of Einstellung from the inside is not “I am trapped in a mental rut and cannot see alternatives.” The experience is “This is obviously the right approach, and the fact that the junior engineer is suggesting something different just shows their lack of experience.” This is why the Einstellung effect is so dangerous: it does not feel like fixation. It feels like expertise.
Functional Fixedness: The Invisible Constraint
Karl Duncker’s candle problem, first published in 1945, is the canonical demonstration of functional fixedness, and it is worth examining in detail because its implications extend far beyond attaching candles to walls.
The setup: you are in a room with a candle, a box of thumbtacks, and a book of matches. Your task is to attach the candle to the wall so that it can burn without dripping wax on the floor. The solution is to empty the box of thumbtacks, tack the empty box to the wall, and place the candle on top of it, using the box as a shelf.
Most people fail to find this solution, and the reason is specific and instructive. They see the box as a container for thumbtacks. That is its function. It is a box, and it has thumbtacks in it, and therefore it is a thumbtack box. The possibility that it could be a shelf — that it could be separated from its current contents and used for a completely different purpose — does not occur to them. The box’s function is fixed.
Duncker demonstrated that the effect could be manipulated. When the thumbtacks were placed next to the box rather than inside it, significantly more people solved the problem. Removing the thumbtacks from the box weakened the functional association between “box” and “container,” making it easier to see the box as a potential shelf. The physical difference was trivial — same box, same thumbtacks, slightly different arrangement. The cognitive difference was enormous.
This tells us something important about the nature of functional fixedness: it is not a property of the object. The box does not become less shelf-like when you put thumbtacks in it. The functional fixedness is entirely in the perceiver’s mind. It is a cognitive overlay that maps functions onto objects based on context and experience, and it is so seamless that it feels like perception of the object itself rather than an interpretation imposed on the object.
Functional Fixedness Beyond Physical Objects
Duncker studied functional fixedness with physical objects, but the phenomenon extends to abstract domains in ways that are arguably more consequential.
Conceptual functional fixedness occurs when you perceive an idea, method, or framework as having a fixed function. A database is for storing and retrieving data (but it could be used as a message queue, a configuration store, a coordination mechanism, or a computation engine). A programming language is for writing software (but it could be used as a specification language, a documentation format, or a thinking tool). A meeting is for making decisions (but it could be used for relationship-building, creative exploration, or deliberate conflict generation).
Methodological functional fixedness occurs when you perceive a particular method as “the way” to solve a particular type of problem. You model complex phenomena with differential equations because that is what modeling looks like in your field — not because differential equations are necessarily the best tool for this particular phenomenon. You test software by writing unit tests because that is what testing looks like — not because unit tests are necessarily the right level of testing for this particular system. You evaluate strategy options by building spreadsheet models because that is what evaluation looks like — not because spreadsheet models capture the relevant dynamics.
Role-based functional fixedness occurs when you perceive people (including yourself) as having fixed functions within an organization or project. The designer designs. The engineer engineers. The manager manages. These role-based function assignments prevent you from seeing that the designer might have the key engineering insight, the engineer might have the critical design perspective, and the manager might need to get out of the way entirely. They also prevent you from seeing that you might need to step entirely outside your role to see the problem clearly.
Each of these forms of functional fixedness operates by the same mechanism as Duncker’s thumbtack box: the function is assigned automatically, below the level of conscious awareness, based on context and experience. And because the assignment feels like perception rather than interpretation, you do not question it. You do not think, “I am choosing to see this database as a data-storage system.” You just see a data-storage system. The alternative functions are not considered and rejected — they are never considered at all.
Design Fixation: When Creativity Is Constrained by Examples
In the 1990s, David Jansson and Steven Smith conducted a series of studies on design fixation that should be required reading for anyone who has ever participated in a brainstorming session.
They gave engineering students design problems along with “example solutions” that were explicitly described as flawed. The students were told that the examples contained specific problems and were instructed not to incorporate those flawed features into their own designs. The examples were presented purely as illustrations of the general problem domain, with explicit warnings about their shortcomings.
You can probably guess what happened. Students who saw the flawed examples produced designs that incorporated the flawed features significantly more often than students who were not shown any examples at all. Being told that a feature was a flaw did not prevent designers from incorporating it. Seeing the feature was enough to fix it in their minds as part of “how this type of thing looks.”
This is design fixation, and it is a specific instance of the Einstellung effect operating in creative work. The first solution you see constrains the space of solutions you can imagine. The example does not inform your thinking — it formats your thinking. It establishes the template, and subsequent “creative” work consists of variations on that template rather than departures from it.
Design fixation explains why brainstorming sessions that begin with someone presenting “a few ideas to get us started” are almost always less creative than sessions that begin with a blank whiteboard. The “starter ideas” are not catalysts for creativity. They are anchors that constrain it. Every idea generated after the starter ideas is, to a significant degree, a reaction to or variation on those initial ideas rather than an independent exploration of the solution space.
It also explains why exposure to existing solutions in a domain — reviewing the state of the art, studying competitors, looking at prior work — can actually reduce creative performance on novel design problems. This is counterintuitive and slightly alarming, because reviewing prior work is exactly what every responsible professional does before tackling a problem. The implication is not that you should ignore prior work; it is that you should be aware that looking at prior work has a cognitive cost as well as a cognitive benefit, and that the cost is largely invisible.
Anchoring in Cognitive Framing
We discussed anchoring in the previous chapter as a numerical phenomenon — the tendency for initial numbers to influence subsequent numerical judgments. But anchoring in cognitive framing is a broader and in some ways more consequential phenomenon.
When someone frames a problem for you — “We need to improve our customer retention rate,” “The system is too slow,” “The team is not communicating well” — that framing anchors your entire subsequent engagement with the problem. You accept the framing, and then you work within it. You think about how to improve retention, how to speed up the system, how to improve communication.
But the framing itself may be wrong. Perhaps the real issue is not retention but that you are acquiring the wrong customers. Perhaps the system is not too slow — the users’ expectations have been miscalibrated by a competitor’s demo. Perhaps the team is communicating fine — they are just communicating things that the person who framed the problem does not want to hear.
Cognitive reframing — stepping back from the given framing and asking whether the problem is the right problem — is one of the most valuable cognitive operations a person can perform. It is also one of the most difficult, precisely because of anchoring. The initial framing is not received as “one possible way of looking at this.” It is received as “what the problem is.” Questioning the framing feels like denying the problem, which feels socially and cognitively wrong. The framing becomes the water the fish swims in.
This is why consulting firms charge enormous amounts of money to “reframe the question.” It is genuinely hard to do, and the difficulty is not intellectual — it is cognitive. The client has been anchored on a particular framing, often for months or years, and their entire organizational cognitive infrastructure has been built around that framing. Escaping it requires an external force strong enough to overcome the anchoring effect, and that external force typically comes in the form of an outsider who is not anchored because they were not present when the framing was established.
Why “Think Outside the Box” Is Useless Advice
We are now in a position to explain, precisely, why the most common advice for overcoming cognitive fixation is almost entirely useless.
“Think outside the box” is an instruction that presupposes you can see the box. You cannot. The box is not a visible constraint that you are choosing to stay within. The box is the structure of your perception. It determines what you see, what solutions come to mind, what framings seem natural, and what alternatives feel worth considering. Telling someone to think outside the box is like telling someone to see the color they are color-blind to. The instruction makes perfect sense from the outside and is almost meaningless from the inside.
“Be more creative” has the same problem. Creativity, in the context we are discussing, is not a resource you can choose to deploy in greater quantities. It is a property of the cognitive paths your mind traverses. If all of your cognitive paths stay within the same territory — which, as we have seen, they will tend to do for neurological, metabolic, and experiential reasons — then “being more creative” just means traversing the same territory more energetically. You will produce more ideas, but they will be ideas of the same fundamental type. More creative brainstorming is like more thorough exploration of the same neighborhood: you might discover a few streets you missed, but you will not end up in a different city.
“Consider alternative perspectives” is closer to useful, but still falls short. You can try to imagine how someone else would see the problem, but your simulation of someone else’s perspective is generated by your brain, with all its biases, fixations, and blind spots. When a software engineer tries to think “like a user,” they think like an engineer’s model of a user — which is notoriously different from an actual user. When a manager tries to think “like an individual contributor,” they think like a manager’s model of an individual contributor. The simulation is constrained by the simulator.
“Challenge your assumptions” is perhaps the most common piece of advice and perhaps the most frustrating, because it asks you to do something that is definitionally impossible without external help. An assumption, in the relevant sense, is not a belief you are aware of holding. It is a structuring principle of your cognition that is invisible to you precisely because it is so fundamental. You cannot challenge your assumptions by introspection any more than you can see your own blind spot by staring harder. The whole point of an assumption, in this sense, is that it does not feel like an assumption. It feels like reality.
The Need for External Cognitive Perturbation
Everything in this chapter and the previous two chapters converges on a single point: you cannot think your way out of your own thinking patterns using your own thinking. The Einstellung effect means your experience actively constrains your solution space. Functional fixedness means you cannot see alternative uses for your existing conceptual tools. Design fixation means that exposure to existing solutions constrains your ability to generate novel ones. Anchoring means that initial framings dominate your subsequent reasoning. And all of these effects are powered by neurological and metabolic systems that are operating below the level of conscious control.
You need an external perturbation. Something that comes from outside your cognitive system and introduces genuinely unfamiliar elements — not just unfamiliar-within-your-framework, but unfamiliar in a way that your framework cannot assimilate without restructuring.
Historically, humans have found various sources of external cognitive perturbation. Some of them work quite well. All of them share a fundamental limitation. The next chapter examines these historical methods — what they got right, what they got wrong, and why they set the stage for something genuinely new.
But before we move on, let me leave you with a question that I hope will create a small, productive sense of unease: What are you stuck on right now? What problem have you been working on where the solution feels like it should be obvious but is not? What question have you been asking where the framing feels natural and correct?
Consider the possibility that the solution is not eluding you because the problem is hard. Consider the possibility that the problem is easy — and that you cannot see the easy solution because your mind has been set.
That setting is not something you can undo by trying harder. But it is something you can disrupt. The rest of this book is about how.