The Invisible GOTO
Edsger Dijkstra published “Go To Statement Considered Harmful” in 1968. Its core argument, stripped of the polemic that gave it the title, is this: control flow that jumps to an arbitrary label, computed dynamically, defeats the human reader’s ability to reason about program state. The reader has to consider, at every point in the program, the possibility that control just arrived from anywhere.
If you accept Dijkstra’s argument — and the working software industry, eventually, did — then you have to also accept the awkward corollary: throwing an exception is a goto. It is a goto to a label you cannot see, computed dynamically by walking up the call stack, possibly across many function boundaries, possibly out of code you did not write, into a handler you have never read. It is, by every metric Dijkstra cared about, worse than goto. A goto at least named its target.
We accepted this trade because we got something for it. We got separation of concerns: the place that detects the failure does not have to know how to handle it. The place that knows how to handle it does not have to thread error propagation through every intermediate function. The intermediate functions get to be written as if errors do not happen, with the actual handling pulled out to a higher level where it belongs. This is, genuinely, a good idea.
What we did not get — and what we mostly still do not have — is a clear-eyed view of what is happening at the moment the exception fires.
What actually happens
Let’s get concrete about a C++ throw, which is the canonical case (Java, C#, and Python all behave similarly enough that the differences don’t matter for this section).
```cpp
void inner() {
    Widget w;
    do_something();       // (1) might throw
    register_widget(w);   // (2)
}

void outer() {
    Logger log;
    try {
        inner();          // (3)
    } catch (const Error& e) {
        log.write(e.what());  // (4)
    }
}
```
If do_something() at (1) throws, the runtime does the following, in roughly this order:
- Construct the exception object. The thrown value has to live somewhere outside the about-to-be-destroyed stack frames. Implementations vary; a common one (the Itanium ABI, used by GCC and Clang) heap-allocates a small structure with the exception object embedded.
- Begin stack unwinding. The runtime walks up the stack frame by frame. For each frame, it consults a table — generated at compile time, stored in sections like .eh_frame or .gcc_except_table — that says: “in this address range, here are the destructors to run, and here is whether there is a catch handler.”
- Run destructors for in-scope automatic objects. In our example, when unwinding leaves inner, w’s destructor runs. When unwinding reaches outer’s try block boundary, log is not destroyed (it’s still in scope at the catch point), but any temporaries inside the try are destroyed.
- Match the catch handler. The runtime checks each catch clause in textual order against the type of the thrown object (fixed at the throw site by the static type of the throw expression). If a match is found, control transfers to the handler.
- Destroy the exception object after the handler returns (or rethrows, or terminates).
That table-driven walk is the “zero-cost” exception model: in the no-throw case, try blocks compile to nothing extra at runtime, because the address-range-to-handler mapping is in a side table consulted only when an exception fires. The cost is loaded into the throw path, which is dramatically slower — typically tens of microseconds per throw, versus nanoseconds for a normal return. This is why the conventional wisdom “don’t use exceptions for control flow” exists. It is not aesthetic. It is that you bought the speed of the no-throw path by paying for it in the throw path, and if you throw a lot, you have made a poor trade.
(Older implementations used a different model — “setjmp/longjmp exceptions” — where every try block had a runtime cost but throw was cheaper. Almost nobody uses this anymore; it survives in some embedded toolchains. Visual C++ on x86 is a hybrid for historical reasons. On x86-64, everyone uses table-based.)
The takeaway is that, machine-mechanically, an exception is two things glued together: a non-local control transfer, and a sequence of destructor calls produced by the unwinder. The first part is the GOTO. The second part is what makes the GOTO survivable, and we’ll spend much of this book on it.
“Exceptional” is a misnomer
Bjarne Stroustrup, who invented C++ exceptions, has written repeatedly that they are intended for “exceptional” conditions: things outside the normal flow of the program. This is true as a design intent. As a description of how exceptions are used, it is not.
Allocator failure (std::bad_alloc) is normal in any process that allocates. Parsing input that turns out to be malformed is normal in any program that consumes user input. End-of-file is normal in any program that reads files. Each of these is regularly modeled with exceptions. They are not exceptional; they are expected. The word “exceptional” has caused real harm here, because programmers, hearing it, conclude that they don’t need to think about exception paths very carefully — they’re rare, after all.
A more honest framing: an exception is a non-local error return. It is the error case of a function, communicated to the caller’s caller’s caller without each intermediate caller having to participate. This is, again, a good idea. But it is not rare. In a typical C++ codebase, the set of functions that can throw is approximately all of them, because almost any function can call something that calls something that allocates, and std::bad_alloc is always available.
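To make the “approximately all of them” claim concrete, here is a hypothetical function of my own — nothing in it looks like it can fail, yet it can throw:

```cpp
#include <string>

// Looks pure and harmless, but operator+ on std::string allocates,
// and allocation can throw std::bad_alloc. A function has to be
// unusually ascetic to be genuinely unable to throw.
std::string greet(const std::string& name) {
    return "hello, " + name;  // may throw std::bad_alloc
}
```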
C++ tried to constrain this with dynamic exception specifications — throw(X), and throw() for “throws nothing.” This was, by widespread consensus, a failure: they were checked at runtime rather than statically, a violation called std::unexpected (which by default terminates the program), and the result was that nobody used them. C++11 deprecated them (C++17 removed them outright) and introduced noexcept, which is a different thing: noexcept is a property the compiler reasons about for optimization (move operations, container exception safety), and violating it calls std::terminate rather than letting the exception propagate. noexcept is useful where it is true. It does not make the rest of the language any more predictable.
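The “container exception safety” part is queryable. In this sketch — the two struct names are mine — the only difference is whether the move constructor is noexcept, and the standard library can see that difference through a type trait; std::vector uses it when reallocating, moving elements only if the move cannot throw, and copying otherwise so a mid-reallocation throw cannot corrupt the container.

```cpp
#include <type_traits>

struct SafeMove {
    SafeMove() = default;
    SafeMove(const SafeMove&) {}
    SafeMove(SafeMove&&) noexcept {}  // vector will move these on growth
};

struct RiskyMove {
    RiskyMove() = default;
    RiskyMove(const RiskyMove&) {}
    RiskyMove(RiskyMove&&) {}         // not noexcept: vector copies instead
};

static_assert(std::is_nothrow_move_constructible<SafeMove>::value,
              "SafeMove's move is visible to the library as non-throwing");
static_assert(!std::is_nothrow_move_constructible<RiskyMove>::value,
              "RiskyMove's move is not, so containers fall back to copying");
```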
Java tried a different constraint with checked exceptions: the type system tracks which exceptions a method can throw, and callers must either catch them or declare them. This was more rigorous than C++ and arguably worse in practice — programmers responded to the inconvenience by wrapping checked exceptions in unchecked ones, or by declaring throws Exception everywhere, defeating the type system entirely. We’ll come back to this in chapter 5.
The point is that every serious attempt to bound which functions can throw has, in practice, foundered on the fact that very few functions actually qualify. Throwing is the rule, not the exception, and “exceptional” was always the wrong word for it.
What “the call stack” means at the moment of throw
Here is something almost no one teaches and almost everyone needs:
When do_something() throws, the call stack contains every function above it. Every one of those functions had local invariants that may not have been finished. Every one of those functions had partially-constructed data structures, half-modified state, locks held, files opened, network sockets in indeterminate states. The unwinder is going to walk through all of them, in reverse order, running destructors. This is the only mechanism the language provides to clean up after them.
If a function modified two related data structures and only got through one before the throw, the unwinder cannot help. It does not know about the relationship. It runs destructors. That is the entirety of its job.
So when you write code that lives in the middle of a call stack — code you expect other code to call — you are, every time, asking yourself a question whether you realize it or not: what does the world look like if I throw partway through? The destructors of my locals will run. Anything I new’d but haven’t yet handed to a smart pointer will leak. Anything I mutated through a non-owning pointer will stay mutated. Anything I committed to a file or socket has been observed.
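The “haven’t yet handed to a smart pointer” case is worth seeing side by side. A hypothetical sketch, with might_throw standing in for any throwing call:

```cpp
#include <memory>
#include <stdexcept>

struct Widget { int id; };

void might_throw(bool fail) {
    if (fail) throw std::runtime_error("partway through");
}

// Leaks if might_throw fires: the raw pointer is not owned by any
// local object, so the unwinder's destructor pass never frees it.
void risky(bool fail) {
    Widget* w = new Widget{1};
    might_throw(fail);   // throws => delete below never runs
    delete w;
}

// Safe: the unique_ptr is a local with a destructor, which is exactly
// the one cleanup mechanism the unwinder provides.
void safe(bool fail) {
    std::unique_ptr<Widget> w(new Widget{1});
    might_throw(fail);   // throws => ~unique_ptr runs during unwinding
}
```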
The body of this book is what to do about that question. But the question itself is the precondition for everything that follows. If you don’t see the question — if your mental model of “an exception” is a magical thing that takes you to your handler — you will write code that, when it throws, leaves the world in a state nobody can describe.
A small concrete example
Here is a fragment that I have seen, in some shape, in approximately every codebase I have ever worked in:
```cpp
void Account::transfer_to(Account& other, int amount) {
    balance_ -= amount;
    log_transfer(amount, &other);  // might throw
    other.balance_ += amount;
}
```
What is the state of the world if log_transfer throws?
balance_ has been decremented. other.balance_ has not been incremented. Money has vanished. The destructors of *this and other will run when those objects eventually go out of scope, and they will obediently destroy themselves with the wrong balances.
Notice what is not wrong: there is no leak, no use-after-free, no undefined behavior. The program is, by every check the language can apply, well-formed. The bug is that the invariant the programmer cared about — that money is conserved — was violated by a partial update across a possibly-throwing call. The compiler had no way to know about the invariant. The unwinder has no way to know about it. The exception fires, the locals go away, the program eventually crashes or doesn’t, and the books don’t balance.
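One way out, as a sketch rather than the fix: do everything that can throw before mutating anything, then commit with operations that cannot throw. The log_transfer body below is invented (here it rejects negative amounts) since the original’s failure mode is unspecified, and balance_ is public only to keep the sketch short. Note the trade: the log now records the attempt, not the completed transfer.

```cpp
#include <stdexcept>

class Account {
public:
    int balance_ = 0;

    void transfer_to(Account& other, int amount) {
        log_transfer(amount, &other);  // may throw: no state touched yet
        balance_ -= amount;            // plain int arithmetic: cannot throw
        other.balance_ += amount;      // money conserved on every path
    }

private:
    // Hypothetical stand-in for the throwing call in the text.
    static void log_transfer(int amount, Account* /*to*/) {
        if (amount < 0) throw std::invalid_argument("negative amount");
        // (actual logging elided)
    }
};
```

If the throw fires, neither balance has moved; the invariant that money is conserved holds on the exception path for free, because the partial update no longer straddles a possibly-throwing call.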
This is the entirety of exception safety, in miniature. The next chapter gives you a vocabulary for talking about it.
Further reading
- Bjarne Stroustrup, The Design and Evolution of C++ (1994), §16, on the history of exception specifications.
- Itanium C++ ABI, “Exception Handling” — the actual specification of how the unwinder works, if you have a strong stomach: https://itanium-cxx-abi.github.io/cxx-abi/abi-eh.html
- Edsger Dijkstra, “Go To Statement Considered Harmful,” Communications of the ACM, March 1968. Read it. It is two pages and it is right.