Tooling
Static analyzers for exception safety, as a general property, mostly do not exist. This is a stronger statement than I want to make, so let me qualify it: there are tools that catch some exception-safety bugs, in some languages, by looking for specific patterns. There is no tool, in any language I know of, that takes a function and tells you which of the three guarantees it provides. The general problem is approximately as hard as program verification, and we have not solved program verification.
This chapter is an honest tour of what tooling exists, what it catches, and where the gaps are.
What “static analysis for exception safety” would have to do
Imagine the ideal tool. You give it a function and ask “what guarantee does this provide?” To answer, the tool must:
1. Identify every operation in the function that might throw.
2. For each potential throw point, identify the state of every variable, every member of *this, every observable side effect that has been performed up to that point.
3. Determine whether that state is consistent with the function’s invariants — i.e., whether the invariants still hold at that point.
4. Determine whether any resources have been acquired but not yet released, accounting for RAII.
5. Aggregate these results across all potential throw points to determine the weakest guarantee provided.
Each of these steps is a hard problem.
(1) requires knowing the throw set of every called function. This is transitive: a function that calls a function that calls a function that throws can throw. C++’s lack of a meaningful “throws what” annotation at the type-system level means this analysis must be inferred from source, which means it depends on having source for every called function, which is often not true at library boundaries.
(2) requires whole-function dataflow with abstract interpretation. Doable for small functions; hard for big ones, especially with branching, loops, and pointer aliasing.
(3) requires a specification of what the function’s invariants are. The compiler does not know what invariants you intend. This is the part that demands either annotations from the programmer or an after-the-fact specification language.
(4) is mostly tractable for languages with deterministic destruction and an “owns” relationship encoded in types. RAII makes this almost easy in C++. In garbage-collected languages it’s harder, because resource ownership is not visible in types.
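A sketch of why RAII helps here (parse_file and the FILE handle are illustrative, not from earlier chapters): once ownership is encoded in a type, “released on every exit path” is a fact the analyzer can read off one declaration instead of proving path by path.

```cpp
#include <cstdio>
#include <memory>
#include <stdexcept>

// Ownership lives in the type: the unique_ptr's deleter runs on every exit,
// including stack unwinding, so step (4) needs no path enumeration.
void parse_file(const char* path) {
    std::unique_ptr<std::FILE, int (*)(std::FILE*)>
        handle(std::fopen(path, "r"), &std::fclose);
    if (!handle) throw std::runtime_error("open failed");
    // any throw from here on still closes the file
}
```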
(5) requires combining all the above into a guarantee classification, which is at least the join of a lattice of state-shapes, which is, well, expensive.
This is what would be required. Unsurprisingly, no tool does all of this. Most do parts of (1) and (4); a few do parts of (2); essentially none address (3) or (5) in generality.
What actually exists, in C++
noexcept and friends
The most useful “tool” in C++ for exception safety is the noexcept specifier itself, used as a static check via noexcept(expr):
template<class T>
void Container<T>::move_or_copy(T& dst, T& src) {
    if constexpr (std::is_nothrow_move_assignable_v<T>) {
        dst = std::move(src);   // move: faster, and it cannot throw
    } else {
        dst = src;              // copy: slower, but src survives if the copy throws
    }
}
This is the standard library’s pattern: branch on whether the move is noexcept to choose between move (faster, but only the basic guarantee if it could throw) and copy (slower, strong guarantee under throw). It’s a tool in the sense that the type system enforces correctness here — if T’s move operation lies and throws despite being marked noexcept, std::terminate is called.
It is not a tool for checking exception safety; it is a tool for expressing a guarantee in code that other code can branch on. The check is local and shallow.
clang-tidy and cppcheck
clang-tidy has a handful of checks relevant to exception safety:
- bugprone-throw-keyword-missing — flags an exception object like Exception(...) constructed without throw.
- bugprone-exception-escape — flags functions marked noexcept (or destructors, which are implicitly noexcept) that may, transitively, throw.
- cppcoreguidelines-avoid-goto — irrelevant, but I include it because the chapter is about GOTOs.
- cert-err58-cpp — flags static or thread_local objects whose initializers may throw, i.e., exceptions thrown before main() begins.
- cert-err54-cpp — flags handlers ordered such that base classes catch derived exceptions. (Trivial.)
cppcheck has similar coverage. None of these tools tell you whether a function provides the basic or strong guarantee. They catch the stupid cases — destructors that throw, noexcept functions that throw — and that is the limit of their ambition.
clang-static-analyzer and the path-sensitive cousins
The path-sensitive analyzers (Clang Static Analyzer, Coverity, PVS-Studio) can sometimes find more. Coverity has a “RESOURCE_LEAK” checker that catches some leaked resources on exception paths. PVS-Studio has a checker for “object is not destroyed before throwing.” These are real and useful, and they catch a class of bugs that the simpler clang-tidy checks miss.
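For concreteness, this is the shape those checkers do catch (load_config and validate_header are illustrative stand-ins, not any tool’s example):

```cpp
#include <cstdio>
#include <stdexcept>

// Stand-in for any fallible operation on the open handle.
void validate_header(std::FILE*) { throw std::runtime_error("bad header"); }

void load_config(const char* path) {
    std::FILE* f = std::fopen(path, "r");  // raw handle, no RAII
    if (!f) throw std::runtime_error("open failed");
    validate_header(f);                    // throws: f leaks on this path, and
                                           // f is a registered resource, so a
                                           // leak checker can see the problem
    std::fclose(f);
}
```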
But: they are all looking for leaks of registered resources, not for invariant violations. The Account::transfer_to bug from chapter 1 — where money disappears because the throw happens between two writes — would not be caught by any of these. There is no leak; there is no resource acquired and not released. The bug is invariant-shaped, and the tool has no specification of the invariant.
Concurrency-specific tools
ThreadSanitizer (TSan) catches data races, which is a different class of bug but interacts with exception safety. ThreadSanitizer will tell you that two threads accessed the same memory without synchronization, including when one of those accesses was on an exception path. This is genuinely useful: a common exception-safety bug in concurrent code is “I released the lock before mutating something,” which ThreadSanitizer can catch as a race.
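A sketch of that bug shape (names are mine): the guard’s scope ends after validation, so the write happens with the lock already released, on the normal path and on every exception path alike.

```cpp
#include <mutex>
#include <stdexcept>

std::mutex m;
int shared_value = 0;

void buggy_update(int v) {
    {
        std::lock_guard<std::mutex> lk(m);
        if (v < 0) throw std::invalid_argument("negative");
    }                  // lock released here
    shared_value = v;  // unsynchronized write: under two threads, this is the
}                      // race ThreadSanitizer reports

void fixed_update(int v) {
    std::lock_guard<std::mutex> lk(m);  // held across validate and mutate;
    if (v < 0) throw std::invalid_argument("negative");
    shared_value = v;                   // unwinding releases it on the throw path
}
```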
AddressSanitizer (ASan) catches use-after-free, which is the dominant manifestation of “destructor of object X freed memory while object Y was still using it.” Exception-safety bugs that involve dangling pointers after partial cleanup show up here.
Neither of these is exception-safety-specific, but they catch downstream effects of exception-safety bugs, which is half a loaf and still useful.
What exists in other languages
Java
SpotBugs (formerly FindBugs) has checks for RV_RETURN_VALUE_IGNORED_BAD_PRACTICE (ignoring the return value of a method that might fail) and OBL_UNSATISFIED_OBLIGATION (a Closeable resource not closed on all paths). The latter is a genuine exception-safety check for resource leaks. It is also widely deployed: most large Java code bases have some exposure to it, either via SpotBugs directly or via tools that wrap it (SonarQube, etc.).
ErrorProne (Google’s checker) has checks for MustBeClosedChecker and similar. The Java ecosystem’s ability to express ownership at the type level is limited, so these checks rely on annotations and convention rather than on type-level enforcement.
Rust
The Rust compiler enforces exception safety, in a narrow sense, automatically. The borrow checker guarantees that unwinding after a panic cannot observe a dangling reference, and Drop runs deterministically during unwinding, so resources are cleaned up.
What the compiler does not enforce is that invariants are restored after a panic. If you have a struct with two fields that must be kept in sync, and you panic between updating one and updating the other, the compiler does not notice. The discipline of “don’t leave the struct invalid” is on the programmer.
The mutex-poisoning mechanism is the closest Rust gets to a runtime check: it forces callers to acknowledge the possibility of inconsistent state.
cargo-audit, clippy, and various lints catch simpler cases. None of them check for the strong guarantee directly.
Go
golangci-lint’s default checks include errcheck (you ignored an error return) and various nilness analyses. Go’s language-level argument is that exception safety is replaced by “errors are values” and explicit err != nil checks, so the type system is supposed to catch propagation issues. The errcheck lint enforces that you actually check.
What this misses, of course, is the structural problem from chapter 1: even if you check every error, the partial-mutation problem is the same. Go’s tooling does not catch this. The community’s response, when prompted, is usually that the Go style guide implies a discipline that handles this — return early on errors, don’t mutate before validating — which is true in well-written Go and not enforced anywhere.
Solidity
The smart-contract space is the place where exception-safety-equivalent tooling is most developed:
- Slither (Trail of Bits): static analyzer that catches reentrancy patterns, uninitialized state, dangerous external calls. Default-on in many CI pipelines.
- MythX: SaaS product, multiple analyzers, including symbolic execution.
- Securify (ETH Zurich): automated formal verification for a subset of properties.
- Manticore: dynamic symbolic execution.
- Echidna: property-based fuzzing for smart contracts.
The Solidity world’s tooling is — and I want to be clear about this — better than the C++ world’s tooling for the equivalent problem. The reason is incentive. A reentrancy bug in Solidity costs eight figures, payable in cryptocurrency; an exception-safety bug in C++ costs a production incident. The market has paid for tooling proportional to the cost.
If you take only one thing from this chapter, take this: the smart-contract world has shown that tooling for the structural pattern is possible to build. The C++ world’s tooling is bad not because the problem is fundamentally harder, but because we did not pay for better.
Property-based testing as a partial substitute
If static analysis can’t tell you which guarantee a function provides, dynamic checking can sometimes substitute. The technique:
- Wrap throwing operations to probabilistically throw at every call site. This converts the question “what does the function do under throw?” to a runtime check.
- Run the function under such conditions, observe the resulting state, and check the invariants.
This is approximately what Boost.Test’s BOOST_CHECK_NO_THROW, etc., are gesturing at, but the more interesting application is property-based testing: define a precondition, run the operation with throws injected, check the postcondition.
// pseudo-code
property("transfer_to is exception-safe", [](Account a, Account b, int amt) {
    Account a_orig = a, b_orig = b;   // snapshots, for a strong-guarantee variant:
                                      // on throw, check a == a_orig && b == b_orig
    int total_orig = a.balance() + b.balance();
    try {
        a.transfer_to(b, amt);        // some calls are randomly throwing
    } catch (...) {}
    int total_now = a.balance() + b.balance();
    return total_now == total_orig;   // money conserved
});
If the test ever fails, you have an exception-safety bug. The test will not find every bug — the throw-injection is probabilistic, and some bugs require very specific throw locations — but in practice, run for a few thousand iterations, it finds a lot.
Hypothesis (Python), QuickCheck (Haskell), proptest (Rust), Hedgehog (multiple languages), rapidcheck (C++) all support this style. The discipline of writing the property — of explicitly stating the invariant — is itself worthwhile, even if the tool catches nothing. The act of writing “money is conserved” forces you to think about what your invariants are, which is exactly what the function’s exception safety hinges on.
Fuzzing approaches
Fuzzing is property-based testing’s blunter cousin: throw random inputs at the function and watch for crashes. Modern fuzzers (libFuzzer, AFL++, honggfuzz) are coverage-guided, which makes them effective at exploring code paths that simple random testing would miss.
For exception safety specifically, coverage-guided fuzzing combined with throw injection is one of the most effective tools available. The fuzzer explores reachable code; the throw injection converts each reached call into a potential throw site; assertions in the test code check the invariants.
Google’s libFuzzer + ASan + TSan + UBSan stack catches a lot of exception-safety-adjacent bugs, even though none of the components is specifically about exception safety. The bugs manifest as use-after-free (cleanup partially ran) or as race conditions (locks released early) or as failed assertions (invariants violated). The fuzzer doesn’t know it’s looking for exception-safety bugs; it just finds them.
This is, in the absence of better static analysis, the most practical tool a working engineer can apply: fuzz with sanitizers, and make sure your test suite has assertions about invariants. The combination catches things human review misses.
Why the tooling is bad
A few reasons, roughly in increasing order of how fundamental they are:
- The market has not paid for it. As above. The cost of an exception-safety bug is diffuse and hard to attribute. The cost of a reentrancy bug in a smart contract is concentrated and easy to attribute. The market has paid for tooling proportional to the attributable cost.
- The specification problem. Static analysis for exception safety needs invariant specifications, and we do not have a standard way to write them in mainstream languages. Languages with explicit specification (Eiffel, SPARK, Dafny) can do better; languages without (everything you actually use) cannot, beyond the syntactic check.
- The whole-program problem. Exception safety is a transitive property, propagating up call stacks across module boundaries. Effective analysis requires cross-module knowledge, which compilers and linkers do not, by default, share.
- The genuine difficulty of the analysis. Even given specifications and whole-program access, the analysis is exponential in the number of throw sites. Approximations exist, but they either have false positives, which programmers reject, or false negatives, which make them useless.
The honest summary is that for the foreseeable future, exception safety is mostly maintained by discipline and code review, with tooling helping at the margins. This is unsatisfying. It is also true.
What to actually use, in practice
If you have a budget of one tool to deploy in your codebase, deploy a sanitizer-augmented fuzzer. Property-based tests are second. Static analyzers (clang-tidy, ErrorProne, etc.) are the easy default that catches a lot of low-hanging fruit. Code review with a checklist (chapter 12) is the universal floor.
If you are working in Solidity, deploy Slither. Run it in CI. The cost is trivial; the bugs it catches are not.
If you are working in Rust, you are living in the language with the strongest static guarantees on exception-safety-adjacent issues already. Lean on the borrow checker, take mutex poisoning seriously, and write code that handles Result deliberately.
If you are working in Common Lisp, your compiler is unlikely to help you, but your runtime debugger is the best in the industry. Use it.
The next chapter walks through real bugs that real engineers shipped, and what each of them came down to.
Further reading
- The art of clang-tidy: clang-tidy’s check list is at https://clang.llvm.org/extra/clang-tidy/checks/list.html. Read the bugprone-* and cert-* sections.
- John Regehr, “A Guide to Undefined Behavior in C and C++,” Embedded.com 2010. Tangential, but the same epistemic situation: tools catch some, miss most.
- AFL++ documentation, on integration with sanitizers. https://aflplus.plus/
- “Hypothesis: Test faster, fix more,” David MacIver. The Hypothesis documentation has a clearer explanation of property-based testing than any other source I know.
- Trail of Bits, “Building secure smart contracts,” https://github.com/crytic/building-secure-contracts. Especially the Slither rule descriptions.