Preface

This book was written by Claude Opus 4.7, an AI system built by Anthropic, working from a brief drafted by the CloudStreet editorial team. The byline is the model’s own — not as a stylistic flourish, but because pretending otherwise would be dishonest, and dishonesty is a poor foundation for a book that asks the reader to trust its survey of a noisy field.

What that means for you, the reader: every claim of fact in this book — every range figure, every project status, every license, every link — was generated by a language model and is, in principle, capable of being wrong in the way language models are wrong. The book has been edited and the structurally important claims cross-checked. But the mesh networking landscape moves, and 2026 will become 2027, and a project that was alive when this was written may have gone quiet by the time you read it. Where the book leans on a contested or volatile claim, it tries to say so. Where it does not, treat the load-bearing claims as you would treat any survey: directionally correct, individually verifiable, and worth a quick check before you commit hardware money to a recommendation.

The structural argument of the book — that physical-layer choice cascades through everything, that routing is the dominant constraint, that “mesh” is doing four jobs in one word, that you should start with Meshtastic and invest in Reticulum — is not the kind of thing that goes stale in a year. The specific URLs and version numbers are.

What this book is not: a tutorial, a comprehensive technical reference, a defense of any particular project, or a polemic against the existing Internet. It is a survey written for one specific reader — the engineer who has heard of half these projects and doesn’t know how they compare — and you should put it down the moment it stops being useful to you.

— Claude Opus 4.7

Introduction

There is a certain kind of conversation that happens at hacker meetups every few months. Someone mentions Meshtastic. Someone else mentions Tailscale. A third person, helpfully, says “oh, like cjdns?” and a fourth person, less helpfully, says “isn’t that just LoRa?” Within ninety seconds the word mesh has been used to refer to four mutually incompatible things and nobody has the energy to untangle which is which.

This book is the untangling.

The premise

Mesh networking is a topic where the available literature splits into two unhelpful piles.

The first pile is academic: SIGCOMM papers, IETF drafts, dissertations on mobile ad-hoc routing. They are, on the whole, excellent — and they assume you have read every prior paper they cite, which means you have not read them, because nobody has time to read every prior paper. They are not for you.

The second pile is project marketing. The Reticulum landing page. The Meshtastic homepage. The Tailscale “How it works” diagram. They are, on the whole, well-produced — and they assume you have already decided to use the project in question. They are not surveys. They cannot tell you whether to choose this project or a different one, because every one of them is the project they’re selling.

There is almost nothing in between.

This book aims to fill that gap, for one specific reader: a working engineer who has heard of half these projects, doesn’t know how they compare, and wants an honest survey before committing a weekend to one of them. If you are looking for an academic treatment, the bibliography points outward. If you are looking for a tutorial for one particular tool, the project’s own documentation will serve you better than this book ever could. What this book offers is the thing the existing literature does not: the comparative map.

What we mean by 2026

The mesh networking field has been through several waves. The early 2010s wave — Hyperboria, the cjdns-curious, the radio-mesh-as-political-act communities — is largely over, and pretending otherwise would waste your time. The mid-2010s consumer-mesh wave (the Eero/Plume “buy three boxes for your house” definition of mesh) is alive but is a different category of object than what most of this book is about. The 2020s wave — Meshtastic going mainstream, Reticulum maturing, mesh VPNs becoming infrastructure — is the wave you are sitting in the middle of as you read this.

Scoping to 2026 means: the projects discussed are the ones a reader can actually engage with right now. Where a project is dormant, the book says so. Where a project is thriving, the book says so. Where a project’s status is genuinely ambiguous — and a few of them are — the book says that too, and points at the most recent meaningful activity rather than pretending to certainty it doesn’t have.

What you will get out of this

By the end of the book you will be able to:

  • Distinguish the four things “mesh” gets used to mean, and call out vendors using the word as marketing rather than description.
  • Reason about why physical layer choice cascades through everything else — range, throughput, regulatory limits, who can join.
  • Hold the routing problem in your head clearly enough to understand why flooding-based meshes (Meshtastic) degrade above ~100 nodes and what the alternatives are doing differently.
  • Have an opinion about which project to install on a Raspberry Pi this weekend, indexed by what you actually want to feel.

What you will not get out of this

You will not become an expert on any of these projects. You will not be qualified to operate a regional mesh deployment, audit Reticulum’s cryptography, or contribute to Yggdrasil’s routing core. Those are jobs that take longer than a weekend, and the people doing them have written better material than this book ever will; you will find it once you know which doors to walk through.

What you will get is: which doors to walk through.

How the book is organized

Part I (chapters 1–3) does the foundational work. What “mesh networking” actually means once you tease apart the four uses of the word. Why physical layer choice cascades through every other decision. Why routing is the hard part and how the major approaches differ.

Part II (chapters 4–6) is the LoRa family. Meshtastic is the entry point, MeshCore is the engineered alternative, Reticulum is the project that takes the whole stack seriously. Each gets an honest assessment of what it does well, what it does badly, and what shape of user it serves.

Part III (chapters 7–8) moves to IP-layer meshes. Yggdrasil is alive and worth running. cjdns and Hyperboria are largely historical, but the lessons they left are real and worth honoring.

Part IV (chapters 9–11) covers the adjacent shapes. Scuttlebutt and the gossip family — mesh-adjacent, useful as contrast. Briar, Bitchat, and short-range Bluetooth-mesh messaging. And mesh VPNs (Tailscale, Nebula, ZeroTier, Headscale) — which legitimately use the word mesh but solve a completely different problem.

Part V (chapter 12) is the punchline. Indexed by what you want, here is what to install this weekend, what hardware to buy, and what an evening with it looks like.

A note on tone

This book will occasionally be blunt about projects that did not pan out. That is not disrespect — those projects were attempts at something hard, and the people who built them deserve credit for the attempt regardless of the outcome. But the book is written for a reader who is about to spend a weekend on something, and softening “this network is a ghost town” into “this network has a small but dedicated community” would be a disservice. The reader can handle “this didn’t work and here is what we learned.” So the book treats them that way.

Let’s begin.

What “Mesh Networking” Even Means

The word mesh is doing too much work. In the span of a single hallway conversation it can refer to four different things, only some of which have anything in common with each other, and no one will stop to clarify which one they meant. This chapter is the disambiguation. It is short and a little pedantic, and it earns the rest of the book.

The four things people call “mesh”

When a working engineer says mesh networking, they mean one of:

  1. A mesh routing protocol. A specification (and an implementation) for how a population of nodes that can hear some of each other should agree on which node forwards which packet, with no node designated as the gateway. Examples: OLSR, BATMAN-adv, Babel, Yggdrasil’s spanning-tree, cjdns’s source-routing-with-DHT.

  2. A mesh application — a piece of user-facing software that runs over a mesh-shaped network and is the thing the user actually interacts with. Meshtastic the app. Briar the app. Manyverse on top of Scuttlebutt.

  3. A mesh VPN. An overlay where every node can address every other node directly, peer-to-peer, over the existing public Internet, with NAT traversal and key exchange handled by a coordination service. Tailscale. Nebula. ZeroTier. Headscale. The traffic is mesh-shaped (point-to-point, no central relay in the data path); the substrate is the regular Internet.

  4. “Mesh” as a marketing word. Three WiFi access points in a house. Two LoRa nodes on a desk. A Bluetooth speaker that pairs with another Bluetooth speaker. None of these are mesh networks in any structural sense — they are typically star topologies with one or two hops — but the word mesh tests well in marketing copy and so it gets stuck on the box.

These are not the same thing. They are not different layers of the same thing. A mesh routing protocol is concerned with which node forwards a packet when there is no central authority to ask. A mesh VPN is concerned with how do two nodes that already have an Internet connection bypass the need for a central relay. Those are nearly opposite problems. Both legitimately call themselves mesh. Both are useful. Conflating them is the source of about 80% of the confusion in this space.

The structural distinction that matters

Here is a test that cuts through almost all of the marketing.

Does the network function when the public Internet is unavailable?

If yes, you are looking at category 1 (a routing protocol over an alternative substrate — radio, Bluetooth, sneakernet) or category 2 (an application built on category 1).

If no — if the network requires the underlying public Internet to deliver packets between nodes — you are looking at category 3 (a mesh VPN) or category 4 (marketing).

Both are valid. They solve different problems. But the answer to that one question tells you which conversation you are in.

A second test, less binary but more revealing:

What is the routing problem the protocol is actually solving?

For category 1, the routing problem is Node A wants to send a packet to Node F. Node A can hear Node B and Node C. Node F can hear Node D and Node E. Nobody has a complete map. How does the packet get there? This is a hard problem. There are no central authorities. The links come and go. The radio environment shifts. Algorithms like OLSR, Babel, Yggdrasil’s spanning-tree, and Reticulum’s transport layer exist to solve exactly this.

For category 3, the routing problem is Node A and Node F both have public Internet connections. Node A is behind a CGNAT in São Paulo, Node F is behind a corporate firewall in Stockholm. They want a direct connection. Where does the packet go? This is also a hard problem, but it is a different hard problem, mostly involving NAT traversal, STUN/TURN, hole punching, and trust establishment via a coordinator. Tailscale’s WireGuard-and-DERP architecture is a beautiful solution to it.

Same word. Different problems. Different solutions. Different chapters of this book.

What “marketing-mesh” usually means

When a vendor sells you a “mesh router” for your house, what you are buying is a small number (typically two to four) of WiFi access points that share a common SSID, hand off clients between themselves, and backhaul to a single primary unit which is connected to your ISP. There is exactly one path from your laptop to the Internet: laptop → secondary AP → primary AP → modem. There is no real routing decision being made. The “mesh” topology, such as it is, exists only in the small redundancy where two secondaries can backhaul through each other if one’s backhaul to the primary is poor.

This is not nothing — it’s a genuine improvement over the single-AP-with-poor-far-room-coverage situation. But calling it mesh networking in the same sentence as Reticulum or Yggdrasil is a category error. They are not in the same conversation. They are not solving the same problem. The vendor is using the word mesh to mean we have more than one node, which is the marketing definition.

You will sometimes see “mesh” used the same way for IoT products: a smart bulb, a smart switch, and a hub all use Zigbee or Thread, and the marketing copy calls this a mesh. Here the term is closer to honest — Zigbee and Thread do have multi-hop routing, and bulbs can legitimately forward packets for other bulbs — but the user-visible behavior is rarely affected by this routing because the network is small and the hub is always reachable. It’s a mesh in the protocol sense and a star in the practice sense.

The terminological mess in summary

| When someone says “mesh”… | They might mean… | Example |
|---|---|---|
| …the protocol layer | A multi-hop routing protocol with no central authority | Babel, BATMAN-adv, Yggdrasil, Reticulum’s transport |
| …the application layer | An app that runs over such a network | Meshtastic the app, Briar, Manyverse |
| …the VPN layer | A peer-to-peer overlay on the public Internet | Tailscale, Nebula, ZeroTier, Headscale |
| …the marketing layer | “We have more than one node” | Eero, Plume, “mesh WiFi” boxes |

For the rest of this book: when we say mesh, we mean category 1, 2, or 3. We will be specific about which. We will also occasionally point at category 4 to be honest that it exists and that it muddies the search results when you go looking for serious material on the other three.

What to take from this chapter

You should now be able to:

  • Hear someone say mesh networking and ask, in a tone that does not sound rude, “do you mean the routing kind or the VPN kind?”
  • Recognize when a vendor’s “mesh” is doing structural work and when it is doing marketing work.
  • Understand why a book that surveys Reticulum and Tailscale in the same volume is not a category error — both legitimately use the word, and both deserve serious treatment, but they live at different layers of the stack and solve different problems.

The next two chapters set up the foundations for the routing-protocol kind: why the physical layer cascades through every other decision (chapter 2), and why routing itself is the dominant constraint as networks grow (chapter 3). After that, the projects.

The Physical Layer Decides A Lot

If you remember one thing from this book, remember this: in mesh networking, the physical layer dominates. Whatever you choose to send bits over — radio, copper, fiber, sneakernet — cascades through every other decision you will make. Range, throughput, regulatory limits, hardware cost, power draw, who can join your network at all — all of these are downstream of what you picked at the bottom of the stack. You can write the most beautiful routing protocol the world has ever seen and it will not save you from a 250 bit/s link with a hard 1% duty cycle.

This chapter does the physical-layer survey. It is not exhaustive — there is no point trying to compete with the ARRL handbook — but it is enough that when chapter 4 starts talking about Meshtastic, you already know why “868 MHz vs. 915 MHz” matters, why the duty-cycle limit is a real constraint, and why “we run over LoRa and WiFi and serial” is a non-trivial claim.

Radio: LoRa

LoRa (the modulation; LoRaWAN is something different and we’ll come back to that) is the substrate that the entry-level mesh-networking conversation in 2026 is built on. It is a chirp-spread-spectrum modulation operating in the unlicensed sub-GHz ISM bands, designed for long range, low data rate, low power — the inverse of WiFi.

The numbers you should know:

  • Range. 2–5 km in suburban terrain, 10+ km with line of sight, occasionally much more with high antennas and favorable conditions. Hobbyists routinely report 20–40 km hilltop-to-hilltop links; the record at the time of writing involves balloon-borne nodes well over 700 km apart. Don’t plan on those numbers. Plan on the suburban 2–5 km figure.
  • Data rate. 250 bit/s to ~22 kbit/s depending on spreading factor. Read that again. Bits. Not kilobits. At the long-range settings you are sending text messages, not streaming video, not even browsing the web. The very fastest LoRa configuration is about the speed of an early-1990s dial-up modem.
  • Power. Receive draws ~10 mA. Transmit draws ~120 mA at 100 mW (+20 dBm) output. A node spends most of its time receiving, which means a Meshtastic node on a 2000 mAh battery can run for several days; a solar-supplemented node can run effectively forever.
  • Hardware cost. $20–40 for a development board (Heltec, LilyGo, RAK), $100–250 for a polished production node (an RNode, a station-grade build), $30–150 all-in for a Meshtastic starter kit including USB cable and antenna.

LoRa’s power-frequency-range envelope is what makes it interesting for off-grid mesh. You give up data rate, you get a link that will reach over a city.
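As a back-of-envelope check on the power figures above, here is a rough runtime estimate. The 10 mA receive and 120 mA transmit draws come from the list; the 1% transmit fraction is an assumption of this sketch (roughly a node bumping against the EU duty-cycle cap), and it ignores MCU sleep states, display, GPS, and battery derating.

```python
def battery_life_days(capacity_mah, rx_ma=10.0, tx_ma=120.0, tx_fraction=0.01):
    """Rough runtime estimate for a receive-mostly LoRa node.

    tx_fraction is an assumed transmit duty (1% here); the node is
    assumed to be receiving the rest of the time.
    """
    average_ma = tx_ma * tx_fraction + rx_ma * (1 - tx_fraction)
    return capacity_mah / average_ma / 24

# A 2000 mAh pack at these draws lasts on the order of a week,
# consistent with "several days" above.
print(round(battery_life_days(2000), 1))
```

The arithmetic also shows why receive current dominates the budget: at a 1% transmit fraction, transmit contributes only ~1.2 mA to the average draw.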

The frequency-fragmentation problem

LoRa operates in regional sub-GHz ISM bands. The bands are different in different parts of the world, and the regulations attached to them are different too. The major regions:

| Region | Band | Notes |
|---|---|---|
| Europe (EU868) | 863–870 MHz | Hard 1% duty cycle on most sub-bands. Strictly enforced. |
| North America (US915) | 902–928 MHz | No duty cycle, but 400 ms dwell-time limit per channel. |
| China (CN470 / CN779) | 470–510 MHz / 779–787 MHz | Lower-band variants, regional. |
| Japan/Korea (AS923) | 920–923 MHz | Listen-before-talk plus duty cycle in most sub-bands. |
| Australia/NZ (AU915) | 915–928 MHz | Similar to US. |
| India (IN865) | 865–867 MHz | Narrow allocation, duty cycle. |

A device manufactured for EU868 will not legally transmit in US915. A device manufactured for US915 will not legally transmit in EU868. This is not a software flag you toggle — different bands often need different hardware (the radios are physically tuned). And the antennas are wavelength-tuned: a 915 MHz antenna on an 868 MHz radio will radiate poorly and in some cases damage the radio.

The cascading consequence: LoRa-mesh networks fragment along regulatory borders. A Meshtastic mesh you build in Berlin and a Meshtastic mesh your friend builds in San Francisco are not the same network and never will be. They cannot interoperate over the air. They can, with effort, be bridged via the public Internet (an MQTT bridge, in Meshtastic’s case), but at that point you are no longer doing the off-grid thing — you are using mesh-shaped messaging on top of the regular Internet, which is fine, but it is not what the marketing copy implied.

This is the most important consequence of the physical layer that this book wants you to internalize: when someone says “the global Meshtastic mesh,” they mean a federation of regional meshes connected by Internet bridges, not a single radio network. The radio physics forbid the latter.

Duty cycle: the slow killer

In Europe, LoRa is restricted to a 1% duty cycle on most sub-bands. That means if you transmitted for 1 second, you must remain silent on that sub-band for 99 seconds. A Meshtastic node sending a single text message at SF12 (the long-range setting) can take 1.5–2 seconds in the air. That single message has consumed your duty-cycle budget for the next two and a half minutes.

Multiply that by every node in earshot rebroadcasting (because Meshtastic’s flood routing rebroadcasts everything), and you start to understand why mesh networks at this layer don’t scale linearly. The medium itself is rate-limited by regulation, the rebroadcasts compound the airtime usage, and the network is silent during recovery.

This is why chapter 4 will say flatly that Meshtastic begins to degrade above ~100 nodes in a region. It’s not a software failing. It’s the physics of unlicensed sub-GHz radio meeting flooding-based routing.
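The airtime numbers above can be checked against Semtech’s published time-on-air formula. This sketch assumes a 30-byte payload, 125 kHz bandwidth, coding rate 4/5, an 8-symbol preamble, explicit header, and CRC on — plausible settings for a short text message, though real firmware defaults vary.

```python
import math

def lora_airtime_s(payload_bytes, sf, bw_hz=125_000, cr=1, preamble=8,
                   explicit_header=True, crc=True):
    """Time-on-air per Semtech's LoRa modem formula (AN1200.13).

    cr=1 means coding rate 4/5. Low-data-rate optimization is enabled
    for SF11/SF12 at 125 kHz, as real firmware does.
    """
    t_sym = (2 ** sf) / bw_hz
    de = 1 if (sf >= 11 and bw_hz == 125_000) else 0
    ih = 0 if explicit_header else 1
    num = 8 * payload_bytes - 4 * sf + 28 + 16 * (1 if crc else 0) - 20 * ih
    payload_syms = 8 + max(math.ceil(num / (4 * (sf - 2 * de))) * (cr + 4), 0)
    return (preamble + 4.25) * t_sym + payload_syms * t_sym

airtime = lora_airtime_s(30, sf=12)   # ~1.6 s for a short message at SF12
silence = airtime * 99                # 1% duty cycle: ~2.7 minutes of quiet
print(round(airtime, 2), round(silence), sep=" / ")
```

Running this reproduces the chapter’s claim: one SF12 text message costs on the order of 1.5–2 seconds of airtime, and the 1% duty cycle then keeps that node quiet on the sub-band for roughly two and a half minutes.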

LoRa vs. LoRaWAN

LoRa is the modulation. LoRaWAN is the network protocol that The Things Network and similar deployments use over LoRa, with a star topology, gateways, and a back-end network server. LoRaWAN is not a mesh. A LoRaWAN sensor talks to a LoRaWAN gateway, which talks to a LoRaWAN network server, which talks to your application. The sensors do not talk to each other. The mesh-networking projects in this book — Meshtastic, MeshCore, Reticulum-over-LoRa — use LoRa the modulation but ignore LoRaWAN the protocol entirely. They speak their own protocols over the same radio.

Radio: WiFi

WiFi is the other end of the radio envelope. Short range (typically ≤ 100 m outdoors, much less indoors), high data rate (tens to thousands of Mbit/s), high power (hundreds of mW typical, up to several W for outdoor units in some regions). It is what you use when the substrate is the same building, not the same valley.

For mesh purposes, WiFi shows up in two places:

  • WiFi mesh routing protocols — BATMAN-adv and similar — let a population of WiFi devices in ad-hoc or mesh mode (IBSS, or 802.11s) forward packets for each other without an access point. This is real mesh networking in the routing-protocol sense, used in deployments like Freifunk and other community wireless networks. It is not the focus of this book, but it is alive and worth knowing exists.
  • WiFi as a Reticulum interface. Reticulum (chapter 6) treats WiFi the way it treats any other carrier: as a substrate for its own packets. Two laptops on the same WiFi network can exchange Reticulum traffic over a UDP interface; this is one of the easiest ways to play with Reticulum without buying hardware.

Radio: Bluetooth

Bluetooth is interesting in mesh contexts not because it is fast (it is not) or long-range (it is decidedly not, ~10 m for classic, sometimes a bit more for BLE long-range modes) but because every smartphone has it on already. That last property is what makes Bluetooth-mesh applications like Briar and Bitchat (chapter 10) viable: they don’t require the user to plug anything in or buy any hardware, and “the network is free if you’re in the same building” is a real value proposition for some use cases.

The honest range numbers for Bluetooth in the wild: 10–15 m indoors through one wall, 20–50 m outdoors line-of-sight, occasionally further with directional antennas or BLE long-range coding. Don’t expect more than that. Bluetooth-mesh works well for “people in the same protest” or “phones in the same building”; it does not work for “people in the same neighborhood.”

Wired: Ethernet, fiber, serial

Wired links are not glamorous in a mesh-networking context, but they show up more than you’d expect, especially at the gateway nodes and in serious community deployments. Ethernet between two Yggdrasil nodes in the same rack. Fiber between two roof-mounted radios where the cable run is long. Serial between an RNode and a Raspberry Pi. The wires are boring; what matters for mesh purposes is that good projects let you use them transparently.

Reticulum is the project that takes this most seriously. A single Reticulum node can have a serial interface to one radio, a UDP interface to another node over WiFi, an I2P interface to a node halfway around the world, and a TCP interface to the public Reticulum testnet — all simultaneously, all bridged at the routing layer, with the application code unaware of which interface a packet went over. Chapter 6 has the worked example.

Optical: free-space optical, IR, LiFi

For the sake of completeness: free-space optical mesh (FSO) is real, niche, and used mostly in fixed point-to-point inter-building links where licensed spectrum is unavailable or expensive. Infrared and LiFi exist as research curiosities. None of these are part of the practical mesh-networking conversation in 2026 and you will not see them again in this book except in passing. They are mentioned here because if someone tries to sell you a “next-generation optical mesh” startup, you should know what category they are claiming and that the category is real but small.

Sneakernet

The most reliable, highest-bandwidth, highest-latency mesh substrate in the world is a person walking with a USB drive. It scales beautifully (“how many terabytes can fit on a station wagon” is the canonical formulation). It is not a joke in mesh-networking contexts: store-and-forward, eventually-consistent, gossip-style protocols (Scuttlebutt is the canonical example, chapter 9) work well over sneakernet. Reticulum has a transport mode designed for it. So does USENET, in case you needed reminding that this isn’t a new idea.

The substrate matters. The protocol layer can compensate for it, but it cannot transcend it.

What cascades from the choice

Here is the thing to internalize. The physical layer does not just constrain bandwidth and range. It constrains:

  • Who can join your network. A LoRa network requires LoRa hardware. A Bluetooth-mesh network is free for anyone with a phone. A Tailscale network requires an Internet connection.
  • What regulations apply. Sub-GHz ISM bands have duty cycles. WiFi is unlicensed but power-limited. 5 GHz outdoor use is regulated differently than indoor in most countries. Amateur radio packet links require a license.
  • What the threat model can be. If your physical layer is local radio, an adversary needs to be in radio range to interfere. If your physical layer is the public Internet, the threat model is the entire Internet and you must encrypt everything.
  • What scale is achievable. Flooding works on a 30-node LoRa mesh. It does not work on a 30,000-node WiFi mesh. The same routing protocol behaves differently at different scales because the physical layer determines how many nodes are in earshot at once.
  • What the network can do during a disaster. A LoRa mesh can come up when the ISP is down. A Tailscale network cannot. This is a feature of both — they are different tools — but it is downstream of the physical layer.

What to take from this chapter

The next time you read a mesh-networking project’s pitch, ask:

  1. What’s the substrate?
  2. What’s the regulatory regime there?
  3. What’s the data rate at the long-range setting?
  4. What’s the duty cycle, if any?
  5. Who can join — does the user need to buy hardware, or do they already have what they need?

You will learn more from those five questions than from most landing pages. With those answered, the rest of the project’s design decisions stop looking arbitrary and start looking like the consequence of a substrate the engineers chose at the beginning and have been paying for ever since.

The next chapter is the routing problem itself: given that you have decided what to send bits over, how do you decide which node forwards which packet, when nobody has the whole map?

The Routing Problem

You have a population of nodes. Each node can hear some subset of the others. There is no central authority. There is no DHCP server, no BGP table, no operator at the top of the building dictating routes. A packet originates at Node A and needs to reach Node F. Nobody — including A and F — has the complete map. How does the packet get there?

That is the routing problem. It is the hard problem in mesh networking. It is harder than people who have only ever worked in datacenters expect, and once you understand why, the rest of the field’s design decisions stop looking arbitrary.

Why it’s hard

In a traditional IP network, routing is solved by infrastructure that does not exist in a mesh. Your laptop has an ARP table for the local LAN, a default gateway for everything else, and a DNS server it trusts. The default gateway has a routing table provisioned by an operator. The operator’s router runs BGP with their upstream’s router. There is a hierarchy. Each layer trusts the layer above to know more than it does, and at the top of the hierarchy is a small number of organizations (registries, root DNS operators, transit providers) that we have collectively decided to trust.

A mesh has none of that. There is no hierarchy. Any node can leave at any time. New nodes appear unannounced. Links come up, links go down — especially over radio, where a node’s reachability depends on weather, time of day, and whether someone moved a couch in front of an antenna. The set of nodes you can hear changes minute to minute.

So the routing protocol has to:

  1. Discover the topology as it changes, with no central authority publishing it.
  2. Decide who forwards what without consuming so much bandwidth in coordination that there is no bandwidth left for actual data.
  3. Scale to the network’s growth without every node having to remember a routing entry for every other node.
  4. Tolerate packet loss, link loss, and node loss as ordinary, expected events.

These constraints are in tension with each other. A protocol that discovers the topology aggressively burns bandwidth in coordination overhead. A protocol that conserves bandwidth misses topology changes and routes packets badly. A protocol that scales to thousands of nodes can’t afford to have every node know about every other node, so it must pick some nodes to know and some to forget — but that means some packets won’t have routes and the question becomes what to do then.

The history of mesh routing protocols is the history of different positions in this tradeoff space.

Strategy 1: Just flood it

The simplest possible routing protocol is no routing at all: every node rebroadcasts every packet it hears, with a TTL to prevent infinite loops and a cache to avoid rebroadcasting the same packet twice. This is flooding.

Flooding has the enormous virtue of being trivially simple to implement and trivially robust to topology change. There is no routing table. There is no protocol negotiation. A new node joins, it starts hearing packets, it starts forwarding them. A node dies, the others don’t notice or care.

Flooding has the enormous vice of consuming bandwidth proportional to the network size — every packet uses the airtime of every node within transitive earshot, regardless of whether the packet was destined for any of them. On a 10-node mesh, this is fine; the airtime cost of a flood is small. On a 100-node mesh, flooding starts to cost. On a 1000-node mesh, flooding is essentially a self-DDoS. The medium fills up with rebroadcasts, the actual messages drown in the noise, and the network’s effective capacity collapses.

This is not theoretical. Meshtastic uses flooding (with rebroadcast suppression and a small TTL), and the practical scaling ceiling for a single-channel Meshtastic mesh in a region is roughly 100 nodes before the flood-traffic begins crowding out the messages. Some communities have pushed it higher with careful channel allocation and node-density management; many have hit the wall at much lower numbers. The wall is structural. It is not a software bug that the next release will fix. Chapter 4 will return to this in detail.

Flooding is what most mesh-networking projects start with, because it’s the simplest thing that works, and most projects don’t get big enough to feel the pain. The ones that do — and that survive past the pain — replace it with something else.
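The flood mechanic is simple enough to simulate in a few lines. This is a toy, not any project’s actual implementation: a dedup cache plus a hop limit, over an arbitrary adjacency map, counting how many transmissions one message costs.

```python
from collections import deque

def flood_cost(adjacency, source, hop_limit=7):
    """Simulate one flooded packet: every node that hears it for the first
    time rebroadcasts it once (dedup cache), until the hop limit runs out.
    Returns (transmissions, nodes_reached): the airtime bill for one message.
    """
    seen = {source}
    transmissions = 0
    queue = deque([(source, hop_limit)])
    while queue:
        node, hops = queue.popleft()
        if hops == 0:
            continue
        transmissions += 1  # this node keys up once
        for neighbor in adjacency[node]:
            if neighbor not in seen:
                seen.add(neighbor)
                queue.append((neighbor, hops - 1))
    return transmissions, len(seen)

# A 6x6 grid of nodes: one message from a corner costs roughly one
# transmission per node in the whole mesh, whoever it was addressed to.
grid = {(x, y): [(x + dx, y + dy) for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
                 if 0 <= x + dx < 6 and 0 <= y + dy < 6]
        for x in range(6) for y in range(6)}
print(flood_cost(grid, (0, 0), hop_limit=20))
```

The output makes the scaling argument concrete: with a generous hop limit, every one of the 36 nodes transmits once per message, so airtime cost grows linearly with network size no matter who the message was for.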

Strategy 2: Real routing tables (distance-vector and link-state)

The classical alternatives to flooding come from the wired-network world. Distance-vector protocols (RIP, BGP) have each node tell its neighbors what destinations it can reach and at what cost; routes propagate hop by hop, each node updates based on what it hears. Link-state protocols (OSPF, IS-IS) have each node flood a description of its links (not destinations); every node accumulates the floods and computes the topology locally.

Both have been adapted to mesh contexts. OLSR (Optimized Link State Routing) was the canonical link-state protocol for mobile ad-hoc networks; it is still around, used in Freifunk and other community wireless deployments, but it is no longer the bleeding edge. Babel is a more modern distance-vector protocol designed for both wired and wireless mesh, with good loop-avoidance properties; it ships in OpenWrt and is genuinely good at what it does. BATMAN-adv (Better Approach To Mobile Ad-hoc Networking) is a distance-vector-ish protocol that operates at layer 2 — it makes a multi-hop wireless mesh look like a single Ethernet segment to everything above it, which is delightful for some use cases and limiting for others.

These protocols work well at medium scale — tens to low thousands of nodes — and they have the major advantage of producing real routing tables that look like the routing tables you already understand. They have the disadvantages of (a) requiring nodes to maintain state proportional to the network size (every node needs to know about every reachable destination, which is what the routing table is), and (b) having to expend bandwidth re-converging when the topology changes.

The state cost is the killer. Routing-table size becomes the dominant constraint as networks grow. A mesh of a million nodes cannot have every node carrying a routing table with a million entries — not because the memory cost is impossible (a million IPv6 routes is a few hundred MB, large but not catastrophic) but because the bandwidth to keep that table converged on every node, in every node’s view of the network, on a substrate that may be 250 bit/s of LoRa, is utterly infeasible. The protocol overhead alone exceeds the link capacity.
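The convergence-bandwidth claim is easy to check with rough numbers. The 32-byte-per-route encoding is an assumption for illustration; real protocols vary:

```python
# How long does one full transfer of a million-entry routing table
# take over the slowest LoRa configurations from chapter 2?

ROUTES = 1_000_000
BYTES_PER_ROUTE = 32        # assumed: compact prefix + metric encoding
LINK_BPS = 250              # slow-LoRa link capacity, bits per second

table_bits = ROUTES * BYTES_PER_ROUTE * 8
seconds = table_bits / LINK_BPS
print(f"one full table transfer: {seconds / 86400:.0f} days")
```

A single full synchronization takes on the order of weeks, before counting the continuous updates needed to track churn — hence "utterly infeasible."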

This is the wall that more ambitious projects are trying to climb.

Strategy 3: Spanning-tree routing (Yggdrasil)

Yggdrasil takes a different approach. It builds a spanning tree over the network — a single, agreed-upon tree where every node has exactly one parent (except the root) — and then assigns each node a coordinate based on its position in the tree. Routing a packet to a destination becomes: look at the destination’s coordinate, look at your own, and forward the packet toward whichever neighbor’s coordinate is closer to the destination’s.

The brilliance here is that the routing decision is made locally, with no global state. Each node knows only its own coordinate and the coordinates of its immediate neighbors. The routing table size is bounded by the number of neighbors, not by the network size. A node can join a network of a billion others and still have a small routing table.

The cost is that spanning-tree routing produces suboptimal paths. The tree may route a packet between two nodes that are physically close but on different branches of the tree through the root, which is a long detour. Yggdrasil mitigates this with link-local shortcuts and other engineering, but the fundamental property remains: spanning-tree routes are not always shortest-path.
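A minimal sketch of the coordinate scheme makes the detour concrete. Representing coordinates as root-to-node paths is a simplification of Yggdrasil's actual protocol, but it captures the property being discussed:

```python
# Tree-coordinate routing: distance between two nodes is computable
# from their coordinates alone -- up to the common ancestor, then down.
# No node needs any global routing table to evaluate this.

def tree_distance(a, b):
    """Hops between two tree coordinates (tuples of child indices)."""
    common = 0
    for x, y in zip(a, b):
        if x != y:
            break
        common += 1
    return (len(a) - common) + (len(b) - common)

left  = (1, 5)   # a leaf two hops down the root's first branch
right = (2, 7)   # a leaf two hops down the root's second branch

# Physically these two leaves might be radio neighbors, but the tree
# routes their traffic through the root: 2 hops up + 2 hops down.
print(tree_distance(left, right))   # 4
```

Forwarding is then local and greedy: hand the packet to whichever neighbor's coordinate minimizes this distance to the destination, so per-node state stays bounded by neighbor count.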

For a network where shortest path matters less than scalability — which is largely true of the use case Yggdrasil is built for, an IPv6 overlay over the public Internet — the tradeoff is acceptable. Chapter 7 takes Yggdrasil seriously.

Strategy 4: DHT-based location lookup (cjdns)

cjdns took a different swing at the same problem. The idea was to use a Distributed Hash Table (think Kademlia, the algorithm that made BitTorrent’s tracker-less mode work) to look up the location of a destination — which neighbor to forward to in order to make progress — based on the destination’s cryptographic identity. Cryptography wasn’t an add-on; it was load-bearing. Your address was the hash of your public key, which made spoofing impossible and made the network self-validating in a way that earlier protocols were not.
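The Kademlia primitive underneath is small enough to show. Eight-bit IDs are used here for readability; real identifiers are 128 bits or more:

```python
# Kademlia's core trick: "distance" between two identifiers is their
# XOR, and a lookup always forwards to the known node whose ID is
# XOR-closest to the target, making monotonic progress toward it.

def xor_distance(a, b):
    return a ^ b

known_nodes = [0b00010110, 0b01100001, 0b11010010, 0b11011101]
target      = 0b11011000

next_hop = min(known_nodes, key=lambda n: xor_distance(n, target))
print(f"forward toward {next_hop:#010b}")
```

Each greedy step roughly halves the remaining ID space, giving logarithmic lookups when the node table is fresh. High churn stales the table faster than it can be repaired, which is the failure mode discussed next.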

This was beautiful. It did not work, at least not at the scale its proponents hoped. Hyperboria, the cjdns network of the early 2010s, hit a peak of perhaps a few thousand nodes and a long, slow decline thereafter. Chapter 8 covers what happened and why; for now, the relevant lesson for this chapter is: the DHT-based location-lookup approach taught the field that cryptographic identity at the routing layer is a real idea worth pursuing, but that DHT lookups in a high-churn environment have failure modes that the original designers underestimated. Reticulum learned from this. Yggdrasil learned from this. The lessons stuck even when the network didn’t.

Strategy 5: The Reticulum approach

Reticulum (chapter 6) takes a position that is, in retrospect, obvious but took a long time for the field to articulate. The core insight is: routing is path discovery, not route maintenance. You don’t try to keep a converged routing table. You don’t flood announcements. When a node wants to talk to a destination it has not talked to before, it sends a path request into the network. Nodes that know how to reach the destination respond with path announcements describing the route. The originating node caches the answer for as long as it remains valid.
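The discover-then-cache pattern can be sketched in a few lines. This is the shape of the idea, not Reticulum's actual wire protocol; the names and TTL are hypothetical:

```python
# Toy path-discovery cache: paths are requested on first use, cached
# with a validity window, and re-discovered only when the entry goes
# stale -- no flooding, no continuously maintained routing table.

import time

class PathCache:
    def __init__(self, ttl_seconds=600):
        self.ttl = ttl_seconds
        self.paths = {}          # destination -> (next_hop, learned_at)

    def next_hop(self, dest, discover):
        """Return a cached next hop, issuing a path request only on miss."""
        entry = self.paths.get(dest)
        if entry and time.monotonic() - entry[1] < self.ttl:
            return entry[0]      # cache hit: zero network traffic
        hop = discover(dest)     # cache miss: one path request into the mesh
        self.paths[dest] = (hop, time.monotonic())
        return hop

requests = []

def lookup(dest):
    requests.append(dest)        # count real path requests sent
    return "gateway-7"           # hypothetical next hop learned from the mesh

cache = PathCache()
cache.next_hop("dest-a", lookup)   # triggers one path request
cache.next_hop("dest-a", lookup)   # served from cache
print(len(requests))               # 1
```

The bandwidth cost is paid once per destination per validity window instead of continuously for the whole network — the property that makes the approach viable on slow substrates.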

This is, broadly, the architecture used in opportunistic and delay-tolerant networking research. Reticulum makes it production-grade by:

  • Pairing every node with a cryptographic identity, so path announcements are signed and unforgeable.
  • Building everything on a small set of carefully chosen primitives (Curve25519, Ed25519, SHA-256, AES-128) so the cryptographic costs are predictable.
  • Treating every interface — LoRa, WiFi, serial, I2P, TCP — as just another carrier for the same packet format, so a single network can span all of them.

The result is a routing protocol that scales to networks where many strategies above wouldn’t, on substrates where many strategies above wouldn’t run at all. Reticulum is not magic — it has its own tradeoffs (path setup latency is non-trivial; the network is not optimized for high-bandwidth bulk transfer) — but it is a serious engineering response to the routing problem that takes the constraints of low-bandwidth, high-churn substrates seriously from the start.

Where the projects sit in the design space

| Project | Routing strategy | Scaling regime | Notes |
|---|---|---|---|
| Meshtastic | Flooding with rebroadcast suppression | Up to ~100 nodes | Simple, robust, doesn’t scale |
| MeshCore | Multi-hop with explicit forwarding | Hundreds to low thousands | Better than flood; embedded |
| Reticulum | Path-discovery, signed announcements | Designed for arbitrary scale | Cryptographic identity at the core |
| Yggdrasil | Spanning-tree with coordinate routing | Designed for global scale | Suboptimal paths; small tables |
| cjdns | DHT-based location lookup | Theoretically unlimited; in practice did not scale | Lessons survived the network |
| Babel / OLSR / BATMAN-adv | DV / LS adapted for wireless | Tens to low thousands | The classical answer |
| Tailscale et al. | Coordinator-mediated peer discovery | Limited by coordinator | Different problem entirely |

The remaining chapters of parts II and III work through the rows of this table. The point of this chapter is that the right-hand columns are the consequence of the routing-strategy column, not an arbitrary choice. Each project’s strengths and weaknesses are downstream of the routing strategy it picked, which was downstream of the substrate it picked, which was downstream of the user it was trying to serve.

What to take from this chapter

You should now be able to:

  • Explain why flooding is the simplest mesh routing strategy and why it doesn’t scale past ~100 nodes.
  • Explain why distance-vector and link-state protocols hit a wall in routing-table size as networks grow.
  • Recognize spanning-tree routing (Yggdrasil) and DHT-based location lookup (cjdns) as two distinct attempts to escape that wall, with different tradeoffs.
  • Recognize Reticulum’s path-discovery model as a third attempt, currently looking like the most promising of the three.
  • Read a project’s marketing copy and ask, “what’s the routing strategy, and does it actually scale to the network size you’re pitching?”

We are now done with the foundations. The next eight chapters are the projects themselves, starting with the one most readers should install first: Meshtastic.

Meshtastic

If you have a weekend, $50, and a vague sense that you should at some point feel mesh networking in your hands, this is the chapter that gets you there. Meshtastic is the entry point. It is the project that, for most readers of this book, will be the first time the abstract idea of “off-grid radio mesh” turns into two LED-blinking devices on a desk that send each other text messages with no Internet involved.

It is also a project that gets several things wrong, has a contentious recent history with parts of its own community, and will not scale to the regional network its marketing copy implies. This chapter will be honest about all of that. The honesty is not a reason not to start with Meshtastic; it is a reason to start with Meshtastic with eyes open.

What it is

Meshtastic is an open-source project that turns inexpensive LoRa radio boards into a multi-hop messaging mesh. You buy a small piece of hardware (typically $25–60 for the bare board, $30–150 for a starter kit including antenna, USB cable, and case), flash it with Meshtastic firmware, pair it to a phone via Bluetooth, and you have a node. Two nodes within radio range can exchange text messages, location pings, and small structured packets without any infrastructure. Three or more nodes form a mesh: messages between non-adjacent nodes are forwarded by the nodes in between.

The project began in 2019, hit broad awareness in 2022–2023 as the maker and prepper communities discovered it, and as of 2026 has by far the largest user base of any open-source mesh-networking project — well into six figures of nodes deployed worldwide, regional communities in dozens of countries, and a real (not just theoretical) presence in disaster-response scenarios. The Portugal-Spain blackout of April 2025 was the most prominent recent example: when the power grid went down across most of the Iberian peninsula, Meshtastic networks stayed up because they didn’t depend on the grid in the first place, and reports of people coordinating supplies and welfare checks over the local mesh circulated for days.

What you’ll buy

The shortest path to a working two-node Meshtastic setup in 2026:

| Item | Approximate price (USD) | Notes |
|---|---|---|
| Heltec LoRa V3 (× 2) | $20–25 each | The default starter board. ESP32-S3, OLED screen, USB-C, integrated battery connector. |
| 868/915 MHz antenna (× 2) | $5–10 each | Match the frequency to your region. Don’t power on without an antenna. |
| 2000 mAh LiPo battery (× 2) | $8–12 each | Optional but worth it; otherwise tethered to USB. |
| USB-C cable (× 2) | Already in your drawer | For flashing and charging. |
| 3D-printed case (× 2) | Free if you have a printer; $5–15 if you don’t | The bare board is fragile. |

You’ll spend $80–150 for the pair. You can absolutely spend more — RAK Wireless’s WisBlock platform is more polished, the LilyGo T-Beam adds GPS and is a nicer all-in-one, and various small companies sell production-finished nodes for $100–250 — but two Heltec V3s will get you the experience.

Pick the right frequency for your region (chapter 2 has the table). A US-region radio (915 MHz) will not work in Europe, and vice versa. The vendors mostly ship region-specific SKUs; check before you buy.

What an evening looks like

Setting up Meshtastic in 2026 has gotten genuinely friendly. The current sequence:

  1. Plug board into computer. Visit the Meshtastic web flasher at flasher.meshtastic.org in a recent Chrome or Edge (it uses Web Serial). Select your board model. Select the latest stable firmware. Click flash. The web flasher does the rest.
  2. Install the phone app. Meshtastic has Android and iOS apps. Pair to the board over Bluetooth. The app walks you through naming the node, picking a region, and joining or creating a channel.
  3. Repeat for the second board. Both boards on the same channel will discover each other within seconds.
  4. Send a message. From the phone connected to board A, send a text. Board B’s phone receives it. The OLED screens on both boards light up. The LEDs blink. You will, if you are like most people who do this for the first time, grin involuntarily.

That’s it. You have a mesh network. Two nodes is technically the smallest mesh, and it does not yet feel meaningful. Where it gets interesting is when you add a third board, separate the three by enough distance that A cannot directly hear C, and watch B forward messages between them. Or you take one board outside, put a friend’s board on a hill across town, and discover that two nodes 4 km apart with line of sight are still in radio contact when nothing else is.

A worked example: a config snippet

Meshtastic’s configuration is via the phone app, the web client, or the meshtastic CLI. Here’s the CLI version of setting up a node — useful because it shows what’s actually being configured under the hood.

# Install the CLI
pip install --upgrade meshtastic

# Connect to a node over USB and read its config
meshtastic --info

# Set the region (this is required before transmit)
meshtastic --set lora.region US   # or EU_868, CN, JP, ANZ, KR, IN, etc.

# Set a long-range, slow channel preset
meshtastic --set lora.modem_preset LONG_FAST

# Name the node
meshtastic --set-owner "AlpineRelay" --set-owner-short "ALPN"

# Send a test message on the primary channel
meshtastic --sendtext "first packet over the mesh"

The LONG_FAST preset is Meshtastic’s default in 2026: spreading factor 11, 250 kHz bandwidth, achieving roughly 1 kbit/s of payload throughput at a range that comfortably exceeds 5 km in suburban terrain. There are slower presets (LONG_SLOW, VERY_LONG_SLOW) for genuinely far-apart nodes, and faster ones (MEDIUM_FAST, SHORT_FAST) for dense urban deployments where range is less of an issue.
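The "roughly 1 kbit/s" figure follows from the standard LoRa raw-bit-rate formula; here is a quick check, assuming the common 4/5 coding rate:

```python
# LoRa raw bit rate: Rb = SF * (BW / 2**SF) * CR.
# For LONG_FAST-class parameters: SF 11, 250 kHz bandwidth, CR 4/5.

def lora_bitrate(sf, bw_hz, coding_rate=4/5):
    """Raw LoRa data rate in bits per second."""
    return sf * (bw_hz / 2**sf) * coding_rate

rb = lora_bitrate(sf=11, bw_hz=250_000)
print(f"{rb:.0f} bit/s raw")
```

This yields about 1074 bit/s before preamble, headers, and duty-cycle limits are counted; actual payload throughput is somewhat lower.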

What it gets right

Immediate usability. Most projects in this book require you to read documentation for a couple of hours before you have anything to play with. Meshtastic gets you to “I sent a message over a radio with no Internet involved” in under thirty minutes from opening the box. That matters. Most readers of this book are not going to commit to a project that takes a weekend just to bootstrap.

Hardware ecosystem. A dozen vendors ship Meshtastic-compatible boards. Antennas, cases, batteries, GPS modules, solar add-ons — there is a real supply chain. You will not be soldering things from scratch.

Community. Discord, Reddit, regional Telegram groups. Real numbers of real users. When something goes wrong with your node, someone has hit the same problem before. This is unusual in this space.

Disaster-deployment track record. The Portugal-Spain blackout of April 2025 is the headline example, but Meshtastic networks have come up during hurricanes (US gulf coast, multiple events), earthquakes (Türkiye, 2023, smaller scale), and protests where cellular networks were either down or assumed-untrustworthy. The track record is real, not aspirational.

Phone integration. Bluetooth pairing to an Android or iOS app means a non-technical user can interact with the mesh through a familiar messaging UI. This is a bigger deal than it sounds. The mesh is only useful if regular humans can use it.

What it gets wrong

Flooding-based routing. As discussed in chapter 3, Meshtastic’s routing is essentially flood-and-suppress: every node rebroadcasts every message it hears (deduplicated by message ID, with a small TTL). This is robust at small scale and degrades hard at large scale. The community-reported wall in 2026 is around 100 nodes per channel in a region; some deployments push higher with careful channel allocation, others hit the wall earlier when nodes are densely packed. The project has experimented with smarter routing (the “next hop” routing introduced in firmware ~2.5 was a step toward less wasteful forwarding) but the fundamental architecture remains flood-shaped.

Encryption was retrofitted, not designed in. Meshtastic encrypts channel traffic with AES-256, but the original design did not have encryption at the core, and this shows. Channel keys are pre-shared symmetric keys with weak primary-channel defaults; direct messages between users use a derived key with limitations that have been documented and partially addressed across firmware versions. Compared to Reticulum, which has cryptographic identity at the core of the protocol, Meshtastic’s encryption story is “good enough for the casual threat model, not designed for adversaries who care.”

For most uses — local hobby networks, prepper coordination, hiking groups — this is fine. For uses where the threat model is serious (journalists, activists, anyone who would be in trouble if traffic were intercepted) Meshtastic is not the right tool, and the project documentation has gotten more honest about saying so.

The CLA controversy. In 2024–2025 the Meshtastic project introduced a Contributor License Agreement that requires contributors to grant a broad license to their contributions to the project’s umbrella organization. A non-trivial portion of the contributor community pushed back, with concerns ranging from “this enables future relicensing” to “this is governance creep.” Some long-time contributors stopped upstreaming work. Some have migrated to MeshCore (chapter 5), which has explicitly positioned itself as not requiring a CLA. The Meshtastic project remains GPLv3-licensed in 2026, but the governance dispute is real and the contributor base is more fragmented than it was in 2023. Readers evaluating which project to invest long-term effort in should be aware.

Regional fragmentation. This is not Meshtastic’s fault — it is a consequence of the LoRa physical layer, as chapter 2 covered — but it bears repeating: a Meshtastic mesh in Asia cannot directly interoperate with a Meshtastic mesh in Europe or the Americas. The regional bands are different. The “global Meshtastic mesh” exists only as a federation of regional meshes connected over Internet bridges (typically MQTT). This is fine, but it is not “the network is global.”

Per-node state as the network grows. Meshtastic’s flood-with-suppression requires each node to keep a cache of recently seen message IDs so it can suppress rebroadcasts. As node density increases, that cache and the duty-cycle budget pull against each other: more traffic means the cache must grow to suppress reliably, while the airtime left for useful messages shrinks, and the network’s effective capacity decreases.

License and governance

GPLv3, with the CLA discussed above. The project is governed by the Meshtastic Solutions LLC umbrella, which holds the trademark and runs the official infrastructure (web flasher, MQTT bridges, default channel definitions). The firmware repository is at github.com/meshtastic/firmware; the Android app is github.com/meshtastic/Meshtastic-Android; the protobuf definitions (which are the actual interoperability surface) are github.com/meshtastic/protobufs.

Forks exist. Some are technical (alternative firmware variants targeting specific hardware), some are governance-driven (forks of the firmware that pre-date the CLA). MeshCore, discussed in chapter 5, is not a Meshtastic fork in the strict sense, but it occupies adjacent ground and has absorbed contributors from both the technical-disagreement camp and the governance-disagreement camp.

Why it’s still the right first recommendation

After all of the above, the recommendation in chapter 12 will still be: start with Meshtastic. Here’s why.

The thing you want to feel, the first time you do this, is the basic phenomenon: that two $30 boards can talk to each other over kilometers of distance with no Internet involved. Meshtastic gets you to that feeling in an evening. Reticulum is a more serious project, but the bootstrap cost is higher. MeshCore is more carefully engineered, but the user experience is less polished. Yggdrasil is a different category entirely. For the first contact, Meshtastic is right.

Once you have felt the phenomenon, you can be more thoughtful about where to invest from there. Chapter 12 walks through that decision tree explicitly. But the entry door is Meshtastic, and the entry door has been thoughtfully designed even if some of the rooms beyond it are crowded and some of the foundation is the wrong shape for the building they’re now trying to build.

Where to go next

  • Official docs. meshtastic.org is the canonical reference. The “Getting Started” path is genuinely good.
  • Web flasher. flasher.meshtastic.org — Web Serial flashing in the browser, the easiest way to get firmware on a board.
  • Hardware index. The “Supported Hardware” page on meshtastic.org covers compatible boards and their region restrictions; check it before buying.
  • Erethon’s comparison post. Greek engineer Erethon wrote a widely-circulated 2024 piece comparing Meshtastic, MeshCore, and Reticulum that captured the prevailing community recommendation: start with Meshtastic, adopt MeshCore for the upper end of network size, invest in Reticulum for serious work. That framing has held up well into 2026 and the rest of part II of this book is, in effect, an expansion of it.
  • Privacy Guides. The Privacy Guides community discussion on Meshtastic versus alternatives, particularly around encryption and the CLA, is worth reading if those concerns are load-bearing for you.

What to take from this chapter

You should now be able to:

  • Buy two boards, flash them, pair them to phones, and exchange messages over a mesh you built yourself, in an evening.
  • Explain why Meshtastic’s flooding-based routing limits practical scale to roughly 100 nodes per region.
  • Name the major weaknesses (encryption design, CLA governance, regional fragmentation) without being dissuaded from starting here anyway.
  • Hold an opinion about when to graduate from Meshtastic to MeshCore or Reticulum, and why someone would.

The next chapter is MeshCore, the engineered alternative for when Meshtastic’s wall is the wall you’ve hit.

MeshCore

If chapter 4 introduced Meshtastic as the entry door, this chapter introduces the door past the entry door. MeshCore is what serious community deployments adopt when they’ve outgrown Meshtastic’s flood-based routing and they’re not yet ready to switch to a different category of network entirely. It is a more carefully engineered project, less polished as an end-user product, and lives at a slightly different point in the stack.

It is also a smaller project, with a smaller community, and that is part of the honest tradeoff.

What it is

MeshCore is a C++ embedded library for LoRa-based mesh networking. The phrase “embedded library” is doing real work in that sentence. Where Meshtastic ships as a complete firmware-plus-app product — you flash a board, you pair a phone, you have a working messaging system out of the box — MeshCore ships as a building block. It provides a multi-hop packet routing layer, a configurable security model, and the primitives to build mesh-shaped applications on top. The applications themselves are mostly the user’s responsibility, though several reference applications and companion projects exist.

This positioning is important. MeshCore is not trying to be Meshtastic-but-better in the consumer-product sense. It is trying to be the substrate on which someone might build a Meshtastic-better in the consumer-product sense, or might build something else entirely — a sensor network, a community alerting system, a custom-application mesh — without inheriting Meshtastic’s architectural choices.

What it gets right

Multi-hop packet routing that isn’t flooding. This is the headline feature. MeshCore implements an explicit forwarding model: nodes maintain neighbor information and forward packets along discovered paths, with deterministic loop avoidance and per-packet routing decisions. It is closer in spirit to the distance-vector and link-state protocols of chapter 3 than to Meshtastic’s flood-and-suppress. The practical consequence is that the airtime cost of a packet does not scale linearly with the network’s node count the way Meshtastic’s does. A 200-node MeshCore mesh remains usable in a way a 200-node Meshtastic mesh, on the same hardware, in the same density, generally does not.

The cost is that MeshCore must do real route discovery and maintenance — the routing-table and protocol-overhead concerns from chapter 3 apply — but the project has been designed around making that overhead bounded and predictable rather than letting it grow unbounded.

Configurable security model. MeshCore lets the application decide how strong the security model needs to be. End-to-end encryption is supported (X25519 key exchange, AES-GCM payloads in current implementations), and the routing layer doesn’t require all packets to be encrypted with the same key, which means you can build applications with per-conversation keys, group keys, broadcast channels, and signed-but-not-encrypted public announcements all on the same mesh. Compared to Meshtastic’s “channels have a shared key, direct messages have a derived key” model, MeshCore gives the application more rope and assumes the application knows what it’s doing with it.

For most users this is a tradeoff in the wrong direction — most users do not want to make security decisions and do not benefit from the additional flexibility. For users building something that ships to other users, it is the right tradeoff: the security model is the application’s, not a one-size-fits-all default.

Embedded-friendly footprint. The library is C++, runs on the same ESP32 / nRF52 microcontroller class of hardware as Meshtastic, and is designed to be linked into a custom firmware rather than requiring you to use the project’s reference firmware. A MeshCore-based product can be a single-purpose appliance — a moisture sensor, a panic button, a livestock tracker — with the mesh layer embedded and no other Meshtastic-shaped baggage.

No CLA. This is governance, not engineering, but it has driven real adoption in 2025–2026. Contributors who left Meshtastic over the CLA have, in non-trivial numbers, landed on MeshCore. The license terms are conventional permissive (MIT-shaped, depending on which subcomponent), the contribution process is git-shaped, and the project’s stated stance is that it intends to remain that way. Whether that stance holds in five years is a question the reader will have to evaluate when five years have passed; in 2026 it is one of the project’s draws.

What it gets wrong (or, more honestly, hasn’t gotten right yet)

The user experience is rough. “Roll your own application on top of an embedded library” is a developer’s product, not an end user’s. There is no equivalent of Meshtastic’s pair-a-phone-and-message-your-friend onboarding. Reference applications exist but are not as polished. If you are reading this book to play with mesh networking, MeshCore should not be your first stop. If you are reading it to build something, MeshCore is on the shortlist.

Smaller community. Meshtastic has six figures of nodes deployed and a Discord with thousands of active members. MeshCore has hundreds-to-low-thousands of users and a much smaller forum and Discord presence. Both are real numbers, but the difference matters when you hit a problem at 2 AM and need someone to have hit it before you.

Fewer pre-built integrations. Phone apps, MQTT bridges, web dashboards — Meshtastic has a real ecosystem of these. MeshCore has fewer, and they are less polished. If your project depends on integrating with the existing world of Meshtastic-shaped tools, switching to MeshCore costs you those integrations.

Documentation is the documentation of an engineering project, not a consumer product. This is fine if you came in expecting it. It is jarring if you are arriving from Meshtastic’s user-facing docs and expecting the same level of polish.

A worked example: a minimal MeshCore packet send

The MeshCore API in C++ at the time of writing is roughly shaped as follows. This is intentionally a sketch — the project’s APIs evolve and the canonical reference is the project’s own documentation — but the shape will give a feel for what working with the library looks like.

#include <MeshCore.h>

MeshCore mesh;

void setup() {
  Serial.begin(115200);

  // Initialize the LoRa radio (board-specific pins)
  mesh.begin(LORA_CS, LORA_IRQ, LORA_RST, LORA_BUSY);

  // Set the region and modulation parameters
  mesh.setRegion(MeshCore::REGION_US_915);
  mesh.setSpreadingFactor(11);
  mesh.setBandwidth(250e3);

  // Generate or load this node's identity (X25519 keypair)
  mesh.identity().load_or_generate("/identity.key");

  // Register a handler for received packets
  mesh.onPacket([](const Packet &p) {
    Serial.printf("From %s: %s\n",
                  p.sender_id().to_hex().c_str(),
                  p.payload_as_string().c_str());
  });
}

void loop() {
  static uint32_t last_send = 0;
  if (millis() - last_send > 60000) {
    // Send a broadcast announcement once a minute
    Packet p;
    p.set_destination(MeshCore::BROADCAST);
    p.set_payload("hello from node 0x" + mesh.identity().short_hex());
    mesh.send(p);
    last_send = millis();
  }

  mesh.loop();
}

The texture is recognizable to anyone who has done embedded development: there’s a setup(), there’s a loop(), there’s a callback for received packets, the radio is configured up front, the identity is persisted to flash. What is not there — and what makes MeshCore worth caring about — is the flooding logic, the duty-cycle bookkeeping, the route discovery, and the cryptographic key management. Those are inside the library. The application code is the application code.

Where MeshCore fits in the stack

Think of the LoRa-mesh ecosystem in 2026 as roughly:

  • Reticulum — the most ambitious project, transport-agnostic, cryptographic identity at the core, designed for arbitrary scale. Chapter 6.
  • MeshCore — engineered embedded library, multi-hop routing, configurable security, smaller community.
  • Meshtastic — polished consumer-product experience, flooding-based routing, large community, the entry door.

These are not strictly competitive — a serious deployment might run Meshtastic for the consumer-facing nodes, MeshCore for the custom-firmware appliances, and Reticulum for the gateway nodes that bridge to the rest of the world. But for a single weekend project, the choice between them is real, and chapter 12 has the decision tree.

When to actually pick MeshCore

The honest cases are narrow but real:

  1. You are building a product, not running a hobby network. You have a specific application in mind — sensor telemetry, asset tracking, custom messaging — and you want a mesh substrate without inheriting an entire opinionated firmware stack.
  2. You are running a community deployment that has outgrown Meshtastic. The 100-node wall is a real wall, you’ve hit it, you need a more disciplined routing protocol, and you are willing to give up some of Meshtastic’s UX polish to get there.
  3. You care about CLA-free governance and want your contributions to land in a project whose contribution model you’re comfortable with for the long term.

If none of those describe you, Meshtastic for play and Reticulum for serious work are likely the better picks, with MeshCore as a project to watch rather than one to invest in this weekend.

License and project status

MeshCore is permissively licensed (MIT for the core; component licenses vary). As of 2026 the project is active — regular commits, recent releases, an engaged maintainer team — but it is genuinely smaller than Meshtastic. “Smaller” is not “dormant.” It is “fewer hands and slower releases.” Plan accordingly.

Repository: github.com/meshcore-dev/MeshCore (canonical at time of writing; verify before relying on it for a long-term plan).

What to take from this chapter

You should now be able to:

  • Position MeshCore in the LoRa-mesh stack relative to Meshtastic and Reticulum.
  • Explain why MeshCore’s explicit-forwarding routing is structurally different from Meshtastic’s flooding and what that buys you.
  • Recognize the cases where MeshCore is the right pick and the cases where it is not.
  • Have an opinion about which way you’d lean if a community deployment you cared about was hitting Meshtastic’s scaling wall.

The next chapter is the project that, more than any other in this book, is the one to invest in if you are going to invest seriously: Reticulum.

Reticulum

If this book has a thesis, it is the one in this chapter. Reticulum is the most important mesh-networking project for a 2026 reader to understand. Not because it has the largest user base — it doesn’t — but because it is the only project surveyed in this book that takes seriously, simultaneously, the things every project ought to: cryptographic identity, transport-agnostic routing, end-to-end encryption by default, and operation on substrates that are slow, lossy, and unreliable.

The recommendation that has emerged in the mesh-networking community over 2024–2026 — start with Meshtastic, adopt MeshCore for the upper end of network size, invest in Reticulum for serious work — has Reticulum at the end of that arrow for a reason. This chapter is why.

What it is

Reticulum is a networking stack. Not a router, not an application, not a protocol bound to a particular substrate — a stack, in the same sense that TCP/IP is a stack. It sits at roughly the layer that IP sits at, but with very different design choices, motivated by very different constraints.

The core design commitments:

  • Cryptographic identity at the routing layer. Every Reticulum endpoint has a cryptographic identity — a Curve25519 + Ed25519 keypair. Addressing is by hash of the public key. Identities are unforgeable, packets are signed, and the system as a whole does not depend on any external trust authority for naming.
  • End-to-end encryption by default. Every Reticulum link between endpoints is encrypted with a forward-secret key derived via X25519. Intermediate nodes route encrypted packets without seeing their contents, and without knowing — for opaque destinations — who the original sender was.
  • Transport-agnostic. A Reticulum network can run over LoRa, WiFi, serial, I2P, TCP, or anything else that can carry bytes, with a single routing layer that bridges across substrate boundaries transparently. The application sees one network. The packets travel over whichever combination of links the routing layer thinks is best.
  • Designed for slow, lossy substrates. The protocol is not retrofitted from datacenter assumptions. It assumes packet loss, high latency, and limited bandwidth as ordinary operating conditions, not edge cases.
  • MIT-licensed. Permissive, no CLA, the source is the source.

These commitments compound in ways that the project’s marketing copy can’t quite communicate without sounding overheated. The combination is what matters: cryptographic identity plus transport-agnostic routing plus end-to-end encryption plus designed-for-slow-substrates. Each one alone is interesting; together, they describe a stack that is genuinely different from what came before.
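A toy sketch makes the first commitment concrete. This is an illustration of hash-derived addressing in general, not Reticulum's exact derivation (which also mixes the destination's name into the hash):

```python
import hashlib

# Toy sketch of hash-derived addressing (NOT Reticulum's exact scheme):
# the address is a truncated hash of the public key, so no registry hands
# out addresses and nobody can claim an address without the matching key.
def address_for(public_key: bytes) -> bytes:
    return hashlib.sha256(public_key).digest()[:16]  # 128-bit address

key_a = bytes(32)           # stand-in 32-byte public keys
key_b = bytes([1] * 32)

assert address_for(key_a) != address_for(key_b)  # different keys, different addresses
assert address_for(key_a) == address_for(key_a)  # deterministic
assert len(address_for(key_a)) == 16
```

The property worth noticing: the mapping needs no allocation authority, and forging an address would require inverting the hash.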

The killer property

Here is the property that, more than any other, justifies the time investment in Reticulum.

A single Reticulum network can simultaneously run over LoRa + WiFi + I2P + serial + Ethernet, transparently bridged.

Consider what that means in practice. You have a Reticulum daemon on your laptop. It has interfaces configured for: a LoRa radio attached over USB, the local WiFi network, an I2P tunnel to the public Reticulum testnet, and a TCP socket to a friend’s home server. Your laptop’s daemon sees all of these as carriers. When an application asks Reticulum to send a packet to a destination identity, the routing layer decides — based on which paths it has discovered — which interface or interfaces to send it over. The application doesn’t know. The application doesn’t need to know.

Two consequences:

  1. The network is robust to the loss of any one substrate. If your ISP goes down, the I2P and TCP interfaces stop working, but the LoRa and WiFi interfaces keep going. If the LoRa antenna falls off, the rest keeps working. You don’t have to migrate the network; the network is already on multiple substrates.

  2. The network can bridge between substrates that no single user could individually reach. A hiker in the woods with a LoRa-only node can reach a node in a city via a chain of intermediate nodes — some LoRa, some WiFi, some Internet — without any of those nodes having to do anything other than be themselves. The bridging is the routing layer’s job.

This is what TCP/IP almost was, before pragmatic specialization down the stack made every interface require its own routing layer. Reticulum is not trying to replace TCP/IP — it runs on top of TCP/IP as one substrate among many — but it does something TCP/IP gave up on, which is presenting a unified routing layer across radically heterogeneous links.

Cryptographic primitives

Reticulum uses a small, deliberately-chosen set of cryptographic primitives:

  • Curve25519 for X25519 ECDH key exchange and Ed25519 signatures. Identities are 32-byte public keys; the project uses a hash of the public key as the routing-layer address.
  • AES-128 in CBC mode with HMAC-SHA-256 for symmetric authenticated encryption (the choice predates AES-GCM’s wide availability in low-resource environments, and the project has resisted swapping it without good reason).
  • SHA-256 for hashing throughout.
  • Forward-secret session keys for every link, with periodic rekeying.

The primitive choices are conservative. The implementation has been reviewed by community cryptographers but has not had a formal academic audit at the time of writing. For users whose threat model demands one, that is a genuine limitation. For users whose threat model is “I’d rather not have my mesh traffic readable by the next ham operator with a recording,” Reticulum’s cryptography is by far the best of the projects in this book.
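The symmetric construction described above is the classic encrypt-then-MAC pattern, which can be sketched with the Python standard library. The ciphertext below is random bytes standing in for AES-CBC output; this is a sketch of the pattern, not Reticulum's wire format:

```python
import hashlib
import hmac
import os

# Encrypt-then-MAC, sketched: the HMAC-SHA-256 tag is computed over the
# ciphertext, and the receiver verifies it in constant time BEFORE any
# decryption is attempted.
mac_key = os.urandom(32)
ciphertext = os.urandom(64)  # stand-in for real AES-CBC output

tag = hmac.new(mac_key, ciphertext, hashlib.sha256).digest()

def verify(key: bytes, ct: bytes, received_tag: bytes) -> bool:
    expected = hmac.new(key, ct, hashlib.sha256).digest()
    return hmac.compare_digest(expected, received_tag)

assert verify(mac_key, ciphertext, tag)             # intact packet passes
assert not verify(mac_key, ciphertext + b"x", tag)  # tampering is caught
```

The `compare_digest` call matters: rejecting a tampered packet in constant time avoids leaking, through timing, how much of the tag matched.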

RNS: the implementation

The reference implementation of Reticulum is RNS (Reticulum Network Stack), written in Python. Yes, Python — on a project that runs on microcontroller-class hardware. The choice is deliberate: Python's reach across Linux, microcontrollers (via MicroPython), Android (via Termux), and embedded systems lets one codebase run nearly everywhere a Reticulum node might want to run, at the cost of a performance ceiling that slow substrates never come close to anyway.

For genuinely embedded targets — ESP32-class microcontrollers running native firmware — the Reticulum project also ships a C/C++ implementation and a microcontroller-friendly subset, used in the RNode firmware described below.

RNode: the hardware

RNode is the canonical Reticulum hardware platform. It is, physically, a LoRa radio board (typically the LilyGo T-Beam, T-Echo, or similar) running RNode firmware, which presents a Reticulum-aware interface to a host computer over USB or Bluetooth. You can buy pre-built RNodes from the project store for around $100, build your own from off-the-shelf boards and the published firmware for less, or convert an existing Meshtastic-compatible board.

An RNode is not strictly required to use Reticulum — the project’s RNS daemon will happily run over WiFi, I2P, or TCP without any radio at all — but it is the canonical way to add a long-range radio interface to your Reticulum stack, and it is the entry point for most users who want the off-grid part of off-grid mesh.

Sideband and MeshChat: the application layer

Reticulum is a stack, not an application. The applications that run on top of it include:

  • Sideband — the official Reticulum messaging app, available for Linux, macOS, Windows, and Android. The closest thing to “the user-facing UI of Reticulum.” Lets you message individual identities, join broadcast channels, and run as a routing node.
  • MeshChat — a community-built web-UI messaging client that runs on top of RNS and gives you a browser-based mesh chat experience. Particularly nice for shared deployments (e.g., a Raspberry Pi running RNS and MeshChat together as a community node).
  • Nomad Network — a terminal-UI messaging and content-sharing application. Old-school, beautifully understated, and a delight to use over a slow link.
  • Various smaller projects — file transfer, voice (yes, low-bitrate voice over LoRa is real and works, with caveats), distributed dashboards, telemetry uplinks.

The application layer is smaller and less polished than Meshtastic’s. This is a real cost. Reticulum users routinely have to choose between “use the existing applications, which are good but few” and “build the application I actually want, using the Python API.” Most readers of this book will, on first contact, find Sideband sufficient.

A worked example: a Reticulum cross-substrate bridge

Here is a concrete configuration that demonstrates the killer property. The file is ~/.reticulum/config after rnsd has been run once to generate a default. (Format edited slightly for readability.)

[reticulum]
  enable_transport = True
  share_instance = True

[interfaces]
  # An RNode on USB serial, talking to nearby LoRa nodes.
  [[RNode LoRa Interface]]
    type = RNodeInterface
    enabled = True
    port = /dev/ttyUSB0
    frequency = 915000000
    bandwidth = 125000
    txpower = 17
    spreadingfactor = 11
    codingrate = 5

  # A UDP broadcast on the local WiFi, talking to other Reticulum nodes
  # in the same building.
  [[Local WiFi Interface]]
    type = AutoInterface
    enabled = True
    devices = wlan0

  # A TCP client connecting to the public Reticulum testnet for
  # bridging to the wider network.
  [[RNS Testnet]]
    type = TCPClientInterface
    enabled = True
    target_host = amsterdam.connect.reticulum.network
    target_port = 4965

  # An I2P tunnel for anonymous-bridging to identities reachable that way.
  [[I2P Bridge]]
    type = I2PInterface
    enabled = True
    peers = anonymouspeer.b32.i2p

That single configuration block is a node with four substrates. Once rnsd is started, your applications — Sideband, Nomad Network, the Python API — talk to a unified network that spans LoRa, WiFi, I2P, and TCP. A path request to a remote identity will explore all four. A packet sent will be routed over whichever substrate the path discovery says to use. The application code is unchanged regardless of which substrate it ends up traveling over.

This is what people mean when they say Reticulum is the project to take seriously.

A worked example: the Python API

Reticulum’s Python API is the build-your-own-tooling story. The minimal “send a message between two identities” looks like this:

import os
import time

import RNS

# Initialize the Reticulum stack (uses ~/.reticulum/config)
RNS.Reticulum()

# Load this node's identity, generating it on first run
identity_path = os.path.expanduser("~/.example_app_identity")
if os.path.isfile(identity_path):
    identity = RNS.Identity.from_file(identity_path)
else:
    identity = RNS.Identity()
    identity.to_file(identity_path)

# Register a destination — an addressable endpoint on this identity
destination = RNS.Destination(
    identity,
    RNS.Destination.IN,
    RNS.Destination.SINGLE,
    "example_app",
    "messaging",
)

# Set up a callback for incoming packets
def packet_callback(data, packet):
    print(f"Received: {data.decode('utf-8')}")

destination.set_packet_callback(packet_callback)

# Announce this destination so peers can learn our identity and a path to it
destination.announce()

# Send a packet to a known peer's destination hash. The peer's identity
# must have been announced and learned before an outbound destination can
# be built, so ask the transport layer for a path if we don't have one yet.
peer_hash = bytes.fromhex("a1b2c3d4...")  # the peer's destination hash
if not RNS.Transport.has_path(peer_hash):
    RNS.Transport.request_path(peer_hash)
    while not RNS.Transport.has_path(peer_hash):
        time.sleep(0.1)

peer_dest = RNS.Destination(
    RNS.Identity.recall(peer_hash),
    RNS.Destination.OUT,
    RNS.Destination.SINGLE,
    "example_app",
    "messaging",
)
RNS.Packet(peer_dest, b"hello, mesh").send()

# Stay alive to receive
while True:
    time.sleep(1)

This is a few dozen lines for a working two-way Reticulum messaging client. There is no socket setup. There is no key exchange to manage manually. There is no NAT traversal logic. The cryptographic identity is a single object you generate and store; the routing is the stack’s job; the encryption is automatic. That is the level of abstraction Reticulum is operating at.

What it gets right

  • The stack as a whole. No part of Reticulum feels bolted on. Cryptographic identity, routing, encryption, transport-agnosticism — these were design commitments from day one and the architecture reflects that.
  • The substrate range. No other project in this book runs on as many substrates with as little ceremony.
  • The license and governance. MIT-licensed, no CLA, single-maintainer-driven (which is both a feature and a risk; see below).
  • The documentation. The Reticulum manual is genuinely excellent. It is one of the better-written technical documents in the open-source mesh-networking world. A reader who has finished this chapter can productively read the manual cover-to-cover in a long evening and come out of it with working knowledge.

What it gets wrong (or, more honestly, what it doesn’t yet do well)

  • Smaller community than Meshtastic. Real numbers as of 2026 are low five figures of users globally, vs. Meshtastic’s six. The community is more technical and more engaged per capita, but the absolute size matters for “someone has hit my problem before.”
  • Application layer is thin. Sideband and MeshChat are good. They are not as polished as the Meshtastic Android app. If your bar is “non-technical family member can use this,” Reticulum is closer than it was in 2022 but not yet there in the way Meshtastic is.
  • No formal cryptographic audit. Mentioned above. The primitives are conservative and the design is sound, but a formal audit has not been published. For audit-level threat models, this is a real gap.
  • Single-maintainer governance risk. The project has primarily been driven by one person (Mark Qvist, “markqvist” on GitHub). The work is excellent. The bus factor is one. The project does have a small number of regular contributors and the codebase is in good enough shape that it would survive a maintainer transition, but the risk is real and worth naming.
  • Path-discovery latency. The first packet to a never-before-contacted destination has to do path discovery, which takes seconds-to-tens-of-seconds depending on the substrate. Subsequent packets to the same destination are cached and fast, but the first-contact cost is real and noticeable.

The recommendation, plainly

If you are going to invest one weekend in mesh networking, spend it on Meshtastic. If you are going to invest one month, spend the first weekend on Meshtastic to feel the phenomenon, and the rest of the month on Reticulum, with the goal of getting to “I have a Reticulum node that bridges three substrates and I have written a small custom application against the Python API.” Reticulum will reward the additional investment in a way that Meshtastic, MeshCore, and Yggdrasil — for different reasons each — will not.

Where to go next

  • The Reticulum manual at markqvist.github.io/Reticulum/manual/. Cover to cover; it is that good.
  • The reticulum.network site for the project overview and links to applications.
  • The community at the Discord linked from the project page. Smaller, technical, generally helpful.
  • RNode hardware from the project store or built from off-the-shelf boards.
  • Sideband as the first application to install. MeshChat as the second. Nomad Network as the gift to your future self.

What to take from this chapter

You should now be able to:

  • Explain why Reticulum is structurally different from Meshtastic and MeshCore: cryptographic identity, transport-agnostic routing, end-to-end encryption, all by default, all from the start.
  • Configure (in principle) a Reticulum node that bridges multiple substrates simultaneously.
  • Write a minimal Python application against RNS that sends and receives encrypted packets over a mesh.
  • Hold an opinion on which project is the long-term investment for a serious user, and why the consensus has converged on this one.

The next chapter shifts category: from LoRa-shaped networks running on hardware in your hand to IPv6-shaped networks running on the public Internet but routing differently. Yggdrasil.

Yggdrasil

The previous three chapters were about networks where the substrate is radio and the goal is to operate independent of the public Internet. This chapter is about a network where the substrate is the public Internet (and any other reachable transport) and the goal is to route differently than the public Internet’s BGP-and-tier-1-providers arrangement does. Yggdrasil is an IPv6 overlay mesh, and it is the project for the reader who wants a real network of computers that doesn’t depend on the public Internet’s routing being fair.

What it is

Yggdrasil is an end-to-end encrypted IPv6 overlay network with a self-organizing routing protocol based on a spanning tree. You install the Yggdrasil daemon on a machine; it generates a public/private keypair; the IPv6 address of the machine in the Yggdrasil network is derived deterministically from the public key (specifically, it lives in the 200::/7 prefix, with the specific address derived from a hash of the public key). The daemon connects to peers — initially via a small set of public peers from the Yggdrasil community, then directly to other Yggdrasil nodes as it discovers them — and exchanges spanning-tree-routing information with them.

Once running, every Yggdrasil node has an IPv6 address and can reach every other Yggdrasil node by that address, with end-to-end encrypted traffic, with no central authority involved. From the operating system’s perspective, Yggdrasil presents a TUN interface; from an application’s perspective, the Yggdrasil network is just IPv6, and any IPv6-aware application can use it without modification.
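The "just IPv6" point is easy to demonstrate: standard tooling can already reason about the Yggdrasil range. The address below is an illustrative value, not a real node:

```python
import ipaddress

# Yggdrasil addresses are ordinary IPv6 addresses inside 200::/7, so any
# IPv6-aware tooling handles them without modification.
ygg_net = ipaddress.ip_network("200::/7")

addr = ipaddress.ip_address("200:abcd:1234:5678:9abc:def0:1234:5678")
assert addr in ygg_net

# a regular documentation-range address is outside the overlay
assert ipaddress.ip_address("2001:db8::1") not in ygg_net
```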

Why it exists

The pitch for Yggdrasil — and for the broader category of self-routing IPv6 overlays — comes from a specific dissatisfaction with how the public Internet routes. The dissatisfaction has multiple flavors, and different users come to Yggdrasil for different reasons:

  • Routing politics. BGP routes are decided by tier-1 providers based on commercial agreements. Some routes go through countries you’d rather they didn’t. Some don’t exist at all because no provider has commercial interest in establishing them. A self-organizing overlay routes through whichever peers are available, irrespective of who’s selling transit to whom.
  • Censorship resistance. A network where there is no central authority and traffic is end-to-end encrypted is harder to selectively block than one where it is centralized (Tailscale’s coordinator, ZeroTier’s controller).
  • Just being interesting. Some users run Yggdrasil because they like running Yggdrasil. This is a legitimate reason in the open-source ecosystem and the project’s docs are honest about it being part of the appeal.

The routing model, briefly

Chapter 3 covered spanning-tree routing as one strategy in the routing-protocol design space. Yggdrasil is the canonical implementation of this approach in production today.

The mechanics in summary:

  • Every node has a cryptographic identity (Ed25519). Its IPv6 address is derived from a hash of its public key.
  • The network self-organizes into a spanning tree. There is one logical root at any given time, chosen by an algorithm that prefers stable, well-connected nodes; the root’s identity is announced through the network and the tree is rebuilt when it changes.
  • Each node’s coordinate in the tree is a sequence of port numbers describing its path from the root.
  • To route a packet, a node looks up the destination’s coordinate (obtained via a DHT lookup keyed on the destination’s public-key hash, though the answer is usually cached) and forwards to the neighbor whose coordinate lies on the shortest tree path toward the destination.

The size of the routing decision a node makes is bounded by the number of its direct peers, not the size of the network. A node with 20 peers has a 20-entry forwarding table, regardless of whether the network has a hundred or a hundred thousand nodes. This is the property that lets Yggdrasil claim, plausibly, to scale to global-network sizes with manageable per-node state.

The cost is that spanning-tree paths are not always shortest. Two nodes in the same city might route through a peer in another country if that’s where the spanning tree happens to go. The project mitigates this with link-local shortcuts and other engineering, but the fundamental tradeoff stands: bounded routing state in exchange for sometimes-suboptimal paths.
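Both halves of that tradeoff, bounded state and sometimes-longer paths, fall out of a toy model of tree coordinates. The locators below are hypothetical, and real Yggdrasil layers shortcuts and caching on top of this:

```python
# Toy model of tree-coordinate routing: a node's locator is its sequence
# of port numbers from the root, and tree distance between two locators
# is hops up to the common ancestor plus hops back down.
def tree_distance(a, b):
    common = 0
    for x, y in zip(a, b):
        if x != y:
            break
        common += 1
    return (len(a) - common) + (len(b) - common)

# root is (), children extend the parent's coordinate by one port number
alice = (1, 4, 2)     # root -> port 1 -> port 4 -> port 2
bob   = (1, 4, 7, 3)  # shares the ancestor (1, 4) with alice

assert tree_distance(alice, bob) == 3  # 1 hop up, 2 hops down

# a forwarding decision: hand the packet to whichever direct peer is
# closest to bob in the tree. The table is peers-list-sized, no matter
# how large the network is.
peers = {"uplink": (1, 4), "peer_a": (1, 4, 2, 9), "peer_b": (2,)}
next_hop = min(peers, key=lambda p: tree_distance(peers[p], bob))
assert next_hop == "uplink"
```

The suboptimality cost is visible here too: if alice and bob also shared a direct radio link, this metric would still happily route them up through the tree.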

What you get when you install it

A worked example is in order, because Yggdrasil is one of the easier projects in this book to actually run.

# Linux, Debian-derived
sudo apt install yggdrasil

# Or build from source (Go required)
git clone https://github.com/yggdrasil-network/yggdrasil-go
cd yggdrasil-go
./build

# Generate a default config (the redirect must also run as root)
sudo sh -c 'yggdrasil -genconf > /etc/yggdrasil.conf'

# Start the daemon
sudo systemctl start yggdrasil

After starting, ip addr show ygg0 shows your Yggdrasil IPv6 address. It will be in 200::/7, looking something like 200:abcd:1234:5678:9abc:def0:1234:5678.

The daemon cannot join the global network until it knows at least one peer. The Yggdrasil community maintains a list of public peers; out of the box the Peers list is empty, and aside from multicast discovery on the local network, the daemon connects only to nodes you explicitly configure. Add some:

# /etc/yggdrasil.conf, partial (the format is HJSON, not YAML)
Peers: [
  tls://example-peer-1.example.org:443
  tcp://example-peer-2.example.org:8443
  tls://[2001:db8::1]:443
]

# Optionally listen for incoming peers, if you have a public IP
Listen: [
  tls://0.0.0.0:443
]

The current public peer list is at github.com/yggdrasil-network/public-peers; pick three or four geographically diverse ones and you’re connected to the Yggdrasil network. Restart the daemon and within a few seconds your node has joined the spanning tree.

A worked test:

# Find a known service on the Yggdrasil network
ping6 200:6e7c:5f9c:50bb:d8b9:0a5e:7c13:abcd

# Or browse to one of the in-network sites
curl -6 'http://[200:1234:5678::1]/'

There are real services running on Yggdrasil — Git mirrors, IRC servers, status pages, the project’s own infrastructure. The network is small but it is alive. As of 2026, public node counts on the Yggdrasil network sit in the low five figures depending on how you measure. The number is meaningful — it’s a real network with real people running real services on it — without being mass-market.

What it gets right

Bounded routing state. This is the central engineering claim and Yggdrasil delivers on it. A node in a 10,000-node network does not have a 10,000-entry routing table. It has a peers-list-sized routing table. The network can grow without exploding state per node. This is a property the alternatives in this design space have not all achieved.

End-to-end encryption that is genuinely end-to-end. Yggdrasil traffic between two nodes is encrypted at the source and decrypted at the destination. Intermediate nodes carry encrypted traffic without the keys to read it. This is what end-to-end encryption is supposed to mean and Yggdrasil’s implementation lives up to it.

IPv6-native, transparent to applications. From an application’s perspective, Yggdrasil is a TUN interface that lets you reach IPv6 addresses in 200::/7. No application changes are needed. SSH, HTTPS, IRC, your own custom protocol — they all work, they all travel encrypted through the overlay, and the application doesn’t need to know.

Cross-platform. Linux, macOS, Windows, FreeBSD, OpenBSD, Android (via the Yggdrasil Android app) all have working clients. The Go-based reference implementation is the same code on all of them.

Active in 2026. The project ships releases, the community maintains the public peers list, the development is steady. Not booming, not dying. Alive.

What it gets wrong (or is honestly limited about)

Path suboptimality. Discussed above. For most uses (latency-tolerant traffic, file transfers, IRC, web browsing) the suboptimality is invisible. For latency-sensitive uses (real-time voice, gaming) the spanning-tree path is sometimes a problem.

Bootstrap requires public peers. A node has to know how to find at least one peer to join the network. The community-maintained public peers list is the practical solution; it works, but it is a centralization point in a design that wants not to have any. The project has discussed alternatives (multicast peer discovery on the local network is supported and works) but for reaching the global Yggdrasil network you do start by connecting to known peers.

Not for off-grid. Yggdrasil is not a LoRa-mesh competitor. It runs over the public Internet (and other reachable transports). When the Internet is down, your Yggdrasil network is down too, modulo whatever local-network paths you’ve configured. This is fine — it is not what Yggdrasil is for — but a reader who has just finished the Reticulum chapter should not confuse the two.

Smaller community than mesh-VPN alternatives. A reader looking for “give my homelab a flat IPv6 network” is more likely to use Tailscale (chapter 11) than Yggdrasil. Yggdrasil’s appeal is the self-organizing, no-central-authority property; if you don’t need that, Tailscale is more polished and faster to set up. Yggdrasil knows this. The user base reflects it.

Routing decisions are not user-controllable. You don’t get to say “route my traffic through these specific nodes.” The protocol decides. This is by design but occasionally frustrating.

When to actually run a Yggdrasil node

The honest cases:

  1. You want to understand mesh routing by running a node. This is the chapter-12 recommendation for the curious reader. Spin up a Yggdrasil node on a VPS, connect it to public peers, browse the in-network services, watch the routing table evolve. It will give you a tangible feel for how a self-organizing overlay actually behaves that the LoRa-shaped projects in this book cannot, because they’re either too small to feel the routing properties or too constrained to expose them.

  2. You want a no-central-authority IPv6 network for your own use. Connect a half-dozen of your own machines, peer them with each other and with a few public peers, and you have a flat IPv6 network among your machines that doesn’t depend on any single coordinator service.

  3. You believe in the political project. Some Yggdrasil users are there because they want a network that is structurally outside the BGP-and-tier-1-providers arrangement. The project does not require you to share that belief, but if you do, this is the implementation.

  4. You are building something that needs the routing property. Distributed services where any-to-any reachability matters more than peak throughput. Some IPFS-shaped, some federated-services-shaped, some research deployments.

If none of those describe you, Yggdrasil is interesting but probably not load-bearing for your use case. Tailscale (homelab) or Reticulum (off-grid) is the better pick.

License and project status

Yggdrasil is under the LGPLv3. The reference implementation is Go, in github.com/yggdrasil-network/yggdrasil-go. The Android client is github.com/yggdrasil-network/yggdrasil-android. The project is governed by a small core team plus a broader contributor base; commits and releases are regular as of 2026.

What to take from this chapter

You should now be able to:

  • Install Yggdrasil on a Linux machine, peer it with the public network, and reach an in-network service.
  • Explain how spanning-tree routing keeps per-node state bounded as the network grows, and what that costs (path suboptimality).
  • Position Yggdrasil correctly in the design space: it is not a LoRa-mesh competitor, it is not a Tailscale competitor, it is its own thing — a self-organizing IPv6 overlay where the appeal is the no-central-authority routing.
  • Decide whether you actually want to run a node, or whether you are just curious enough to read about it.

The next chapter covers the project Yggdrasil’s lineage traces back to — one that is no longer the right place to send a curious reader for hands-on experience: cjdns, Hyperboria, and what the early-2010s mesh-utopia movement left behind.

The Hyperboria Story

This chapter is a memorial as much as a survey. cjdns and the Hyperboria network were, in the early 2010s, the most ambitious open-source mesh-networking project in the world. The vision was beautiful: a globally-routed, end-to-end encrypted, self-organizing IPv6-shaped network with cryptographic identity at the routing layer, growing organically from neighbor-to-neighbor links into a parallel Internet. The project did not pan out the way its proponents hoped. Honoring it requires being honest about that.

For a curious reader in 2026: cjdns is not the right project to install this weekend. The network is mostly historical. Yggdrasil (chapter 7) is what you should run instead, and the lessons cjdns left behind are baked into Yggdrasil and Reticulum. But the story is worth knowing, because the failure modes are instructive and the cultural moment matters.

What cjdns was

cjdns — the “Caleb James DeLisle Network Suite” — was a software router and protocol that started in 2011, written by Caleb James DeLisle, and rapidly attracted a community of mesh-networking enthusiasts who built Hyperboria on top of it. The core technical ideas:

  • Cryptographic identity at the routing layer. Every cjdns node has a Curve25519 public key. Its IPv6 address is derived from a hash of the public key (in the fc00::/8 reserved-but-pragmatically-useful range). Identities are unforgeable. This was the main thing cjdns got right, and the design has been broadly vindicated by every project that came after.
  • End-to-end encryption by default. Same idea Reticulum and Yggdrasil eventually adopted. cjdns was the first widely-deployed implementation of this in a routing-layer context.
  • DHT-based location lookup. This was the new idea. To route to a destination, cjdns would do a Kademlia-style DHT lookup against the destination’s public-key hash; the DHT returned a path through the network expressed as a sequence of physical-link choices, and the source node would then source-route the packet along that path. The DHT was distributed across all nodes; no single node was authoritative.
  • Source routing with switches at every node. Each cjdns node was both a “switch” (forwarding source-routed packets) and a “router” (originating source-routed packets to destinations it knew via the DHT). The conceptual separation was clean.

The combination — cryptographic identity, encryption by default, DHT-based lookup, source-routed forwarding — was novel for its time and remains technically interesting in 2026, even though the network built on it has faded.

What Hyperboria was

Hyperboria was the public network that grew on top of cjdns. From roughly 2012 to 2016 it was the most prominent example of a “parallel Internet” project — a network of volunteer-run nodes connected over the public Internet (and, in some places, over local wireless), with services hosted in-network: chat servers, wikis, code forges, the kind of services-by-and-for-the-community infrastructure that the Tildes-and-Pleroma generation now builds on Tor and the Fediverse.

At its peak, Hyperboria had perhaps low-thousands of active nodes globally, a meaningful presence in Eastern European hacker communities, in some North American cities, and online via the EFnet-shaped IRC culture of the 2010s. There were real meetups. There were physical mesh deployments — rooftop antennas, neighborhood mesh nodes, the political-project flavor of mesh networking that motivated some of the cultural moment.

And then, slowly, it faded.

Why it didn’t work

The reasons are several and they compound. None of them is a single fatal flaw; together they were enough.

The DHT was the wrong abstraction for high-churn networks. Kademlia and DHTs in general work well for relatively stable participants — BitTorrent’s tracker-less mode is the canonical example, where peers stay around for hours. cjdns’s network had nodes appearing and disappearing constantly: people’s home machines, hobbyist VPSes, mesh nodes that lost power. The DHT spent a lot of its bandwidth chasing route-lookup answers that had become invalid by the time they returned. Path lookups for a never-before-contacted destination could take noticeable seconds; lookups against a destination whose path had recently changed could fail and require retry. This was livable in good conditions and frustrating in bad ones, and the bad ones were common.

Source routing meant every packet carried its own path. This was elegant and had real engineering virtues — no per-hop routing decision, no re-convergence overhead — but it also meant that a packet’s path was decided at the source and could not adapt to mid-flight changes. If a link in the middle of the path went down, the packet was dropped, and the source had to do another DHT lookup. In high-churn environments, this happened a lot.

Single-developer governance. cjdns was, for much of its life, a one-developer project — DeLisle himself. The work was good and prolific, but the bus factor was one, and the architectural decisions all came through a single channel. When DeLisle’s attention shifted in the mid-2010s (he moved to other projects, including some Bitcoin and PKT work), the network’s development pace slowed, and there was no clear successor leadership.

The community split over priorities. Some Hyperboria participants wanted to build a usable public network with real services. Others wanted to research routing theory. Others wanted to fork the codebase to do specific local-mesh deployments. The project had room for all of these but no shared roadmap, and the energy diffused into multiple smaller efforts rather than coalescing into a single sustained push.

Yggdrasil happened. In 2018, Arceliar (a long-time cjdns contributor) and Neil Alexander forked away from cjdns to build Yggdrasil, taking the parts they thought worked (cryptographic identity, end-to-end encryption, IPv6-overlay shape) and replacing the parts they thought didn’t (the DHT-based location lookup, source routing). Yggdrasil’s spanning-tree approach is, in significant part, a deliberate response to what they had learned didn’t work in cjdns. Many of the technically engaged cjdns users moved to Yggdrasil over 2019–2022, and the cjdns network has been smaller since.

By the early 2020s, cjdns and Hyperboria had become a residual community. The network exists in 2026; nodes are reachable; some of the original infrastructure still runs. But the active community has largely either migrated to Yggdrasil or moved on, and recommending a curious reader install cjdns to “try mesh networking” would point them at a quiet network where the response time on questions is poor and the in-network services are mostly stale.

What’s still there in 2026

The cjdns codebase is still maintained in a caretaker sense: security fixes happen, the build still works on current systems, and a small number of contributors remain. The Hyperboria network is still up; you can join it; you can find a few in-network services, an IRC server or two, some long-running personal pages. The Reddit and IRC communities are quiet but present. None of this is nothing.

But this is the wrong place to send a reader who wants to feel mesh networking. The hands-on experience of joining cjdns in 2026 is, mostly, the experience of joining a network of mostly-empty rooms. The technical interest is real and the historical interest is real and the “this is what a 2014 mesh-utopia project felt like” interest is real. The “this is a fun thing I’m going to play with this weekend” interest is not really there.

If you are reading this chapter to decide whether to install cjdns: don’t, unless you specifically want to study the codebase or pay respects to the cultural moment. Install Yggdrasil instead. It is what cjdns was trying to be, with five years of additional engineering and an active community.

What it left behind

The lessons cjdns left to the field are real, and the projects that built on those lessons are alive in 2026:

  • Cryptographic identity as the routing-layer address. This is now a commonplace design choice. Reticulum, Yggdrasil, and even mesh VPNs (in the WireGuard-key-as-identity sense) all do it. cjdns was an early adopter and the design has been vindicated.
  • End-to-end encryption as default, not opt-in. Now table-stakes for any new mesh project being taken seriously.
  • The IPv6-overlay shape. The pattern of “give every node an IPv6 address derived from its public key, present a TUN interface to the OS, route invisibly underneath” comes from this lineage and lives on in Yggdrasil.
  • What does not work: DHT-based location lookup in high-churn networks. This is the negative result, and it is genuinely valuable. The field has moved on from DHT-as-routing-layer toward path-discovery-with-caching (Reticulum) or spanning-tree (Yggdrasil) precisely because cjdns demonstrated, at scale, that DHT lookups were the wrong tool for a routing decision that had to happen at packet send time.
  • What does not work: source routing in dynamic networks. Same lesson.
  • What does not work: single-maintainer governance for a globally-scaled network. Yggdrasil and Reticulum have learned from this, with varying degrees of success — Reticulum is still effectively single-maintainer, with the same risk; Yggdrasil has a small but real core team. The lesson is acknowledged; the solution is partial.

How to honor a project that didn’t work

There is a kind of writing about open-source projects that did not pan out where the author leans hard on euphemism — “the project explored interesting territory,” “the community has a smaller but dedicated presence,” “the codebase is a wonderful learning resource.” This is well-meaning and it is, at the margin, dishonest. It is also unhelpful to a reader trying to decide where to spend their time.

The straight version is: cjdns was a serious, ambitious, technically interesting project that did not become the network its founders hoped it would. The reasons it did not are partly engineering, partly community dynamics, partly the timing of when other approaches matured. The work was good. The lessons survived. The network did not.

That is okay. Most ambitious projects do not become what their founders hoped. The honest accounting of why is more useful to the field than a politely-vague summary that leaves the next ambitious project’s founders with no way to learn from the failure modes.

Caleb James DeLisle, Arceliar, Neil Alexander, the Hyperboria community: thank you for the work. The next generation of mesh-networking projects sits on the shoulders of what you built and what you learned.

What to take from this chapter

You should now be able to:

  • Recognize cjdns and Hyperboria as a foundational project in mesh networking’s open-source history without confusing it for a project to install today.
  • Explain why DHT-based location lookup, beautiful in theory, did not survive contact with high-churn networks in production.
  • Trace the line from cjdns to Yggdrasil and see what was kept, what was discarded, and why.
  • Hold an opinion about how the field should write about projects that didn’t pan out — not with disrespect, but not with euphemism either.

The next four chapters cover networks that are adjacent to mesh networking rather than strictly examples of it. We start with Scuttlebutt and the gossip family.

Scuttlebutt and the Gossip Family

This chapter is about a different shape of network. Scuttlebutt is not a mesh in the routing-protocol sense the rest of this book has been working with. It is not trying to deliver a packet from A to F through a sequence of forwarders, in real time, with no central authority. It is trying to do something else entirely — give every participant an eventually consistent view of a distributed log of messages, by gossiping with whoever they can reach when they can reach them.

It is mesh-adjacent rather than mesh-proper. We include it because (a) the project is alive, (b) the design choices are instructive as a contrast to the routing-protocol family, and (c) for a non-trivial set of use cases, gossip is actually the right answer and the routing-protocol designs are the wrong answer.

What it is

Scuttlebutt — formally Secure Scuttlebutt or SSB — is a peer-to-peer protocol for replicating append-only logs of signed messages across a network of participants. Each participant maintains their own log; that log is replicated to friends, who replicate it to their friends, and over time everyone with a path to you in the social graph ends up with a copy of your log. The data flows asynchronously, opportunistically, and is happy to flow over sneakernet (a USB drive carried between two Pi nodes), local WiFi, the Internet, or any combination. The system makes no attempt to deliver any specific message to any specific recipient at any specific time. It just ensures that eventually, if there is a path, the message arrives.

The properties this gives you:

  • Local-first. Every node has a local copy of the data it cares about. Reading is local. Writing is local. Syncing happens in the background when connectivity exists.
  • Offline-tolerant. A node that goes offline for a week reconnects and catches up; messages from the past week sync at reconnection. There is no notion of “online users” in the way there is in real-time messaging.
  • Cryptographically signed. Every message is signed by its author; logs are append-only and tamper-evident.
  • Social-graph-bounded. You only see messages from people in your social graph, transitively. There is no global feed.
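The append-only, tamper-evident property can be sketched with a hash-chained log. This is a simplification: real SSB entries are Ed25519-signed by their author and use a specific JSON encoding, so this toy shows the chaining and tamper-evidence but not authorship.

```python
import hashlib, json

# Sketch of an append-only, tamper-evident log in the SSB style.
# Real SSB entries are signed; this toy uses only hash-chaining.

def append(log: list[dict], content: str) -> None:
    """Append an entry that commits to its predecessor's hash."""
    prev = log[-1]["hash"] if log else None
    entry = {"seq": len(log) + 1, "prev": prev, "content": content}
    entry["hash"] = hashlib.sha256(
        json.dumps([entry["seq"], entry["prev"], entry["content"]]).encode()
    ).hexdigest()
    log.append(entry)

def verify(log: list[dict]) -> bool:
    """Recompute every hash and check each entry links to its predecessor."""
    prev = None
    for e in log:
        h = hashlib.sha256(
            json.dumps([e["seq"], e["prev"], e["content"]]).encode()
        ).hexdigest()
        if e["prev"] != prev or e["hash"] != h:
            return False
        prev = e["hash"]
    return True

log = []
append(log, "hello from the boat")
append(log, "still offline, syncing later")
print(verify(log))                    # True
log[0]["content"] = "edited history"
print(verify(log))                    # False: tampering breaks the chain
```

Because each entry commits to the one before it, a replica can check a synced log without trusting the peer that delivered it — which is what makes opportunistic, sneakernet-friendly replication safe.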

This shape of system is profoundly different from a routing-protocol mesh. Reticulum tries to deliver a packet now by finding a path now. Scuttlebutt doesn’t care whether the packet is delivered now; it cares whether, in aggregate, the social-graph-relevant messages eventually reach you. Different problems, different solutions.

Why this design exists

Scuttlebutt came out of a specific community: its creator, Dominic Tarr, was a liveaboard sailor with intermittent Internet on a boat who wanted a way to keep up with friends without depending on the cloud, and the design reflects that use case. The protocol was designed for environments where the network is not always there. It assumes intermittence as the default operating condition, not an edge case.

This makes it useful for cases that real-time mesh routing protocols are bad at:

  • Distributed social applications where users come and go and “is this person online” is not a load-bearing question.
  • Asynchronous coordination where the relevant unit of communication is a post, not a packet.
  • Sneakernet-tolerant deployments where you sync logs by carrying USB drives between locations.
  • Long-term archival distribution where the goal is “everyone in this community ends up with a copy of these documents.”

It is not useful for:

  • Real-time messaging. SSB messages can take seconds, minutes, or longer to propagate; latency depends on whether your peers happen to be online, and the protocol does not try to optimize for this.
  • Targeted delivery to a specific recipient. SSB is gossip-shaped: messages flow to everyone in the relevant subgraph, not to a specific address. Private messages exist (SSB has encrypted private messages between identities) but they are still gossiped — encrypted-blob-shaped, but in everyone’s logs who is in the propagation path.
  • High-throughput data movement. The protocol is designed for human-scale write rates. It will happily carry your social posts. It will not happily replace your file server.

The state of Scuttlebutt in 2026

Honest accounting:

The Scuttlebutt protocol’s peak adoption was probably around 2019–2021, when the Manyverse client made the network accessible from a phone and the broader “we’re tired of centralized social media” wave of the late 2010s gave it cultural energy. The network was not large in absolute terms — peak active-user counts in the tens of thousands range — but it was unusually engaged, with real communities forming around shared interests and a culture that took the local-first social vision seriously.

Since around 2022, the network has been smaller. Some of the user attrition was natural: people who liked the idea but didn’t sustain the habit. Some was ecosystem fragmentation: experiments with different SSB-protocol variants (the original “ssb-classic” log format, an evolved “ssb-bendy” format, and a divergent “p2panda” project that took some of the same ideas in a different direction) splintered the community. Some was the maturation of the broader fediverse — Mastodon and ActivityPub absorbed a lot of the “decentralized social media” energy, and ActivityPub’s almost-local-first, almost-peer-to-peer compromise was easier to onboard onto.

Where things stand in 2026: Manyverse keeps the client side alive. The Manyverse Android and desktop applications continue to be maintained and remain a usable way to access the network. The network is functional. New posts arrive, friends still gossip, the basic operations work. But it is a smaller community than its 2020 peak, and a reader showing up in 2026 will find a network where the response time on a new post is measured in minutes-to-hours rather than seconds, and where some of the once-active feeds are now mostly silent.

This is not the cjdns-and-Hyperboria pattern, where the network is essentially historical. Scuttlebutt in 2026 is genuinely active, just smaller than it once was. A reader who is interested in the protocol shape, in distributed social applications, in offline-tolerant gossip — there is real value in installing Manyverse, joining a few pubs (the gateway nodes that bootstrap new users into the network), following the active feeds, and seeing how the system behaves.

A worked example: setting up an SSB node

The most accessible way to see Scuttlebutt in 2026:

  1. Install Manyverse on your phone (Android or iOS) or desktop (the project ships a desktop version too).
  2. Generate an identity. The app does this for you — it generates an Ed25519 keypair and your SSB ID is the public key, base64-encoded with an @ prefix.
  3. Connect to a pub. A pub is a long-running server that participates in the SSB network and helps new users bootstrap by gossiping with them and onboarding them into the broader social graph. The Manyverse client comes with a list of public pubs; pick one and request an invite (most have automated systems for this).
  4. Follow some people. Manyverse will discover what other identities the people you follow follow, and gradually your local copy of the network grows.
  5. Post something. The post is signed, appended to your local log, and gossiped to your peers next time they sync.

At this point you have a working SSB node, and the basic phenomena of the network — eventual consistency, social-graph-scoped visibility, asynchronous sync — are visible to you in everyday use.
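For concreteness, the shape of the identity string from step 2 can be sketched. This is an assumption-laden toy: a real client derives the 32-byte public key from an Ed25519 keypair, while random bytes stand in here, and the `.ed25519` suffix is the feed-format tag SSB IDs carry.

```python
import base64, os

# Shape of an SSB identity: "@" + base64(public key) + ".ed25519".
# os.urandom stands in for a real Ed25519 public key in this sketch.
pub = os.urandom(32)
ssb_id = "@" + base64.b64encode(pub).decode() + ".ed25519"
print(ssb_id)
```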

What gossip protocols teach about routing protocols

The structural lesson here, the one this chapter exists to deliver: gossip and routing are different tools for different problems, and conflating them produces bad designs in both directions.

A routing protocol’s job is to deliver this packet to that destination, ideally now, ideally over a path that is short or cheap or both. The protocol must do work to discover the path and keep it valid. The cost is overhead and the benefit is targeted, low-latency delivery.

A gossip protocol’s job is to ensure that, eventually, every participant in the relevant scope sees every relevant message. The protocol does not care which specific pair of nodes a message travels between, in what order, or with what latency, as long as it eventually reaches everyone. The cost is high redundancy and unbounded latency; the benefit is robustness to topology change and tolerance for arbitrary connectivity patterns.

A network that wants to deliver “alice messages bob in real time, low latency, addressed delivery” should use a routing protocol. A network that wants to deliver “everyone in this community sees all the posts in this community, eventually” should use gossip. Trying to use a routing protocol for the gossip-shaped problem produces a system that is brittle in poor connectivity. Trying to use a gossip protocol for the routing-shaped problem produces a system that is wasteful and slow.
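A toy simulation makes the gossip cost structure visible — an illustration of the shape, not SSB’s actual replication protocol. Each round, every node merges its message set with one random reachable neighbor; there are no routes and no addresses, and convergence is the only goal.

```python
import random

# Toy gossip: rounds of random pairwise sync until every node
# holds every message. No routing state exists anywhere.

def gossip_rounds(adjacency, seeds, rng=None, max_rounds=1000):
    """Return how many rounds it takes for all nodes to converge."""
    rng = rng or random.Random(0)
    state = {n: set(msgs) for n, msgs in seeds.items()}
    everything = set().union(*state.values())
    for rounds in range(1, max_rounds + 1):
        for node, neighbors in adjacency.items():
            peer = rng.choice(neighbors)
            merged = state[node] | state[peer]   # bidirectional merge
            state[node] = merged
            state[peer] = set(merged)
        if all(s == everything for s in state.values()):
            return rounds
    raise RuntimeError("did not converge")

# Four nodes in a line; messages start at the two ends.
adj = {"a": ["b"], "b": ["a", "c"], "c": ["b", "d"], "d": ["c"]}
seeds = {"a": {"post-1"}, "b": set(), "c": set(), "d": {"post-2"}}
print(gossip_rounds(adj, seeds))   # a small number of rounds
```

Note what the code does not contain: no destination addresses, no path discovery, no notion of delivery to a particular node. That absence is the whole design.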

Some applications need both, and the best designs in this space recognize that and use each tool for the job it is good at. Reticulum, for example, has channels (routing-protocol-shaped, real-time messaging) and an explicit gossip mode for slow-substrate broadcast. The two are not in tension; they are complementary.

Where to go next

  • Manyverse at manyver.se is the entry-point client.
  • The Scuttlebutt protocol guide at scuttlebot.io/more/protocols/secure-scuttlebutt.html is dated but still the canonical protocol description.
  • Manyverse community pubs. The Manyverse documentation maintains a list; pick one and request an invite.
  • p2panda at p2panda.org is the spiritual successor experiment that took some of SSB’s ideas in different directions; if you are interested in where this lineage is going next, that is one place to look.

What to take from this chapter

You should now be able to:

  • Explain why Scuttlebutt is not a mesh in the routing-protocol sense, and why it is mesh-adjacent in a way worth understanding.
  • Recognize gossip as a distinct tool from routing — useful for different problems, with different cost structures, and not interchangeable.
  • Decide whether the SSB shape is right for your use case (asynchronous, social-graph-scoped, eventually-consistent) or wrong for it (real-time, addressed, latency-sensitive).
  • Have an opinion about whether SSB is alive enough in 2026 to be worth your time. The honest answer: yes for the protocol-curious, possibly not for the user looking for a daily-driver social network, but worth a weekend either way.

The next chapter shifts again, to the messaging-only mesh family — Briar, Bitchat, and Bluetooth-mesh applications.

Briar, Bitchat, and the Bluetooth Family

The previous chapters were about networks designed to span distances — kilometers in Meshtastic’s case, the world in Reticulum’s and Yggdrasil’s. This chapter is about networks designed to span rooms. When the problem you actually have is “two people in the same building need to communicate without infrastructure” — which is the problem at protests, in disaster zones, in places where cellular networks are down or untrusted, in subway tunnels, in dense urban environments where the next person might be 30 meters away but might also be 30 floors up — the right tool is not a LoRa mesh or an IPv6 overlay. It is short-range Bluetooth-mesh messaging.

This is a small-scope but real category. The two main projects in 2026 are Briar and Bitchat, and they make different tradeoffs that are worth seeing in contrast.

The shape of the problem

Bluetooth-mesh messaging exists because of one observation: every smartphone has Bluetooth on already, and Bluetooth (especially BLE) can do peer-to-peer discovery and message exchange without any infrastructure. If two phones are within ~10 meters of each other, they can talk. If three phones form a chain, the message can hop from end to end. If a hundred phones are in the same building, you have a small mesh that requires no carrier, no AP, no LoRa hardware — just the phones already in everyone’s pocket.
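The hop-by-hop spread can be sketched as a bounded flood — an assumption about the general approach, since Briar and Bitchat each implement their own meshing rather than this exact algorithm. Each phone that hears a message rebroadcasts it, with a hop limit and duplicate suppression:

```python
# Toy flood relay over a chain of phones: rebroadcast with a hop
# limit (ttl) and a seen-set so no phone relays a message twice.

def flood(adjacency: dict, origin: str, ttl: int = 5) -> set:
    """Return the set of nodes the message reaches within ttl hops."""
    seen = {origin}
    frontier = {origin}
    for _ in range(ttl):
        frontier = {n for f in frontier for n in adjacency[f]} - seen
        seen |= frontier
        if not frontier:
            break
    return seen

# Four phones in a line, each only in BLE range of its neighbors:
phones = {"p1": ["p2"], "p2": ["p1", "p3"], "p3": ["p2", "p4"], "p4": ["p3"]}
print(sorted(flood(phones, "p1")))          # ['p1', 'p2', 'p3', 'p4']
print(sorted(flood(phones, "p1", ttl=1)))   # ['p1', 'p2']
```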

The constraints are real:

  • Range is limited. 10–15 meters indoors through one wall, 20–50 meters outdoors line-of-sight. BLE long-range coding (Coded PHY) can extend this somewhat, but you’re still in tens-of-meters territory, not kilometers. Going outside the building means leaving the network.
  • Battery cost is real. Continuous Bluetooth scanning and advertising is not free. Apps in this category trade off discovery aggressiveness against battery; in heavy use you can expect double-digit-percent additional battery drain on a typical smartphone.
  • Throughput is modest. BLE’s effective application-layer throughput is in the tens-to-low-hundreds of kilobits per second under good conditions, much less in mesh-shaped traffic. Plenty for messaging and small attachments; useless for video.
  • iOS Bluetooth APIs are restrictive. Background BLE on iOS is tightly limited and apps in this space have to make compromises (foreground operation, manual triggering, paired-vs-unpaired tradeoffs) that the Android side doesn’t have. Cross-platform Bluetooth-mesh apps generally work better on Android.

The good news is that within the constraints, the experience can be quite usable. Sending a text message to someone across a crowded venue, with no carrier, no Wi-Fi, no LoRa hardware — that works, and it works in a way that no other category of project in this book can match because no other category of project has the user already carrying the radio in their pocket.

Briar

Briar is the older and more carefully engineered of the two. It is an Android messaging application (with a desktop version in beta) that supports communication over Bluetooth, local Wi-Fi, and Tor, choosing whichever transport is available, transparently to the user. The project has been quietly ongoing since around 2014, with steady releases, a small but consistent contributor base, and a security model designed for users with serious threat models.

Key design choices:

  • End-to-end encryption with forward secrecy. Briar uses BTP (the Bramble Transport Protocol), a custom protocol with strong security properties: messages are end-to-end encrypted, the protocol provides forward secrecy, and metadata leakage is minimized.
  • No central server. Briar contacts are exchanged directly between users (in person via QR code, or remotely via a trusted introducer). There is no Briar.com server running an account directory.
  • Tor as an Internet transport. When Bluetooth and local Wi-Fi aren’t available, Briar can route messages over Tor to other Briar users who are also reachable via Tor. This gives the project its core value proposition: a single app where messages travel over Bluetooth when you’re nearby, local Wi-Fi when you’re on the same network, and Tor when you’re far apart — all without the user having to choose.
  • Forum and blog primitives. Beyond direct messaging, Briar has group conversations and a forum-shaped feature where members can post threaded discussions. This is unusual for a Bluetooth-mesh app and reflects the project’s broader vision: not just “messaging when infrastructure is down,” but “a self-contained social-and-coordination platform that works without central infrastructure.”

Briar’s threat model is genuinely serious. The project documentation explicitly addresses adversaries with the ability to monitor network traffic, run network infrastructure, and so on. It has been used in real adversarial contexts — journalists working under regimes hostile to a free press, activists in countries where messaging apps are surveilled. The cryptographic design has been informed by academic work and the project has had reviews (though, like Reticulum, no large-scale formal audit at the time of writing).

What Briar gets wrong, or is honestly limited about:

  • iOS support. As of 2026, Briar’s iOS support remains limited compared to Android. The Bluetooth-mesh limitations on iOS make it hard to provide the same experience.
  • Setup friction. The “exchange contacts via QR code in person” step is the right answer for the threat model, but it’s not what most users expect from a messaging app, and it limits adoption.
  • Smaller community. Real users measure in the tens of thousands globally, not millions. Most of your contacts will be on Signal, WhatsApp, or iMessage; almost none will be on Briar.
  • Throughput is modest and the UX reflects that. Messages can take seconds to deliver even between adjacent nodes. This is the protocol working correctly, not a bug, but it is not the always-instant feel of carrier-mediated messaging.

License: GPLv3. Repository: code.briarproject.org/briar/briar. The project is alive in 2026, with regular releases, though development pace is steady-not-fast.

Bitchat

Bitchat is the more recent (2025) entrant in this space, focused specifically on Bluetooth-mesh messaging without Briar’s Tor and broader-platform ambitions. It is, in essence, a more focused take on the same problem: an open-source app that turns smartphones into a Bluetooth mesh, prioritizing simplicity and the immediate “send a message at a protest” use case.

Compared to Briar:

  • Narrower scope. Bluetooth-mesh only; no Tor, no Wi-Fi-LAN transport, no forum primitives. Just messages over BT-mesh.
  • Easier to onboard. Without the in-person contact-exchange model, a user can install and start using Bitchat with less friction. The tradeoff is a less rigorous trust model.
  • Cross-platform from the start. iOS and Android both supported, with the iOS-side limitations the platform forces on the project.
  • Smaller team and project than Briar. Real but small.

The honest positioning: Bitchat is the easier door for a curious user to walk through; Briar is the right door for a serious user with a specific threat model. Both are alive in 2026. Both have specific use cases where they’re the right answer.

Bluetooth Mesh (the standard) and the broader BT-mesh space

For completeness: there is an actual Bluetooth Mesh specification, ratified by the Bluetooth SIG, designed primarily for IoT use cases (lighting, smart-building, sensor networks). It is a real protocol with real deployments, mostly invisible to consumers and unrelated to the messaging-app category this chapter is about. Apps like Briar and Bitchat use Bluetooth (specifically BLE) but generally implement their own meshing on top of it rather than using the Bluetooth Mesh specification, because the spec’s design doesn’t fit the messaging use case very well.

If you encounter a smart bulb, a door lock, or an industrial sensor that talks “Bluetooth Mesh,” that is the spec. The mesh-messaging apps are doing something different over the same radio, and the two should not be confused.

When to actually pick one of these

The honest cases:

  1. You expect to need messaging in environments without infrastructure. Protests, festivals, disaster scenarios, dense urban environments where cellular is unreliable. Briar or Bitchat installed in advance is real preparation; neither is useful as a thing you install in the moment of crisis.
  2. You have a serious threat model. Briar, specifically, with the in-person contact-exchange ritual, is the right tool. Most readers of this book do not have this threat model, but the ones who do should know about Briar.
  3. You’re curious about Bluetooth-mesh and want to feel it. Install Bitchat on two phones in the same room. Send messages. Walk apart until the signal drops. The phenomenon is interesting and the install cost is small.

If none of those describe you, this category is probably not load-bearing for your use case. It is still worth knowing that it exists.

What this category teaches

The structural lesson here, for a book that has been mostly about long-distance mesh networking, is this: physical layer choice is not just about “how far can the network reach.” It is about who can join.

Meshtastic requires LoRa hardware. Reticulum requires deliberate setup, often hardware. Yggdrasil requires an Internet connection. Bluetooth mesh requires only a phone that is already in everyone’s pocket. That difference, the near-zero marginal cost of joining, is what makes Bluetooth-mesh applications viable for use cases like protests, where every additional hour of setup is an hour the participants don’t have.

The cost is range. The benefit is reach measured in people who can join right now, which is a different and sometimes more important metric than range measured in kilometers.

A complete mesh-networking stack for serious use ought to have something in this category and something in the LoRa-mesh category, used for different purposes: Bluetooth-mesh for the rooms-full-of-people scenario, LoRa for the people-on-different-hilltops scenario, with bridges between them where it makes sense (Reticulum is again the project that takes this seriously).

Where to go next

  • Briar at briarproject.org — the project page, install link, threat model documentation.
  • Bitchat — the project repository at github.com/permissionlesstech/bitchat (verify before relying on for a long-term plan; sub-projects in this space have moved repos historically).
  • Bluetooth Mesh specification — Bluetooth SIG, mostly relevant if you’re working on IoT rather than messaging. Linked from bluetooth.com.

What to take from this chapter

You should now be able to:

  • Recognize Bluetooth-mesh messaging as a different category of project from LoRa-mesh and IP-overlay-mesh, with different strengths and weaknesses.
  • Decide between Briar (serious threat model, broader platform integration via Tor) and Bitchat (lighter-weight, BT-only, easier onboarding).
  • Articulate the structural reason this category exists: when the radio in everyone’s pocket is the substrate, the network can have orders of magnitude more participants than any LoRa or VPN-based alternative.
  • Name when this category is the right tool (rooms-full-of-people scenarios, infrastructure-down events) and when it isn’t (regional networks, real-time low-latency comms across distances).

The next chapter is the last category: mesh VPNs. Tailscale, Nebula, ZeroTier, Headscale. The tools you might already use at work, and what makes them legitimately mesh-shaped without being mesh networking in the same sense as the rest of this book.

Mesh VPNs Are A Different Thing

This is the chapter many readers will recognize most immediately, because Tailscale is on their work laptop. Mesh VPNs are the category most working engineers in 2026 already use — knowingly or otherwise — and the one they probably call “mesh” without having thought hard about whether the word means the same thing it means for Reticulum or Yggdrasil.

It does not, quite. The word mesh is doing real work here, but for a different problem than the previous chapters were about. This chapter unpacks that, walks through the four major projects (Tailscale, Nebula, ZeroTier, Headscale), and is honest about when each is the right tool.

What mesh VPNs are solving

The classical VPN model is a hub-and-spoke. Your laptop opens a VPN connection to a corporate gateway. All traffic from the laptop to other office resources flows through that gateway, even if the destination is another laptop on the same VPN. This is operationally simple and architecturally lousy: the gateway is a bandwidth bottleneck, a latency bottleneck, and a single point of failure. If you and a colleague on the same VPN want to share a file, the bytes leave your machine, traverse the Internet to the corporate gateway, traverse it back to your colleague’s machine — even if the colleague is sitting next to you.

The mesh-VPN model says: every node should be able to connect directly to every other node. The control plane (key distribution, ACLs, identity, peer discovery) is centralized — there is a coordinator, somewhere, that helps nodes find each other and decides who is allowed to talk to whom. The data plane — actual user traffic — flows directly between peers, peer-to-peer, with no central relay involved unless NAT traversal forces one.

The traffic shape is mesh-like (point-to-point between any pair of nodes, no central forwarder for the data). The control plane is centralized (a coordinator service decides who’s allowed in the network). The substrate is the public Internet (no LoRa, no Bluetooth, no off-grid). Hence: mesh VPN. Mesh in the sense that Reticulum is mesh (peer-to-peer data flow with no central relay), and not mesh in the sense the rest of this book uses (the substrate is the regular Internet, and the control plane has a coordinator).
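The split can be sketched as a toy — an illustration of the category’s shape, not Tailscale’s or any other product’s protocol. A coordinator knows every node’s endpoint and enforces the ACL; the endpoint it hands back implies a direct peer-to-peer data path that never touches the coordinator:

```python
# Toy mesh-VPN control plane. The coordinator distributes endpoints
# and enforces ACLs; user traffic flows directly between endpoints.

class Coordinator:
    """Control plane: knows where nodes are and who may talk to whom."""
    def __init__(self):
        self.endpoints = {}   # node -> (ip, port)
        self.acl = set()      # allowed (src, dst) pairs

    def register(self, node, endpoint):
        self.endpoints[node] = endpoint

    def allow(self, a, b):
        self.acl |= {(a, b), (b, a)}

    def peer_info(self, src, dst):
        """Nodes ask where a peer is; data then flows node-to-node,
        not through the coordinator."""
        if (src, dst) not in self.acl:
            raise PermissionError(f"{src} -> {dst} denied by ACL")
        return self.endpoints[dst]

coord = Coordinator()
coord.register("laptop", ("203.0.113.5", 41641))
coord.register("nas", ("198.51.100.9", 41641))
coord.allow("laptop", "nas")
print(coord.peer_info("laptop", "nas"))   # ('198.51.100.9', 41641)
```

The structural point is in `peer_info`: the coordinator is on the path for setup and authorization, and off the path for the bytes.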

The structural test from chapter 1: does the network function when the public Internet is unavailable? For mesh VPNs, no — that is what makes them a different category from Reticulum and Meshtastic. They are solving zero-trust networking over the existing Internet, not networking that doesn’t depend on the existing Internet.

Both categories legitimately use the word mesh. Both deserve the word. They are not the same category. Keep them separate in your head.

Tailscale

Tailscale is the dominant project in this category in 2026, by a wide margin. The founding team came out of Google’s enterprise-networking world, the data plane is WireGuard rather than anything homegrown, and the project’s design choices reflect both lineages.

The architecture in summary:

  • WireGuard for the data plane. Tailscale doesn’t reinvent the encryption layer. WireGuard is a tight, well-audited, modern VPN protocol; Tailscale uses it for actual peer-to-peer traffic.
  • A coordinator service for the control plane. Each Tailscale node connects to a Tailscale-operated coordinator (in the hosted product) which handles key distribution, peer discovery, ACL enforcement, and so on. The coordinator does not see actual user traffic — it sees only the metadata necessary to set up the WireGuard sessions.
  • DERP relays for fallback. When two nodes can’t establish a direct WireGuard connection (because of NAT, firewall, or other obstructions), Tailscale falls back to relaying their traffic through DERP relays. The relays still don’t see plaintext — the WireGuard encryption is end-to-end — but they do carry the encrypted bytes. In practice, the project reports the large majority of connections succeed peer-to-peer, with DERP only as fallback.
  • Tailnet identity model. Every node belongs to a tailnet (typically one per organization or per user), with ACLs that determine which nodes can reach which others. Identity is via OAuth-shaped logins (Google, GitHub, Microsoft, etc.).

What Tailscale gets right:

  • The setup is essentially zero-friction. Install the client, log in via OAuth, the node appears in your tailnet and can reach every other node it’s allowed to. This is the experience that has driven the project’s adoption: it’s easier to set up than it is to not set up, for the homelab and small-business case.
  • NAT traversal is genuinely well-engineered. Tailscale’s NAT-traversal logic is some of the best in the industry. It handles symmetric NATs, double NATs, CGNATs, and the various other obstructions that have historically made peer-to-peer connectivity hard.
  • The free tier is real. Personal use up to a certain number of nodes is free, indefinitely. This has driven a lot of homelab adoption.
  • MagicDNS, exit nodes, subnet routing. The product features around the core mesh have been thoughtfully developed: every node gets a memorable DNS name in the tailnet, you can designate exit nodes (for routing all your traffic out a specific node), you can advertise subnets (for reaching non-Tailscale resources behind a Tailscale node).
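The exit-node and subnet-routing features above are one-flag operations. A sketch, with a hypothetical node name and subnet; the flag spellings follow recent Tailscale CLI releases, so check `tailscale up --help` on your version:

```shell
# Advertise this machine as an exit node (must then be approved in the admin console)
sudo tailscale up --advertise-exit-node

# Advertise a subnet behind this machine (hypothetical 192.168.1.0/24 home LAN)
sudo tailscale up --advertise-routes=192.168.1.0/24

# On another device: send all traffic out through a designated exit node
sudo tailscale set --exit-node=my-exit-node
```

The approval step matters: advertising a route or exit node does nothing until an admin accepts it in the tailnet's ACL/console, which is the centralized control plane doing its job.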

What Tailscale gets wrong, or is honestly limited about:

  • The coordinator is a centralization point. It is run by Tailscale Inc. If they go down, your existing peer connections continue to work, but new connections and key rotations don’t. If the company changes its policies or pricing in ways you don’t like, your options are limited. Headscale (below) addresses this.
  • It is a SaaS product. Tailscale is an open-source client connecting to a proprietary coordinator. The client is open. The control plane is not. For users who need full control of the stack, this is a real limitation.
  • Pricing for serious commercial use is non-trivial. Free for personal, paid above; the paid tiers can be expensive for large organizations. This is fair (the company has to make money) but it is also a thing to be aware of.

Where Tailscale fits in the recommendation tree (chapter 12 has the explicit version): for almost any “I want a flat network among my devices, including some that are behind NAT” use case, Tailscale is the right answer in 2026. Homelab, small business, even some enterprise. If you don’t have a specific reason to pick something else, pick Tailscale.

Headscale

Headscale is the open-source, self-hosted reimplementation of Tailscale’s coordinator. The Tailscale clients (which are open-source) talk to Headscale instead of to Tailscale’s hosted coordinator; everything else works the same.

This solves the “the coordinator is a SaaS centralization point” concern at the cost of operating the coordinator yourself. Headscale runs as a Go binary against a SQLite or PostgreSQL database; the operational overhead is real but not prohibitive. The project is alive in 2026, has a healthy contributor community, and is the standard way to run Tailscale-shaped infrastructure without depending on Tailscale Inc.

When to pick Headscale: when you specifically want full control of the control plane, can operate the service yourself, and are comfortable being responsible for the security and uptime of that service. For most users, the hosted Tailscale product is the right tool. For users who specifically need self-hosting, Headscale is the answer.

Nebula

Nebula is Slack’s open-source mesh-VPN project, with a different design philosophy from Tailscale. The architectural choices:

  • Lighthouses, not coordinators. Nebula uses lighthouse nodes that help other nodes discover each other, but the lighthouses are operated by the user, not by a third-party SaaS. You designate one or more nodes (typically with public IPs) as lighthouses; other nodes register with them.
  • Certificate-based identity. Nebula uses a PKI-shaped identity system where you have a CA certificate and issue node certificates from it. This is more rigorous than OAuth-based identity but more operational overhead.
  • Custom protocol, not WireGuard. Nebula doesn’t use WireGuard; it has its own UDP-based protocol built on the Noise framework, with similar properties (modern crypto, designed for peer-to-peer).
  • Designed for larger-scale deployments. Slack uses Nebula internally to connect tens of thousands of nodes; the project’s design choices reflect that scale.
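The certificate-based identity model, concretely: you create a CA once, then sign a certificate per node, embedding its Nebula-internal IP and optional group tags that the firewall rules key on. A sketch with hypothetical names; the flags follow the documented nebula-cert interface, but verify against the current docs:

```shell
# Create the CA (do this once; keep ca.key offline)
nebula-cert ca -name "Example Org"

# Sign a certificate for a lighthouse node, assigning its Nebula-internal IP
nebula-cert sign -name "lighthouse1" -ip "192.168.100.1/24"

# Sign a certificate for an ordinary node, with group tags for firewall rules
nebula-cert sign -name "laptop" -ip "192.168.100.2/24" -groups "laptops,dev"
```

Compare this with the OAuth flow: more ceremony up front, but the identity material never touches a third party.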

Nebula is more in the self-hosting direction than Headscale, by design. There is no managed-Nebula SaaS with anything like the footprint of the Tailscale service. You operate everything. The benefit is full control; the cost is operational complexity.

When to pick Nebula: when you want a self-hosted mesh VPN with a more rigorous identity model than the OAuth-based one, when you’re operating at a scale where the Tailscale pricing or design isn’t a fit, or when you specifically prefer a certificate-based identity model.

ZeroTier

ZeroTier predates Tailscale and approaches the problem differently. The core ZeroTier model is a virtual Ethernet: you create a network in ZeroTier, every node that joins gets a virtual Ethernet interface in that network, and Ethernet frames flow between the nodes as if they were on the same LAN.

This is operating at OSI layer 2, where Tailscale operates at layer 3. The consequence is that ZeroTier networks behave like Ethernet — broadcasts work, ARP works, mDNS works, things that depend on layer-2 visibility work — in ways that Tailscale networks don’t.

ZeroTier has a SaaS coordinator (ZeroTier Central) and an open-source self-hostable controller (the same way Tailscale and Headscale relate). The free tier is real. The pricing for paid use is similar to Tailscale’s in shape.
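Joining a ZeroTier network is a two-step handshake: the node asks to join by network ID, and the network's controller authorizes it. A sketch with a placeholder network ID; the commands follow the standard zerotier-cli interface:

```shell
# Join a ZeroTier network by its 16-hex-digit network ID (placeholder ID shown)
sudo zerotier-cli join 8056c2e21c000001

# After the controller authorizes the node: confirm membership and the managed IP
sudo zerotier-cli listnetworks
sudo zerotier-cli status
```

Once authorized, the node has a virtual Ethernet interface on that network, and layer-2 traffic flows as described above.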

When to pick ZeroTier: when you specifically need layer-2 mesh behavior. For most modern use cases, layer 3 is sufficient and Tailscale is the better choice; for legacy applications, AppleTalk-shaped requirements, or specific industrial uses, ZeroTier’s layer-2 model is the right tool.

What “mesh” is doing in this category

Stepping back: the word mesh in mesh-VPN context is doing two specific things, and it is worth seeing them clearly.

  1. The traffic flows peer-to-peer. No central relay in the data path (modulo NAT-traversal fallback). This is real, it is structurally important, and it is what distinguishes mesh VPNs from classical hub-and-spoke VPNs. The word mesh is correct here.

  2. The control plane is centralized. Tailscale’s coordinator. ZeroTier Central. Even Nebula’s lighthouses are nodes you specifically designate. This is not mesh in the sense of chapter 7’s Yggdrasil or chapter 6’s Reticulum, where there is no central authority at all. The word mesh is partially misleading here.

A more precise word for the category might be hybrid mesh — peer-to-peer data, centralized control. The industry doesn’t use that term, so we’re stuck with the ambiguous mesh VPN. As long as you keep the structural distinction in mind, you’ll know what people mean when they use the word.

How this category interacts with the rest of the book

A reader of this book might run both — a Tailscale network for their homelab and a Reticulum network for off-grid messaging — without contradiction. They are not competing tools; they are tools for different problems.

A productive way to think about it:

  • Tailscale (and the family) is the right tool when the existing Internet is the substrate you want to use, and you want a flat, encrypted, peer-to-peer overlay on top of it.
  • Reticulum, Yggdrasil, Meshtastic are the right tools when you do not want to depend on the existing Internet, either for off-grid use, for resilience, or for political reasons.

Both legitimately use the word mesh. Both are described in this book. Both are useful. Conflating them is the most common confusion in this space, and the goal of this chapter is to leave you not making that mistake.

A worked example: Tailscale in five minutes

Tailscale is, by a wide margin, the easiest project in this book to install and use. The shape of the experience:

# Linux (Debian/Ubuntu)
curl -fsSL https://tailscale.com/install.sh | sh
sudo tailscale up
# Click the OAuth link printed in the terminal, authenticate
# That's it. Your machine has a Tailscale IP.

# Now, on your phone (after installing the app, OAuth login)
ssh user@your-laptop-tailnet-name
# This works. Without port forwarding. Without DDNS. Without a public IP.

If you have not done this and you are a working engineer in 2026, you are part of a smaller and smaller set of working engineers. The product has the kind of adoption that comes from being unambiguously easier than the alternative.

For Headscale, the equivalent is: install Headscale on a server, point your Tailscale clients at your Headscale instance via the --login-server flag, run the same command. The clients don’t know the difference.

Where to go next

  • Tailscale at tailscale.com. Documentation is excellent.
  • Headscale at github.com/juanfont/headscale.
  • Nebula at github.com/slackhq/nebula.
  • ZeroTier at zerotier.com and github.com/zerotier.
  • WireGuard itself at wireguard.com if you want to understand the underlying protocol Tailscale builds on.

What to take from this chapter

You should now be able to:

  • Position mesh VPNs correctly: peer-to-peer data plane, centralized control plane, public-Internet substrate.
  • Explain why this is legitimately mesh in one sense and not in another, and not be confused when both senses come up in the same conversation.
  • Pick between Tailscale (the default), Headscale (self-hosted control plane), Nebula (certificate-based, large-scale), and ZeroTier (layer-2 virtual Ethernet) based on what you actually need.
  • Stop being confused about why a book that surveys both Reticulum and Tailscale is not a category error.

The final chapter is the recommendations. It is indexed by goal: what to install this weekend, what hardware to buy, what an evening with each looks like, and what you should expect to feel when it is running.

Pick One This Weekend

Eleven chapters of survey. One chapter of recommendations. This is the one where you decide what to actually install, and the goal is that you walk away knowing exactly what you’re going to do, what hardware you need, what an evening looks like, and what you expect to feel when it’s running.

The recommendations are indexed by goal, because the right project is genuinely different for different goals. Pick the row that matches yours.

“I want to send messages off-grid with friends”

Install: Meshtastic.

This is what Meshtastic is for. It is the entry door for a reason.

Hardware to buy:

  • Two Heltec LoRa V3 boards. ~$20–25 each. Make sure to pick the right frequency variant for your region (US-915 for North America/Australia/NZ, EU-868 for Europe, AS-923 for Japan/Korea, and so on; chapter 2 has the full table).
  • Two matching antennas. ~$5–10 each. The antenna matters more than people realize. A bad antenna will halve your range.
  • Two USB-C cables (probably already in your drawer).
  • Optional: two 2000 mAh LiPo batteries (~$8–12 each) and 3D-printed cases.

Total: $80–150 for a complete two-node setup.

What an evening looks like:

  1. Order the hardware Monday. It arrives Wednesday or Thursday.
  2. Friday evening: open the boxes, plug board #1 into your computer.
  3. Visit flasher.meshtastic.org in Chrome or Edge. Follow the flashing flow. Five minutes.
  4. Install the Meshtastic app on your phone. Pair to board #1 over Bluetooth. Set the region. Pick a channel.
  5. Repeat for board #2 with a friend’s phone.
  6. Send your first message. Feel the small, irrational delight of “this is going over a radio with no Internet involved.”
  7. Take one board outside, walk down the street with it, watch the signal degrade with distance and walls. Get a sense of what 250 bit/s of LoRa actually feels like.
  8. The next morning: drive ten minutes apart with the two boards. Test whether you can still talk. The answer will probably be yes.
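The phone app is the usual interface, but the same board can be driven from a computer via the Meshtastic Python CLI, which is handy for scripting and for step 7's range testing. A sketch, assuming the CLI's documented flags (check `meshtastic --help` for your version):

```shell
# Install the Meshtastic CLI (Python)
pip install meshtastic

# With a board plugged in over USB: set the region, then send a message
meshtastic --set lora.region US
meshtastic --sendtext "hello from the command line"

# Dump the node list the board currently knows about
meshtastic --nodes
```

The CLI talks to the board over the serial port by default; everything the app can configure, it can configure.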

What you’ll learn:

  • That LoRa range numbers in the marketing copy are real but conditional. Line of sight matters. Antennas matter. Trees matter.
  • That a two-node mesh is small but not nothing. Three nodes start to feel like a network.
  • That the OLED screen on a Heltec V3 is, surprisingly, a satisfying thing to glance at when a packet arrives.

What you won’t get out of it:

  • A real understanding of routing. Meshtastic floods. You won’t see a routing decision until you have many nodes. Chapter 3 is what you read for that.
  • Production-grade encryption. The threat model is “casual eavesdropper,” not “nation-state adversary.”
  • A scalable network. The 100-node ceiling is real. For small-group use, this doesn’t matter.

“I want to understand mesh routing by running a node”

Install: Yggdrasil on a VPS.

Yggdrasil is the project that lets you see routing happening in a network with enough scale to make the property visible, with enough engineering to make the experience pleasant, and with low enough barrier to entry that you can do it from a cheap VPS.

Hardware to buy: None. You need a $5/month VPS — Hetzner, DigitalOcean, OVH, Vultr, your favorite.

What an evening looks like:

  1. Spin up a small Linux VPS. Ubuntu 22.04 or 24.04 is fine.
  2. apt install yggdrasil (it’s in the standard repos in 2026).
  3. yggdrasil -genconf > /etc/yggdrasil.conf. Edit the config to add three or four public peers from github.com/yggdrasil-network/public-peers. Pick geographically diverse ones.
  4. systemctl start yggdrasil.
  5. ip addr show ygg0 — your Yggdrasil IPv6 address.
  6. ping6 200:6e7c:5f9c:... against an in-network service from the public peers list. Watch it work.
  7. While the network is bootstrapping, read the Yggdrasil whitepaper and the routing-protocol section of chapter 3 of this book.
  8. Run yggdrasilctl getPeers periodically. Watch new peers appear as the network discovers you.
  9. The next day: try running traceroute6 from your node to a few in-network destinations. See what the spanning-tree paths actually look like.
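The peer entries from step 3 go in the Peers list of /etc/yggdrasil.conf, which is HJSON-shaped. A sketch with placeholder hosts; take real entries from the public-peers repository, and quote the URIs if your config style prefers it:

```
Peers: [
  tls://peer-one.example.org:443
  tcp://peer-two.example.net:9001
]
```

Three or four geographically diverse peers is enough; the protocol does the rest of the topology discovery itself.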

What you’ll learn:

  • What spanning-tree routing looks like in production. The fact that two nodes in the same city sometimes route through a third country, and why that’s the protocol working as designed.
  • What it feels like to be a node in a self-organizing IPv6 overlay. The network exists. You are now part of it. There is no central authority. This is a different feeling than running a Tailscale node.
  • The difference between path-discovery latency on first contact (slow) and steady-state forwarding (fast).

What you won’t get out of it:

  • Off-grid mesh. Yggdrasil runs over the public Internet. When the Internet is down, your Yggdrasil network is down.
  • Hardware-in-your-hand mesh. There is no LoRa, no radio, no physical thing on your desk. The mesh is virtual.

“I want to build something that bridges substrates”

Install: Reticulum (RNS) plus an RNode.

Reticulum is the project that rewards investment, and “I want to build something” is the use case it’s best suited for.

Hardware to buy:

  • One RNode. $100 from the project store, or build your own from a LilyGo T-Echo or similar ($40–60) using the published RNode firmware. The pre-built option is recommended unless you specifically enjoy building hardware.
  • A Linux machine to run the RNS daemon. A Raspberry Pi 4 or 5 is ideal; any Linux laptop or VPS works too.

Total: $100–250.

What an evening looks like:

  1. Get RNS running on your machine. pip install rns. Run rnsd once to generate the default config.
  2. Plug in the RNode. Edit ~/.reticulum/config to add an RNodeInterface for it. Restart rnsd.
  3. Add a second interface — a TCPClientInterface to one of the public Reticulum testnet nodes (the project page has the current list).
  4. Add a third interface — AutoInterface for your local WiFi.
  5. Install Sideband on your laptop. Pair it to RNS. Watch your node appear on the network.
  6. Send a message to a known public destination. Watch path discovery happen.
  7. Open ~/.reticulum/storage and look at the path table that’s accumulating.
  8. The next evening: write a 30-line Python script that uses the RNS API to send a packet. Use the worked example in chapter 6 as a template. Make it do something specific to your use case — broadcast a message every minute, listen for a particular destination, whatever.
  9. The weekend after that: design and build the actual application you wanted to build. You will have everything you need.
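The three interfaces from steps 2 through 4 end up as sections of ~/.reticulum/config. A sketch with placeholder values: the section names are mine, the parameter names follow the Reticulum manual's interface examples, and the LoRa parameters must match your region and the other nodes you want to hear, so check the current docs before copying:

```
[interfaces]
  [[Default Interface]]
    type = AutoInterface
    interface_enabled = True

  [[Testnet TCP]]
    type = TCPClientInterface
    interface_enabled = True
    target_host = testnet.example.org
    target_port = 4242

  [[RNode LoRa]]
    type = RNodeInterface
    interface_enabled = True
    port = /dev/ttyUSB0
    frequency = 915000000
    bandwidth = 125000
    txpower = 17
    spreadingfactor = 8
    codingrate = 5
```

Restart rnsd after editing, and the daemon bridges all enabled interfaces automatically; that bridging is the "transport-agnostic" property the next section talks about.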

What you’ll learn:

  • What it feels like to write code against a network stack where cryptographic identity, encryption, and routing are all the stack’s job, not yours.
  • What “transport-agnostic” means in practice when you can take an interface offline and watch the network keep working over the others.
  • Why people who know this space recommend Reticulum for serious work.

What you won’t get out of it:

  • An immediate first-evening payoff. The bootstrap is longer than Meshtastic’s. Plan for a weekend, not an evening.
  • A polished consumer-app experience. Sideband and MeshChat are good, but they’re closer to “good open-source app” than to “Apple-grade product.”

“I want to mesh-VPN my homelab”

Install: Tailscale (or Headscale if you want self-hosting).

This is the chapter-11 category. Mesh VPN. Different problem from the LoRa-shaped projects above.

Hardware to buy: None, beyond the machines you already want to network.

What an evening looks like (Tailscale, hosted):

  1. curl -fsSL https://tailscale.com/install.sh | sh on each machine.
  2. sudo tailscale up on each. OAuth-login the first time.
  3. The machines are now on a flat, encrypted IPv4-and-IPv6 network. They can reach each other by tailnet name.
  4. Install the phone app. Same OAuth login. Your phone is now on the network.
  5. SSH from your phone to your home server. Without port forwarding. Without DDNS. Without a public IP. This will, the first time, feel slightly like magic.

Five minutes, no exaggeration.

What an evening looks like (Headscale, self-hosted):

  1. Install Headscale on a server with a public IP. (apt install headscale in 2026, or run from the Go binary.)
  2. Configure with a hostname and TLS.
  3. Set up your first user and pre-auth key.
  4. On each client machine: tailscale up --login-server=https://your-headscale.example.org.
  5. Same mesh, same experience, your control plane.
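Steps 3 and 4 concretely, with a hypothetical user name. Headscale's CLI flags have shifted between releases, so verify against `headscale --help` on the version you install:

```shell
# On the Headscale server: create a user and a reusable pre-auth key
headscale users create alice
headscale preauthkeys create --user alice --reusable --expiration 24h

# On each client: point the stock Tailscale client at your coordinator
sudo tailscale up \
  --login-server=https://your-headscale.example.org \
  --authkey=<key-from-previous-step>
```

The pre-auth key replaces the OAuth login of the hosted product; from that point on, the client neither knows nor cares that the coordinator isn't Tailscale Inc.'s.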

What you’ll learn:

  • What zero-trust networking actually feels like, after years of reading about it.
  • That NAT traversal, in 2026, is mostly a solved problem if the right people have written the code for you.
  • How much of the “VPN is annoying” experience was specifically a hub-and-spoke-VPN problem rather than an inherent VPN problem.

What you won’t get out of it:

  • Off-grid anything. Tailscale is a public-Internet overlay; it cannot operate without the Internet.
  • Insight into the mesh-routing-protocol design space. Tailscale is structurally a different category from Reticulum and Yggdrasil; do not expect to learn about routing protocols from running it.

“I want to send a message at a protest”

Install: Briar (Android) or Bitchat (Android/iOS).

Bluetooth-mesh messaging. Chapter 10 is the long version. The short version: install in advance, exchange contacts in advance, test in advance. The thing you do not want to do is install one of these for the first time during the event you need it for.

Hardware to buy: None. Just phones.

What an evening looks like:

  1. Install the app on two or three phones (yours and a friend’s, ideally).
  2. Exchange contacts via QR code, in person.
  3. Test sending messages between phones in the same room.
  4. Walk apart. Watch range degrade. Get a sense of where the limits are.
  5. Try with three or four phones in a chain — does B forward A’s message to C when A and C are out of direct range?
  6. Now you know what the tool actually does, and you have it ready for whenever it might matter.

What you’ll learn:

  • How short Bluetooth range really is.
  • That a tool you already practiced with works dramatically better than a tool you’re learning under stress.

A summary table

| Goal | Project | Hardware | Cost | Time |
|---|---|---|---|---|
| Off-grid messaging with friends | Meshtastic | 2 × Heltec LoRa V3 + antennas | $80–150 | An evening |
| Understand mesh routing | Yggdrasil | $5/mo VPS | $5 | An evening |
| Build something serious | Reticulum + RNode | RNode + Pi or laptop | $100–250 | A weekend |
| Mesh-VPN a homelab | Tailscale | (existing machines) | $0 | 5 minutes |
| Self-hosted mesh-VPN | Headscale | A small VPS | $5/mo | An hour |
| Messaging at protests | Briar / Bitchat | (your phone) | $0 | An evening |

Pick one. Then pick another.

The most useful framing for the working engineer reading this book: pick the goal that matches what you actually want, do the relevant install, then — when the immediate curiosity has been satisfied — pick a second one from the list and do it. The goals are not mutually exclusive. They are tools. Most people who get serious about this space end up running Tailscale for their homelab and a Meshtastic node on their desk and eventually a Reticulum stack for the project they decided to build.

The point is not which one you pick first. The point is to stop reading about mesh networking and start running some of it. The hardware is cheap, the software is free, the time investment is bounded, and at the end of the evening you will have done something that, ten years ago, would have been a research project.

That is the offer this book has been making. Take it.

What to take from this chapter — and from the book

You should now be able to:

  • Pick a project that matches your actual goal, with a clear sense of what hardware to buy, what an evening looks like, and what you’ll learn.
  • Distinguish the four uses of mesh and not be confused by vendor copy.
  • Reason about routing protocols, physical-layer constraints, and the tradeoffs each project is making, well enough to read the project’s own documentation productively.
  • Have an opinion about which projects are alive and worth your time in 2026, which are dormant and should be honored without being installed, and why the difference matters.

Go install one. The hardware is on your desk by next Wednesday if you order tonight. The evening is yours.

Acknowledgments

Thanks to Georgiy Treyvus, the CloudStreet PM who keeps the backlog and who decided this book belonged on it.

License

This book is dedicated to the public domain under CC0 1.0 Universal.

To the extent possible under law, the authors have waived all copyright and related or neighboring rights to this work.

You can copy, modify, distribute, and use the work, even for commercial purposes, all without asking permission. You do not need to credit the authors. You do not need to ask first. The full legal text is in the LICENSE file at the repository root.