Preface
This book started, as many technical books do, with frustration.
Specifically: watching a colleague spend four hours configuring webpack to do something the browser has done natively since 2017. Four hours. For a feature that shipped in Chrome 61, Firefox 60, and Safari 10.1. The developer in question was smart, experienced, and completely unaware that what they were trying to do was already done.
That's not a failure of intelligence. That's a failure of the information environment around us. We've built an ecosystem where the default answer to every web development question is "add a build step," and we've been doing it so long that we've collectively forgotten to ask whether the browser can just... do the thing.
It frequently can.
This is not an anti-tooling screed. There are real reasons build systems exist, and Chapter 11 goes into them honestly. TypeScript is genuinely useful. Tree shaking matters at scale. If you're shipping a production React application to millions of users, you probably want a bundler. This book is not for that moment.
This book is for every other moment. The internal tool that doesn't need to support IE11 because IE11 is dead. The prototype that became production because it worked. The side project where the webpack config is longer than the application code. The team that spent more time on their CI pipeline than their users ever noticed.
For all of those moments: the browser is much more capable than the ecosystem's defaults suggest, and the gap between "what developers think requires a build step" and "what actually requires a build step" has been widening for eight years.
What you'll find here:
Real working code. Every example in this book runs without compilation — you can drop it in a directory, serve it, and it works. Where something is more complex, there's a working repository linked. Nothing here is theoretical.
Honest limits. The zero-build approach has real constraints. Each chapter names them. Where bundling is the right answer, the text says so.
Dry humor. Web development is inherently absurd in ways worth noting. The jokes are in service of the point, not instead of it.
What you need to follow along:
A modern browser (anything released in the last four years), a text editor, and a way to serve static files. python -m http.server works. So does npx serve. So does Caddy, nginx, or any CDN with an S3 bucket behind it. You do not need Node. You do not need npm. You do not need a package.json.
That last sentence will feel strange by the end of Chapter 2 and obvious by the end of Chapter 13. That's the goal.
Let's start at the beginning.
Thanks to Georgiy Treyvus, CloudStreet Product Manager, whose idea this was.
The Build System You Didn't Ask For
A Brief History of Unnecessary Complexity
"We had to bundle because the browser couldn't load modules." — Every tech lead, 2013–2022, about a problem that was solved in 2017.
Let's start with a question nobody was asked: at what point did "write some JavaScript and open it in a browser" become a multi-step process requiring a configuration file, a task runner, a transpiler, a module bundler, a dev server with hot module replacement, a source map generator, and a CI pipeline to run it all in production?
The answer is: gradually, then all at once, and then so completely that we forgot there was ever another way.
1995–2009: The Innocent Years
JavaScript started as a scripting language dropped directly into HTML. <script src="app.js">. The browser loaded it. It ran. If you needed two scripts, you wrote two script tags and hoped they loaded in the right order. This was genuinely annoying — global namespace pollution, load order dependencies, no encapsulation — but it was simple enough that a developer could hold the entire mental model in their head.
The tooling that emerged in this era was modest: CSS minifiers, JavaScript compressors, YUI Compressor (2007). You ran a script, it made your files smaller, you deployed the smaller files. One step. One tool. One problem solved.
2010–2012: The Module Problem Appears
CommonJS arrived with Node.js in 2009. Suddenly JavaScript had a real module system — require() and module.exports. This was great for server-side code. It was completely incompatible with the browser, which didn't have a module system at all.
Developers who wanted modules in the browser had two choices:
- AMD (Asynchronous Module Definition) with RequireJS — wrap every module in a define() call and let RequireJS load them asynchronously. Syntactically ugly, but it worked.
- Build-time bundling — take all your modules and concatenate them into a single file that worked in any browser.
Browserify (2011) made option 2 easy: write CommonJS modules as if you were writing for Node, run browserify, get a browser-compatible bundle. It was clever engineering solving a real problem. The problem was that the browser had no module system.
2013–2016: Complexity Compounds
Then several things happened at once, and they compounded each other into the ecosystem we have today:
React shipped (2013) and brought JSX with it. JSX is not JavaScript. It requires compilation. Babel emerged to handle this, and while it was at it, also transpiled the ES6 features that browsers hadn't implemented yet. Now you had a transpiler in your pipeline.
Webpack arrived (2012, gained traction 2014) and solved the bundling problem with more configurability than anyone needed. Where Browserify was a focused tool, webpack was a platform. You could transform anything into anything. CSS, images, fonts, markdown — all became "modules" that webpack could process. All required configuration.
ES6 shipped in 2015 with classes, arrow functions, destructuring, template literals, and — critically — a native module syntax: import and export. The browser would eventually support this natively. It didn't matter: by the time browsers shipped ES modules in 2017, the ecosystem had fully committed to bundling. The infrastructure existed, the tutorials assumed it, the job postings required it. Bundling had become the default not because the browser couldn't handle modules, but because the ecosystem had organized itself around the assumption that it couldn't.
2017: The Moment That Should Have Changed Everything
In 2017, something important happened and nobody quite noticed.
Chrome 61 shipped native ES module support. Firefox 60 in 2018. Safari had already shipped it in 10.1 (2017). Edge 16, also 2017.
<!-- This works. In every modern browser. Has for years. -->
<script type="module" src="./app.js"></script>
// app.js — runs in the browser, no compilation, no bundling
import { formatDate } from './utils.js';
import { renderChart } from './chart.js';
const data = await fetch('/api/data').then(r => r.json());
renderChart(document.getElementById('chart'), data);
This works. Module loading like this worked in 2017; the top-level await shown here landed in 2021. It works now. No webpack. No Babel. No node_modules. No build step.
The ecosystem's response to this was, essentially, to continue doing what it was already doing.
2018–2023: The Tooling Metastasizes
The original problems — no native modules, missing ES6 syntax, browser fragmentation — were largely solved. New problems appeared to justify the existing infrastructure:
Bundle size. If you're sending a lot of JavaScript, tree shaking removes the code you don't use. Fair. But most applications became large enough to need tree shaking because they were consuming massive npm packages that were themselves designed for bundled environments. The tool created the problem it then solved.
TypeScript. Genuinely useful. Requires compilation. However: Deno runs TypeScript natively, and JSDoc types give you type checking in plain JavaScript. "We use TypeScript" stopped being sufficient justification for a build step.
Framework requirements. React's JSX isn't standard JavaScript. Vue's single-file components aren't standard HTML. These are framework-specific syntaxes that require tooling. But: Preact works with native ESM. Lit works with native ESM. Vue 3 works with native ESM. The framework requiring a build step is a property of the specific framework, not of "building web applications."
Developer experience. Hot module replacement, fast refresh, dev proxies — these are genuinely nice. They're also not production requirements. You can have a good development experience without webpack.
What happened between 2018 and 2023 is that the tooling layer got faster (Vite, esbuild, SWC), more opinionated (Create React App, Next.js, Nuxt), and more deeply embedded. The build step got harder to avoid, not because the underlying requirements changed, but because the ecosystem assumed it.
What We're Actually Dealing With
Here's the honest accounting of what a modern JavaScript build pipeline does and whether you need it:
| Build step | Why it exists | Do you need it? |
|---|---|---|
| Module bundling | Browser had no native modules | No, since 2017 |
| Transpilation (ES6→ES5) | Browser compatibility | No, unless you support IE11 (you don't) |
| JSX compilation | JSX isn't valid JavaScript | Only if you use JSX |
| TypeScript compilation | TypeScript isn't valid JavaScript | Only if you use TypeScript |
| CSS preprocessing | CSS lacked variables, nesting | No, since 2022–2023 |
| Minification | Reduces file sizes | Yes, for production — one step |
| Tree shaking | Removes unused code | Only if you import large libraries |
| Code splitting | Loads code on demand | Available natively with dynamic import() |
Half this list hasn't been necessary for years. The other half is only necessary if you've made specific choices — usually choices that were themselves influenced by an ecosystem that assumed bundling.
The Feedback Loop
The worst part isn't the complexity. It's the self-reinforcing nature of it.
New developers learn React with Create React App (or Vite). The tool hides all build configuration. They graduate to a job where webpack configuration exists. They need to understand it, modify it, maintain it. They learn webpack. They become the person who maintains the webpack config. When they write tutorials or start new projects, they reach for the tools they know. They don't question whether the tools are necessary because the tools have always been there.
This is not a conspiracy. Nobody decided to make web development complex. It emerged from a series of individually reasonable decisions made when the browser genuinely couldn't do what developers needed, and it persisted because ecosystems are sticky.
The people who wrote Browserify in 2011 were solving a real problem. The people who wrote webpack were solving real problems. The people who kept improving the tooling were making things genuinely better. And the result, now, is a default ecosystem configuration that is dramatically more complex than most applications require.
The Counter-Argument You're Already Making
"But I need TypeScript." Maybe. JSDoc types give you type inference in VS Code without compilation, and tsc --noEmit checks your types without producing a build artifact. Deno runs TypeScript natively.
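Here's what the JSDoc approach looks like in practice — a minimal sketch (formatPrice is an invented example, not from any library); running tsc --noEmit --checkJs over this file would flag a call like formatPrice('19.99') as a type error:

```javascript
// Plain JavaScript with JSDoc type annotations — no compilation step.
// Editors (and `tsc --noEmit --checkJs`) read these comments for type checking.

/**
 * @param {number} cents - price in cents
 * @param {string} [currency] - currency code to append
 * @returns {string}
 */
function formatPrice(cents, currency = 'USD') {
  return `${(cents / 100).toFixed(2)} ${currency}`;
}

console.log(formatPrice(1999)); // "19.99 USD"
```

The file that ships to the browser is the file you wrote; the types live entirely in comments.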
"But I need npm packages." Import maps let you use npm-compatible packages from CDNs like esm.sh and jspm.io in the browser, by bare specifier, without a local install.
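A sketch of what that looks like — the packages and pinned versions here are illustrative, not a recommendation:

```html
<script type="importmap">
{
  "imports": {
    "react": "https://esm.sh/react@18",
    "date-fns": "https://esm.sh/date-fns@3"
  }
}
</script>
<script type="module">
  import { format } from 'date-fns'; // bare specifier, resolved by the map above
</script>
```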
"But tree shaking." If your application is large enough to meaningfully benefit from tree shaking, this book will tell you that honestly in Chapter 11. Most applications aren't.
"But deployment." A directory of HTML, CSS, and JavaScript files deploys to Netlify, Vercel, Cloudflare Pages, S3, or any web server in the world. Zero build configuration required.
What This Book Is
This book is a systematic examination of what you can do without a build step, using the current web platform — not the web platform of 2016, but the one that exists right now in every modern browser.
It covers:
- Native ES modules and what the browser actually does when it loads them
- Import maps for dependency management without npm
- Modern CSS that makes preprocessors unnecessary for most use cases
- HTML capabilities that have shipped in the last five years
- Server-side development without a compilation step
- Testing without Jest, webpack, or a fifteen-minute CI bootstrap
- Deployment without a build pipeline
- Honest assessment of when you actually do need a build step
The goal isn't to convince you that build systems are always wrong. The goal is to make the decision conscious. Use a build system because you need it, not because it was the default.
In 2017, Chrome shipped ES module support. In that same year, most tutorial sites were still teaching require(). The gap between what the platform can do and what the ecosystem assumes it can do is the territory this book covers.
You've been carrying a build system for longer than you needed to. Let's see what's underneath it.
The Browser Already Knows How to Load Files
ES Modules Are Real and They Work
Here is a complete web application:
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<title>My App</title>
<link rel="stylesheet" href="./styles.css">
</head>
<body>
<div id="app"></div>
<script type="module" src="./app.js"></script>
</body>
</html>
// app.js
import { createRouter } from './router.js';
import { fetchUser } from './api.js';
import { renderDashboard } from './components/dashboard.js';
const router = createRouter();
const user = await fetchUser('/api/me');
renderDashboard(document.getElementById('app'), { user, router });
No webpack. No Vite. No esbuild. No package.json. No node_modules. No config files of any kind. You serve this directory and it works — in Chrome, Firefox, Safari, and Edge, on any device released in the last five years.
This chapter explains how. Not just that it works, but what the browser actually does when it loads a module, why it's been designed this way, and what this means for application architecture.
How the Browser Loads Modules
When the browser encounters <script type="module" src="./app.js">, it does something fundamentally different from what it does with <script src="./app.js">.
With a classic script, the browser fetches the file and executes it. The script runs in the global scope. Variables declared at the top level are global. If two scripts declare const user = ..., they conflict. Load order determines what's available.
With a module script, the browser:
1. Fetches app.js
2. Parses it, looking for import statements before executing any code
3. Fetches all imported modules in parallel
4. For each fetched module, repeats steps 2–3 recursively
5. Builds the complete dependency graph
6. Executes modules in dependency order (dependencies before dependents)
7. Executes app.js last
This is the module loading algorithm, and it has two important properties worth understanding.
Modules Are Singletons
If two different modules both import ./utils.js, the browser fetches it once and gives both importers a reference to the same module instance. This is guaranteed by the spec and implemented in every browser.
// a.js
import { counter } from './counter.js';
counter.increment();
// b.js
import { counter } from './counter.js';
console.log(counter.value); // 1, not 0 — same instance
// main.js
import './a.js';
import './b.js';
// counter.js
export const counter = {
value: 0,
increment() { this.value++; }
};
This is actually how you want shared state to work. CommonJS did this too (modules are cached after first require). The browser's native ESM does the same thing.
Imports Are Static and Resolved Before Execution
The parser reads your import declarations before running a single line of code. This is intentional: it enables efficient parallel loading, makes circular dependency analysis possible, and prevents certain classes of bugs. It also means you can't put an import inside an if statement:
// This is a syntax error
if (condition) {
import { thing } from './thing.js'; // SyntaxError
}
// This is fine — dynamic import returns a Promise
if (condition) {
const { thing } = await import('./thing.js');
}
The static analysis restriction on import declarations is a feature, not a limitation. It's what lets bundlers (when you use them) do tree shaking, and it's what lets browsers preload your dependency graph efficiently.
The Module Specifier Problem
The one genuine friction point in native ESM is the module specifier. In Node and bundled environments, you write:
import React from 'react';
import { format } from 'date-fns';
These are "bare specifiers" — module names without a path. They don't mean anything to the browser. The browser only understands:
- Relative paths: './utils.js', '../lib/format.js'
- Absolute paths: '/src/utils.js'
- Full URLs: 'https://esm.sh/react'
Bare specifiers throw a TypeError in the browser. This is the main reason people thought "native ESM doesn't work for real applications." Import maps solve this (Chapter 4), but even without them, you can use full URLs:
import React from 'https://esm.sh/react';
import { format } from 'https://esm.sh/date-fns';
This works. It's not ideal for large dependency trees, but for small projects it's perfectly functional. And if you want bare specifiers, one JSON import map in your HTML gives you that.
A Real Application Structure
Let's build something that has structure — multiple modules, shared utilities, real data fetching — without any build step.
myapp/
├── index.html
├── styles.css
├── app.js
├── router.js
├── api.js
└── components/
├── header.js
├── dashboard.js
└── user-card.js
index.html:
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>Team Dashboard</title>
<link rel="stylesheet" href="./styles.css">
</head>
<body>
<div id="app"></div>
<script type="module" src="./app.js"></script>
</body>
</html>
api.js:
// Thin wrapper around fetch — handles JSON, base URL, errors
const BASE_URL = '/api';
async function request(path, options = {}) {
const response = await fetch(`${BASE_URL}${path}`, {
headers: { 'Content-Type': 'application/json', ...options.headers },
...options,
});
if (!response.ok) {
throw new Error(`API error: ${response.status} ${response.statusText}`);
}
return response.json();
}
export const api = {
getTeam: () => request('/team'),
getUser: (id) => request(`/users/${id}`),
updateUser: (id, data) => request(`/users/${id}`, {
method: 'PATCH',
body: JSON.stringify(data),
}),
};
components/user-card.js:
// Returns a DOM element — no framework, no JSX, just the DOM
// (note: interpolating untrusted data into innerHTML is an XSS risk;
// name, role, and avatar are assumed trusted here)
export function UserCard({ name, role, avatar, onSelect }) {
const card = document.createElement('article');
card.className = 'user-card';
card.innerHTML = `
<img src="${avatar}" alt="${name}" width="48" height="48">
<div class="user-info">
<h3>${name}</h3>
<p>${role}</p>
</div>
`;
card.addEventListener('click', () => onSelect({ name, role }));
return card;
}
components/dashboard.js:
import { UserCard } from './user-card.js';
export function Dashboard({ container, team }) {
container.innerHTML = '<h1>Team Dashboard</h1>';
const grid = document.createElement('div');
grid.className = 'team-grid';
for (const member of team) {
const card = UserCard({
...member,
onSelect: (user) => console.log('Selected:', user),
});
grid.appendChild(card);
}
container.appendChild(grid);
}
app.js:
import { api } from './api.js';
import { Dashboard } from './components/dashboard.js';
// Top-level await works in modules — no wrapper function needed
const team = await api.getTeam();
Dashboard({
container: document.getElementById('app'),
team,
});
This is a real application. It has data fetching, multiple modules, component composition, and DOM manipulation. It runs in the browser as-is. No compilation, no bundling, no toolchain.
Top-Level Await
Notice await api.getTeam() at the top level of app.js. This works in modules. It would be a syntax error in a classic script.
Top-level await treats the module as an async function, but from the importing module's perspective, it still resolves before any code that depends on it runs. If app.js is the entry point, the browser simply waits for its top-level awaits to resolve before executing the module. If module B imports module A which uses top-level await, module B waits for module A to complete.
This means no more immediate invocation wrappers:
// Before: classic scripts needed this
(async function() {
const data = await fetchData();
render(data);
})();
// Now: modules just do this
const data = await fetchData();
render(data);
Browser support: Chrome 89, Firefox 89, Safari 15, Edge 89. If you're targeting modern browsers, it's available.
Module Scope vs. Global Scope
Modules have their own scope. Variables declared at the top level of a module are not global. This is different from classic scripts where top-level var declarations become properties of window.
// classic.js (script)
var DEBUG = true; // window.DEBUG === true
// module.js (type="module")
const DEBUG = true; // not on window, not accessible from other scripts
This is strictly better behavior, but it means you can't expose module values to the console by accident the way you could with scripts. If you need something globally accessible during development, you can always do window.myThing = myThing explicitly, but you won't do it accidentally.
It also means: if you're refactoring a classic-script application to use modules, watch for code that depends on other scripts' globals. That pattern breaks — and breaking it is correct, and you should fix it properly.
CORS and the Local Development Problem
There's one thing you need to know before you get to file:// URLs and wonder why nothing works.
Modules loaded over file:// URLs fail because of CORS. The browser treats a file:// page as an opaque origin, and module fetches are subject to same-origin checks that an opaque origin can't pass, so cross-file imports fail. This is not a bug; it's CORS doing its job.
You need a local HTTP server to develop with native ES modules.
This is less painful than it sounds:
# Python (installed on almost everything)
python3 -m http.server 8080
# Node (if you have it)
npx serve .
# Deno
deno run --allow-net --allow-read https://deno.land/std/http/file_server.ts
# Caddy (if installed)
caddy file-server --listen :8080
Start one of these in your project directory and open http://localhost:8080. That's your entire dev setup. No webpack dev server, no config, no plugins. A directory and a file server.
Performance: Bundlers vs. Native Loading
The question that should occur to you: if the browser makes one HTTP request per module, doesn't that get slow?
For small applications: no. HTTP/2 (which any modern server and CDN supports) multiplexes requests over a single connection. Fetching 20 small modules is not meaningfully slower than fetching one large bundle.
For large applications: yes, eventually. If you have hundreds of modules, the waterfall of module graph resolution can add latency. This is where bundling at deploy time (not during development) makes sense — but it's a deployment optimization, not a development requirement.
The ecosystem inverted this. Bundling became the default development experience, and unbundled development was the exception for production. Native ESM inverts it back: develop with the browser's actual module system, optimize for production separately if you need to.
A few practical data points:
- Google's Squoosh uses no bundler in development
- Preact is 3KB and works with native ESM directly
- Most "small to medium" applications (< 50–100 modules) have imperceptible load time differences between bundled and unbundled
The performance argument for bundling is real but often applied where it doesn't matter. A corporate internal tool that serves 50 users doesn't have a performance problem that requires webpack.
Module Preloading
If you do have a deep module graph and want to preload it, there's a native mechanism:
<link rel="modulepreload" href="./components/dashboard.js">
<link rel="modulepreload" href="./components/user-card.js">
<link rel="modulepreload" href="./api.js">
modulepreload tells the browser to fetch and parse these modules before they're imported, eliminating the waterfall. You get bundle-like performance without a bundler. This has long been supported in Chrome and Edge, and (since 2023) in Safari and Firefox.
What You Get That Bundlers Can't Give You
Native ESM isn't just "bundlers but slower." It has properties that bundled code doesn't:
No source map indirection. When you inspect a network error or debug in DevTools, the source you see is the actual file you wrote — not a minified bundle with a source map approximation. The module URL in an error stack trace is real and navigable.
Module caching is semantically correct. The browser caches modules by URL. If you cache-bust properly (by version in the URL), old modules expire correctly. With bundles, you cache-bust the whole bundle when any file changes.
Incremental loading. With dynamic import(), code only loads when it's needed. Bundlers simulate this with code splitting; native ESM does it natively with no configuration.
Live development without HMR. The browser caches modules for the session, and a hard refresh (Cmd+Shift+R) gives you a clean slate. For many kinds of changes, that's all you need.
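The cache-busting pattern mentioned above can be as simple as a version query on the module URL. A sketch — the versioned helper and the version string are hypothetical, not a standard API:

```javascript
// Build a cache-busted module specifier by appending a version query.
// Bump APP_VERSION on deploy to invalidate previously cached modules.
const APP_VERSION = '1.4.2';

function versioned(path, version = APP_VERSION) {
  return `${path}?v=${encodeURIComponent(version)}`;
}

// usage (in a browser): const { api } = await import(versioned('./api.js'));
console.log(versioned('./api.js')); // "./api.js?v=1.4.2"
```

Because the browser caches modules by full URL, changing the query string is enough to force a fresh fetch.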
The browser's module system is not a subset of what bundlers provide. For many classes of application, it's a superset — semantically correct, debuggable, and free of configuration. The next chapter goes deeper into what native ESM looks like when you take it seriously as a production architecture.
What you've seen here is that the baseline capability — multiple modules, dependency resolution, top-level await, component composition — is available without any tooling at all. The browser knew how to load files all along. You just had to let it.
Native ESM in the Browser: No Bundler Required
Module loading, dynamic imports, and performance — without a build step
The previous chapter showed a working application. This one goes deeper — into how the browser's module system actually behaves, what you can do with it that bundlers simulate awkwardly, and where the interesting edges are.
Static vs. Dynamic Imports
Native ESM gives you two ways to import modules, and they have meaningfully different semantics.
Static imports appear at the top of a file, are resolved before any code runs, and create a live binding to the exported values:
import { formatDate, parseDate } from './date-utils.js';
Dynamic imports are expressions that return a Promise, can appear anywhere in your code, and are the native answer to code splitting:
// Load a module only when a button is clicked
document.getElementById('load-chart').addEventListener('click', async () => {
const { renderChart } = await import('./chart.js');
renderChart(data);
});
Bundlers simulate dynamic imports with code splitting. With native ESM, it just works. The browser fetches chart.js (and its dependencies) when the import expression resolves, caches it, and you're done. No webpack magic numbers, no chunk file naming configuration, no async chunk loading infrastructure. It's a Promise that resolves to a module.
Conditional Module Loading
Dynamic imports are how you do conditional loading without a build step:
// Load the right locale module
const locale = navigator.language.split('-')[0];
const { messages } = await import(`./i18n/${locale}.js`).catch(() =>
import('./i18n/en.js') // Fallback to English
);
// Polyfill only when needed
if (!('IntersectionObserver' in window)) {
await import('./polyfills/intersection-observer.js');
}
// Development-only tooling
if (location.hostname === 'localhost') {
const { setupDevTools } = await import('./dev/tools.js');
setupDevTools();
}
That last one deserves attention. With a bundler, shipping dev tooling to production requires configuration — environment variables, build-time dead code elimination, tree shaking. With native ESM and a hostname check, the dev code simply never loads in production because the import expression never executes. No bundler, no config, no env vars.
Live Bindings: ESM's Unexpected Feature
One of ESM's less-discussed properties is that exported bindings are live. When a module exports a variable and that variable changes, importers see the new value.
// counter.js
export let count = 0;
export function increment() {
count++; // This updates the exported binding
}
// main.js
import { count, increment } from './counter.js';
console.log(count); // 0
increment();
console.log(count); // 1 — the binding updated
This is different from CommonJS, where destructuring count out of require('./counter.js') copies the value at import time. With ESM, you're importing a reference to the binding, not a copy of the value.
In practice, this matters most for:
- Module-level state shared across multiple importers
- Circular dependencies (live bindings make them tractable)
- Re-exported values from other modules
It's the kind of detail that doesn't matter until it does, and when it does, it explains behavior that would otherwise look like a bug.
Import Assertions and Module Types
The browser's module system has extended beyond JavaScript. You can import JSON natively:
import config from './config.json' with { type: 'json' };
console.log(config.apiUrl);
Support: the older assert form shipped as early as Chrome 91; the current with form is supported in Chrome 123+, Edge 123+, Safari 17.2+, and recent Firefox. The syntax changed from assert (old) to with (current) — use with.
CSS modules are in development and shipping progressively, but the intent is:
import styles from './component.css' with { type: 'css' };
document.adoptedStyleSheets = [styles];
For now, CSS in modules is better handled with Constructable Stylesheets or just plain <link> tags. But the direction is clear: the browser's module system is becoming a general import mechanism for web resources, not just JavaScript files.
Module Workers
Web Workers support ES modules, which means you can write modern, modular worker code without a bundler to transform it:
// main.js
const worker = new Worker('./processor.worker.js', { type: 'module' });
worker.postMessage({ data: largeArray });
worker.onmessage = (e) => console.log('Result:', e.data);
// processor.worker.js
import { expensive } from './computations.js';
self.onmessage = (e) => {
const result = expensive(e.data.data);
self.postMessage(result);
};
The worker is created with type: 'module' and gets the full ESM feature set — imports, top-level await, all of it. Historically, module syntax in workers meant a bundler inlining everything into a single worker file. Now it's a { type: 'module' } option.
Support: Chrome 80+, Edge 80+, Safari 15+, Firefox 114+.
Service Workers and ESM
Service workers are the exception — module support arrived late and unevenly across browsers. Firefox added it in 2023; Chrome and Safari had it earlier. If you're writing a service worker, you may need to keep it as a classic script or use a minimal build step for that one file.
This is an honest limit. Service workers have a separate module loading context, and it took longer to standardize and implement. If your architecture depends heavily on service workers, keep them in classic script format for now and share code with importScripts() — note that dynamic import() is disallowed inside service workers, so it isn't an escape hatch here.
The Import Chain: What Actually Happens in DevTools
Open DevTools, go to the Network tab, and load a page with native ES modules. What you'll see is illuminating.
Each module appears as a separate request, with:
- Request type: script
- Initiator: the file that imported it
- Priority: High (for statically imported modules)
The requests aren't sequential. The browser discovers all direct imports in a file, fires them in parallel, and then processes their imports. The parallelism is good. The cascade — waiting for module A to load before discovering that module A imports modules B and C — is the cost.
For a module graph that's three levels deep, you have three round trips before all code is loaded, even if every file is tiny and your server is local. This is why modulepreload exists, and it's why very large unbundled applications can feel slow on the initial load.
A worked example: an application with 30 modules in a graph 4 levels deep, each module 2KB, on a 100ms RTT connection. Four round trips × 100ms = 400ms baseline latency, even though all 30 modules are 60KB total. Bundled into one file: 60KB, one round trip, 100ms. The bundle wins on initial load.
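The arithmetic above generalizes into a rough rule of thumb — a sketch that ignores parse time and bandwidth, counting only discovery round trips:

```javascript
// Rough initial-load latency for an unbundled module graph:
// each level of import depth costs one round trip before the next
// level's imports can even be discovered.
function waterfallLatencyMs(graphDepth, rttMs) {
  return graphDepth * rttMs;
}

console.log(waterfallLatencyMs(4, 100)); // 400 — the unbundled example above
console.log(waterfallLatencyMs(1, 100)); // 100 — the single-bundle case
```

modulepreload collapses the depth term toward 1, which is exactly why it exists.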
This is where the build-vs-no-build trade-off becomes real. More on this in Chapter 11.
A Complete Module Architecture Example
Here's how a real, unbundled application might organize its modules. This is for a project management tool — not toy complexity:
src/
├── main.js # Entry point, top-level await
├── router.js # Client-side routing
├── store.js # Shared application state
├── api/
│ ├── index.js # Re-exports all API functions
│ ├── projects.js # Project CRUD
│ ├── tasks.js # Task CRUD
│ └── users.js # User management
├── components/
│ ├── app.js # Root component
│ ├── nav.js # Navigation
│ ├── project-list.js # Project listing view
│ ├── project-detail.js # Project detail view (lazy)
│ ├── task-board.js # Task board (lazy)
│ └── settings.js # Settings page (lazy)
└── utils/
├── date.js # Date formatting
├── dom.js # DOM helpers
└── events.js # Event emitter
// main.js
import { initRouter } from './router.js';
import { initStore } from './store.js';
import { App } from './components/app.js';
// These are statically imported and load in parallel
const store = await initStore();
const router = initRouter();
App({ mount: document.getElementById('app'), store, router });
// router.js
export function initRouter() {
const routes = {
'/': () => import('./components/project-list.js'),
'/projects/:id': () => import('./components/project-detail.js'),
'/projects/:id/board': () => import('./components/task-board.js'),
'/settings': () => import('./components/settings.js'),
};
// Simple pattern matching router
async function navigate(path) {
const [pattern, loader] = Object.entries(routes)
.find(([p]) => matchPath(p, path)) ?? [];
if (!loader) return renderNotFound();
const { default: Component } = await loader(); // Dynamic import
const params = extractParams(pattern, path);
Component({ mount: document.getElementById('main'), params });
}
window.addEventListener('popstate', () => navigate(location.pathname));
return { navigate };
}
The lazy-loaded components — project-detail.js, task-board.js, settings.js — only load when their route is visited. This is native code splitting. No webpack configuration, no Vite plugin, no chunk strategy. Just import().
The statically imported modules — router.js, store.js, components/app.js — load in parallel at startup because the browser reads all static imports before executing any code.
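The router above calls three helpers that aren't shown: matchPath, extractParams, and renderNotFound. The last is just DOM rendering; the first two are pure string work. A minimal sketch — these implementations are assumed, not taken from a linked repository:

```javascript
// matchPath: do the pattern and the path have the same shape?
// ':name' segments act as wildcards for a single path segment.
function matchPath(pattern, path) {
  const patternSegs = pattern.split('/').filter(Boolean);
  const pathSegs = path.split('/').filter(Boolean);
  if (patternSegs.length !== pathSegs.length) return false;
  return patternSegs.every((seg, i) => seg.startsWith(':') || seg === pathSegs[i]);
}

// extractParams: collect the ':name' segments into a params object.
function extractParams(pattern, path) {
  const patternSegs = pattern.split('/').filter(Boolean);
  const pathSegs = path.split('/').filter(Boolean);
  const params = {};
  patternSegs.forEach((seg, i) => {
    if (seg.startsWith(':')) params[seg.slice(1)] = pathSegs[i];
  });
  return params;
}
```

So matchPath('/projects/:id', '/projects/42') matches, and extractParams on the same pair yields { id: '42' }.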
ESM and the type: "module" Script Tag Differences
Beyond just loading modules, <script type="module"> has several behavioral differences from classic <script>:
Deferred by default. Module scripts never block HTML parsing. They're equivalent to <script defer> — the HTML is parsed in full before the script executes. You don't need defer or DOMContentLoaded wrappers.
Executed once. If you include the same module script twice, it only executes once. The second reference is a no-op.
Strict mode. Module scripts always run in strict mode. You can't use with, can't use arguments.callee, can't do a lot of the things you shouldn't be doing anyway.
Cross-origin. Modules fetched from other origins require CORS headers (Access-Control-Allow-Origin). Classic scripts don't. This is why CDN-hosted ESM packages need to send the right headers — and they do, if they're designed for it.
import.meta is available. This is a module-specific meta-object:
// The URL of the current module
console.log(import.meta.url);
// "https://example.com/src/components/nav.js"
// Resolve a URL relative to this module
const dataUrl = new URL('../data/config.json', import.meta.url);
import.meta.url is surprisingly useful. It lets you reference files relative to the current module without knowing where the module is in the URL structure — the same problem that __dirname solves in CommonJS, solved natively.
// Load a worker relative to this module's location
const workerUrl = new URL('./processor.worker.js', import.meta.url);
const worker = new Worker(workerUrl, { type: 'module' });
Circular Dependencies
Circular dependencies are possible with ESM and handled correctly because of live bindings. If module A imports from B and B imports from A, the browser:
- Starts loading A
- Discovers A imports B, starts loading B
- Discovers B imports A — but A is already being loaded
- Gives B a reference to A's (currently incomplete) binding table
- Finishes loading B, executing it (A's exports exist as bindings but are not yet initialized at this point)
- Finishes loading A, filling in A's exports
- B's binding references to A now see the correct values
This means circular dependencies "work," but with a caveat: if B's initialization code accesses A's exports at module evaluation time (not in a function called later), it sees uninitialized bindings. Function declarations are already available, because they're hoisted before evaluation; a let or const export throws a ReferenceError, and a var export reads as undefined. If B's code only accesses A's exports inside functions that are called after full initialization, everything works correctly.
The practical rule: circular dependencies are fine if both modules export functions that call each other. They're fragile if either module runs code at the top level that depends on the other.
// This is fine — functions reference each other, calls happen after init
// a.js
import { b } from './b.js';
export function a() { return b() + 1; }
// b.js
import { a } from './a.js';
export function b() { return 42; } // doesn't call a() at module init time
// This is fragile — b.js reads a's binding before a.js has evaluated
// a.js
import { value } from './b.js';
export const a = value + 1; // never reached: b.js throws first
// b.js
import { a } from './a.js';
export const value = a * 2; // ReferenceError: 'a' is not yet initialized here
The spec defines this precisely. Bundlers have to simulate it. Native ESM in the browser implements it correctly.
Native ESM is a real, complete module system. It's not a preview or a polyfill or a subset of what you get from webpack. In some ways — live bindings, import.meta.url, native dynamic import — it has capabilities that bundlers have to approximate.
The constraint is the HTTP loading model, and the next chapter addresses that directly: how import maps let you use bare specifiers without npm, and how to manage dependencies in a zero-build world.
Import Maps: Dependency Management Without Node Modules
Bare specifiers, CDN dependencies, version pinning — with one JSON blob
The previous two chapters have been quietly avoiding a problem. Every import has used a relative path:
import { formatDate } from './utils/date.js';
import { renderChart } from './components/chart.js';
What about third-party dependencies? In Node and bundled applications you write:
import { format } from 'date-fns';
import confetti from 'canvas-confetti';
These are bare specifiers — names without paths. The browser doesn't know what 'date-fns' means. It can't infer a URL from a package name. If you try this in a browser, you get:
Uncaught TypeError: Failed to resolve module specifier "date-fns".
Relative references must start with either "/", "./", or "../".
Import maps are the native solution to this. They're a JSON structure in your HTML that maps bare specifiers to URLs. One element, one JSON object, and you have the same bare-specifier ergonomics as npm — without npm.
The Basic Syntax
<!DOCTYPE html>
<html>
<head>
<script type="importmap">
{
"imports": {
"date-fns": "https://esm.sh/date-fns@3.6.0",
"canvas-confetti": "https://esm.sh/canvas-confetti@1.9.3"
}
}
</script>
</head>
<body>
<script type="module" src="./app.js"></script>
</body>
</html>
// app.js — this now works in the browser, no bundler
import { format, addDays } from 'date-fns';
import confetti from 'canvas-confetti';
const tomorrow = format(addDays(new Date(), 1), 'MMMM do');
document.getElementById('date').textContent = `Tomorrow: ${tomorrow}`;
document.getElementById('celebrate').addEventListener('click', () => {
confetti({ particleCount: 100, spread: 70 });
});
The import map must appear before any module scripts. The browser processes it first and uses it to resolve specifiers when modules load. Browsers initially allowed only one import map per page; support for multiple maps came later and is uneven, so treat a single map as the baseline.
Path Mapping: Packages with Multiple Exports
Many packages have subpath exports — lodash/get, react/jsx-runtime, date-fns/format. Import maps handle this with trailing-slash mappings:
{
"imports": {
"lodash/": "https://esm.sh/lodash-es/",
"date-fns/": "https://esm.sh/date-fns/"
}
}
// Both of these work
import get from 'lodash/get';
import { format } from 'date-fns/format';
The trailing slash on both sides tells the browser: "for any specifier starting with lodash/, replace that prefix with https://esm.sh/lodash-es/ and keep the rest." So lodash/get becomes https://esm.sh/lodash-es/get.
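The resolution rule can be sketched as a plain function: exact matches win, then the longest matching trailing-slash prefix. This is a simplified model of the browser's behavior, not the full spec algorithm (which also handles scopes and URL normalization):

```javascript
// Simplified import-map resolution: exact match first, else the
// longest trailing-slash prefix; null means the browser would throw.
function resolveSpecifier(specifier, imports) {
  if (specifier in imports) return imports[specifier];
  const prefixes = Object.keys(imports)
    .filter((key) => key.endsWith('/') && specifier.startsWith(key))
    .sort((a, b) => b.length - a.length); // longest prefix wins
  const prefix = prefixes[0];
  if (!prefix) return null; // unresolved specifier: TypeError in the browser
  return imports[prefix] + specifier.slice(prefix.length);
}
```

With the map above, resolveSpecifier('lodash/get', { 'lodash/': 'https://esm.sh/lodash-es/' }) returns 'https://esm.sh/lodash-es/get'.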
Scoped Mappings
Different parts of your application might need different versions of the same package. Import maps support scoped resolution:
{
"imports": {
"lodash": "https://esm.sh/lodash-es@4.17.21"
},
"scopes": {
"/legacy/": {
"lodash": "https://esm.sh/lodash@3.10.1"
}
}
}
Modules loaded from /legacy/ get lodash 3. Everything else gets lodash 4. This is a power feature — you probably won't need it often, but when you're migrating a large application gradually, it's the right tool.
CDNs for ESM Packages
To use packages in the browser without npm, you need them served as ES modules. Several CDNs do this:
esm.sh — Converts npm packages to ES modules on the fly. Most packages work. Supports subpath exports, TypeScript types, and version pinning.
https://esm.sh/react@18.3.1
https://esm.sh/preact@10.22.1
https://esm.sh/date-fns@3.6.0/format
jspm.io — Another npm-to-ESM CDN with a generator tool at jspm.io/generator that builds the entire import map for you.
jsdelivr (via the /esm/ path) — Widely used CDN, ESM support for packages that publish ES modules.
unpkg — Serves npm package files directly. Not all packages expose proper ES modules, but many do.
The practical recommendation: use esm.sh or jspm.io. Both are specifically designed for browser-native ES module consumption and handle the CommonJS-to-ESM conversion that most packages still need.
Generating an Import Map
For a non-trivial dependency tree, writing the import map by hand is tedious and error-prone. The JSPM generator handles this:
# Install the JSPM CLI
npm install -g jspm # Or: npx jspm
# Generate an import map for your dependencies
jspm install react preact date-fns lodash-es
# Output: importmap.json and updated index.html
Or use the web UI at jspm.io/generator — paste in your package list, get an import map back. Copy it into your HTML.
The resulting import map includes not just your direct dependencies but their transitive dependencies, pinned to exact versions. This is what package-lock.json does, but as a JSON blob you include in HTML.
{
"imports": {
"react": "https://esm.sh/react@18.3.1",
"react-dom": "https://esm.sh/react-dom@18.3.1",
"react-dom/client": "https://esm.sh/react-dom@18.3.1/client",
"scheduler": "https://esm.sh/scheduler@0.23.2"
}
}
A Full Working Example: React Without npm
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<title>React Without npm</title>
<script type="importmap">
{
"imports": {
"react": "https://esm.sh/react@18.3.1",
"react-dom/client": "https://esm.sh/react-dom@18.3.1/client",
"htm": "https://esm.sh/htm@3.1.1"
}
}
</script>
</head>
<body>
<div id="root"></div>
<script type="module">
import { createElement, useState } from 'react';
import { createRoot } from 'react-dom/client';
import htm from 'htm';
const html = htm.bind(createElement);
function Counter() {
const [count, setCount] = useState(0);
return html`
<div>
<p>Count: ${count}</p>
<button onClick=${() => setCount(count + 1)}>+</button>
<button onClick=${() => setCount(count - 1)}>-</button>
</div>
`;
}
const root = createRoot(document.getElementById('root'));
root.render(html`<${Counter} />`);
</script>
</body>
</html>
Wait — if React requires JSX, and JSX requires compilation, how is this working?
HTM (Hyperscript Tagged Markup) is a library from the Preact team that provides JSX-like syntax using tagged template literals. html`<${Component} />` is valid JavaScript that evaluates to React.createElement(Component) at runtime. No Babel, no JSX transform, no build step.
This is a legitimate React application. It has hooks, state, event handlers. It runs in the browser as-is.
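The mechanism is worth seeing in isolation. A tagged template literal is an ordinary function call: the tag receives the static string chunks plus the interpolated values, and can build whatever it wants from them. A toy tag to illustrate — this is not HTM's actual implementation, which parses the chunks into element trees and caches the parse per template:

```javascript
// A tag function receives the literal's static chunks and the
// interpolated values as separate arguments.
function inspect(strings, ...values) {
  return { statics: [...strings], values };
}

const count = 42;
const result = inspect`<p>Count: ${count}</p>`;
// result.statics is ['<p>Count: ', '</p>'], result.values is [42]
```

HTM does the same thing, roughly, but turns those chunks into createElement calls at runtime. That's the whole trick: JSX-shaped syntax with zero compilation.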
Note: htm.bind needs an explicit createElement to bind to; esm.sh modules don't reliably attach a React global, so don't count on one. A cleaner approach uses Preact, which has first-class ESM support:
<script type="importmap">
{
"imports": {
"preact": "https://esm.sh/preact@10.22.1",
"preact/hooks": "https://esm.sh/preact@10.22.1/hooks",
"htm/preact": "https://esm.sh/htm@3.1.1/preact"
}
}
</script>
import { render } from 'preact';
import { useState } from 'preact/hooks';
import { html } from 'htm/preact';
function App() {
const [count, setCount] = useState(0);
return html`
<div>
<p>Count: ${count}</p>
<button onClick=${() => setCount(c => c + 1)}>+</button>
</div>
`;
}
render(html`<${App} />`, document.getElementById('root'));
Preact with HTM is production-grade. The Preact team recommends it explicitly for build-free environments.
Version Pinning and Integrity
One reasonable concern with CDN-hosted dependencies: what if the CDN changes the file? You're at the mercy of esm.sh/date-fns@3.6.0 serving the same bytes tomorrow.
The answer is Subresource Integrity (SRI). You can add an integrity attribute to script tags, and the browser will refuse to execute the script if the hash doesn't match:
<script type="module"
src="https://esm.sh/preact@10.22.1"
integrity="sha384-abc123...">
</script>
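Generating the integrity value yourself is straightforward: hash the exact bytes you intend to trust, base64-encode the raw digest, and prefix the algorithm name. A sketch using openssl (vendor-demo.js is a hypothetical stand-in for a file you've actually vendored):

```shell
# Hash the exact bytes you plan to trust, base64-encode the raw digest,
# and prefix the algorithm name in SRI format.
# vendor-demo.js stands in for a real vendored module file.
printf 'export const answer = 42;\n' > vendor-demo.js
HASH=$(openssl dgst -sha384 -binary vendor-demo.js | openssl base64 -A)
echo "integrity=\"sha384-$HASH\""
```

Run this against the file you serve, paste the output into the script tag, and any change to the bytes, malicious or accidental, stops the script from executing.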
For import maps themselves, SRI support is in development (the integrity field in import maps is a proposed feature). In the meantime, your options are:
- Trust the CDN's version pinning. @10.22.1 should be immutable on any reputable CDN.
- Self-host your dependencies. Download the ES module versions and serve them from your own CDN or static host.
- Use a lock file alternative. Tools like Deno's deno.lock pin CDN dependencies by content hash.
Self-hosting is simpler than it sounds:
# Download the ESM version of a package.
# esm.sh responses can re-export from further esm.sh URLs, so ask for a
# self-contained file (the ?bundle parameter) before vendoring.
curl -o vendor/preact.js "https://esm.sh/preact@10.22.1?bundle"
curl -o vendor/preact-hooks.js "https://esm.sh/preact@10.22.1/hooks?bundle"
{
"imports": {
"preact": "/vendor/preact.js",
"preact/hooks": "/vendor/preact-hooks.js"
}
}
Now you control the files. They're in your repo, they're version-controlled, and you can audit them. This is more work upfront, but it's the most conservative approach.
Browser Support
Import maps are supported in:
- Chrome 89+
- Edge 89+
- Safari 16.4+
- Firefox 108+
As of 2024, all modern browsers support import maps. If you need to support older browsers, es-module-shims is a polyfill that implements import maps for browsers that don't have them natively:
<script async src="https://esm.sh/es-module-shims@1.10.0"></script>
<script type="importmap">{ ... }</script>
es-module-shims also enables features like modulepreload polyfilling and JSON module assertions in browsers that don't support them yet.
The importmap.json Pattern
For applications with many dependencies, keeping the import map inline in HTML gets unwieldy, and it's tempting to maintain an importmap.json file and point a script tag at it. The spec doesn't allow that: <script type="importmap" src="..."> isn't supported, so the import map must be inline. For server-rendered applications this is fine — inject the JSON server-side. For static sites, you have two clean options:
- Keep it inline in index.html and accept that index.html has a JSON blob in it.
- Use a trivial build step — just cat-ing importmap.json into index.html — which doesn't require a bundler.
The purist answer: inline JSON in HTML is not a moral failing. It's fine.
Comparing Import Maps to npm
| Feature | npm + bundler | Import maps |
|---|---|---|
| Bare specifiers | Yes | Yes |
| Version pinning | package-lock.json | Import map JSON |
| Subpath exports | package.json exports | Trailing-slash mappings |
| Scoped resolutions | Not natively | scopes field |
| Transitive deps | Bundled automatically | CDN resolves, or manual |
| Type definitions | @types packages | .d.ts from CDN (esm.sh) |
| Offline dev | node_modules folder | Vendor files, or network |
| Auditing | npm audit | Manual / self-hosted |
| Tree shaking | Bundler does it | Not automatically |
The npm model wins on tooling support — the entire Node ecosystem assumes it. The import map model wins on simplicity: one JSON file, no local installs, no node_modules folder, no drift between what's installed and what's in the lockfile.
For applications that don't need tree shaking (most internal tools, small consumer apps, prototypes) and don't have complex dependency graphs, import maps are genuinely sufficient. For applications that depend on large libraries and care deeply about bundle size, you're going to want a build step eventually — and Chapter 11 will tell you when.
With import maps in hand, you have the three primitives of modern zero-build development: the browser's module system, dynamic imports for code splitting, and import maps for bare-specifier dependencies. The next chapter looks at Deno, which takes these primitives and builds an entire runtime philosophy around them.
Deno and the Zero Config Philosophy
URL imports, built-in TypeScript, no config required
Deno is what happens when you build a JavaScript runtime from scratch in 2018, with the benefit of knowing what you'd do differently in Node.
Ryan Dahl, who wrote Node, gave a talk in 2018 called "10 Things I Regret About Node.js." The list included: node_modules, package.json, require() without extensions, and the complexity that had accumulated around what was supposed to be a simple thing. Deno is, in part, his correction.
The zero-config philosophy isn't accidental. It's the point.
What Deno Does Differently
Start with what you don't need:
- No package.json
- No node_modules folder
- No npm install
- No Babel config
- No tsconfig.json to placate (TypeScript runs natively)
- No webpack/esbuild/Vite
A Deno program:
// server.ts — TypeScript, no compilation step
import { serveDir } from "jsr:@std/http/file-server";
Deno.serve({ port: 8080 }, (req) => serveDir(req));
Run it:
deno run --allow-net --allow-read server.ts
That's it. TypeScript runs directly. The standard library imports from JSR (JavaScript Registry). No install step. First run fetches and caches dependencies; subsequent runs use the cache.
URL Imports and the JSR Registry
Deno's original dependency model used direct URL imports:
import { assertEquals } from "https://deno.land/std@0.224.0/assert/mod.ts";
This is radical honesty: you're importing from a URL. The browser does this. HTTP is a package manager. The version is in the URL. The checksum is in the lockfile (deno.lock). There's no indirection through a local package store.
In 2024, Deno introduced JSR (JavaScript Registry) as a first-class package registry:
// JSR imports — similar ergonomics to npm, but designed for ESM
import { parseArgs } from "jsr:@std/cli/parse-args";
import { join } from "jsr:@std/path";
import { Hono } from "jsr:@hono/hono";
JSR packages are TypeScript-first, publish source code (not compiled output), and work in Deno, Node, Bun, and browser environments. It's closer to what npm should have been if it had been designed after ES modules existed.
Node compatibility: Deno also supports npm: specifiers for when you need something from npm:
import express from "npm:express";
import { z } from "npm:zod";
This runs the npm package in Deno's npm compatibility layer — no npm install, no node_modules. Deno fetches it, converts it if needed, and caches it.
A Real Web Server in TypeScript, No Config
// api.ts
interface User {
id: number;
name: string;
email: string;
}
const users: User[] = [
{ id: 1, name: "Alice", email: "alice@example.com" },
{ id: 2, name: "Bob", email: "bob@example.com" },
];
function json(data: unknown, status = 200): Response {
return new Response(JSON.stringify(data), {
status,
headers: { "Content-Type": "application/json" },
});
}
function router(req: Request): Response {
const url = new URL(req.url);
if (url.pathname === "/api/users" && req.method === "GET") {
return json(users);
}
const match = url.pathname.match(/^\/api\/users\/(\d+)$/);
if (match && req.method === "GET") {
const user = users.find((u) => u.id === Number(match[1]));
return user ? json(user) : json({ error: "Not found" }, 404);
}
return json({ error: "Not found" }, 404);
}
Deno.serve({ port: 8000 }, router);
console.log("Listening on http://localhost:8000");
deno run --allow-net api.ts
This is a type-checked TypeScript HTTP server. No tsconfig. No compilation. No dependencies to install. The TypeScript runs directly in the Deno runtime. Errors are type errors, shown in the terminal, referencing your actual source file.
Compare the setup cost to an equivalent Node/TypeScript project:
# Node + TypeScript setup
npm init -y
npm install typescript ts-node @types/node express @types/express
npx tsc --init
# Edit tsconfig.json
# Write the server
# npm start or ts-node src/api.ts
vs:
# Deno setup
# Write the server
deno run --allow-net api.ts
The Deno version is smaller by every metric that matters: lines of setup, files created, disk space used, things that can go wrong.
Using Hono for Real HTTP Applications
For production HTTP handling, Hono is the recommended framework in the Deno ecosystem. It's fast, fully typed, and works across Deno, Bun, Node, and edge runtimes:
// app.ts
import { Hono } from "jsr:@hono/hono";
import { cors } from "jsr:@hono/hono/cors";
import { logger } from "jsr:@hono/hono/logger";
interface Task {
id: string;
title: string;
done: boolean;
createdAt: string;
}
export const app = new Hono();
const tasks = new Map<string, Task>();
app.use("*", logger());
app.use("/api/*", cors());
app.get("/api/tasks", (c) => {
return c.json(Array.from(tasks.values()));
});
app.post("/api/tasks", async (c) => {
const body = await c.req.json<{ title: string }>();
const task: Task = {
id: crypto.randomUUID(),
title: body.title,
done: false,
createdAt: new Date().toISOString(),
};
tasks.set(task.id, task);
return c.json(task, 201);
});
app.patch("/api/tasks/:id", async (c) => {
const id = c.req.param("id");
const task = tasks.get(id);
if (!task) return c.json({ error: "Not found" }, 404);
const updates = await c.req.json<Partial<Task>>();
tasks.set(id, { ...task, ...updates });
return c.json(tasks.get(id));
});
app.delete("/api/tasks/:id", (c) => {
const id = c.req.param("id");
if (!tasks.has(id)) return c.json({ error: "Not found" }, 404);
tasks.delete(id);
return c.body(null, 204);
});
if (import.meta.main) Deno.serve({ port: 8000 }, app.fetch);
deno run --allow-net app.ts
Full CRUD REST API, typed, with CORS and logging middleware. One command to run. No config files. No install step on first run beyond Deno itself.
The Permission Model
Deno's security model is explicit permissions. Programs can't read files, access the network, or spawn processes without being granted those permissions. This is uncomfortable for people used to Node's implicit "I can do anything" model, and it's exactly the right default.
Common permissions:
--allow-net # All network access
--allow-net=api.github.com # Only this host
--allow-read # All file reads
--allow-read=/var/data # Only this path
--allow-write=/tmp # Only this path for writes
--allow-env # Environment variables
--allow-run # Spawning subprocesses
For development, --allow-all (-A) is the escape hatch. Don't use it in production without thinking.
The "aha" moment with permissions: when you run a third-party Deno script and it tries to make a network request your source code doesn't make, Deno stops and tells you. Compare this to Node, where a compromised npm package can exfiltrate your environment variables in silence. Deno's permission model isn't just inconvenience — it's a meaningful security boundary.
deno.json: Minimal Configuration
Deno does have a config file, deno.json, but it's optional and its defaults are sensible:
{
"tasks": {
"dev": "deno run --watch --allow-net --allow-read app.ts",
"test": "deno test",
"fmt": "deno fmt"
},
"imports": {
"@hono/hono": "jsr:@hono/hono@^4.4.0",
"@std/http": "jsr:@std/http@^0.224.0"
}
}
The imports field in deno.json is effectively an import map for your project — you pin versions here and import by bare specifier everywhere else. Deno generates deno.lock to pin exact versions.
This is meaningfully simpler than a Node project's configuration surface area (package.json, tsconfig.json, .eslintrc, .prettierrc, jest.config.js, babel.config.js, webpack.config.js, .env, .env.local...). A Deno project at maximum needs two files: deno.json and deno.lock.
File Watching Without Nodemon
Deno has --watch built in:
deno run --watch --allow-net app.ts
When any source file changes, Deno restarts automatically. No nodemon, no ts-node-dev, no configuration. The --watch-exclude flag excludes paths if needed.
Testing Without Jest
Deno's built-in test runner:
// api_test.ts
import { assertEquals, assertRejects } from "jsr:@std/assert";
import { app } from "./app.ts";
Deno.test("GET /api/tasks returns empty array initially", async () => {
const req = new Request("http://localhost/api/tasks");
const res = await app.fetch(req);
assertEquals(res.status, 200);
assertEquals(await res.json(), []);
});
Deno.test("POST /api/tasks creates a task", async () => {
const req = new Request("http://localhost/api/tasks", {
method: "POST",
headers: { "Content-Type": "application/json" },
body: JSON.stringify({ title: "Write the book" }),
});
const res = await app.fetch(req);
assertEquals(res.status, 201);
const task = await res.json();
assertEquals(task.title, "Write the book");
assertEquals(task.done, false);
});
deno test
No configuration. Type-checked. Coverage available with --coverage. The test runner is built into the runtime the same way deno fmt, deno lint, and deno doc are. You don't need a separate toolchain for anything a modern project needs.
Formatting and Linting Without Config
deno fmt # Formats all TypeScript/JavaScript files
deno fmt --check # CI: exits non-zero if files need formatting
deno lint # Lints with sensible defaults
deno doc app.ts # Generates documentation from JSDoc comments
deno fmt uses the same opinionated formatter regardless of project — no .prettierrc debates, no "tabs vs spaces" config variable, no project-specific formatting rules that new contributors need to discover. The formatter is the formatter.
This is the zero-config insight: the friction of agreeing on configuration is often more expensive than the value of the configuration. Opinionated defaults with no escape hatch are a feature.
Compiling to a Single Binary
This is where Deno goes somewhere Node can't easily go:
deno compile --allow-net --allow-read -o app app.ts
./app # Runs on any machine with the same OS and architecture, no Deno required
deno compile produces a self-contained binary that includes the Deno runtime and your application code. You can ship this to a server with no runtime installed, no npm install, no dependency management. It's one file.
# Cross-compile for different targets
deno compile --target x86_64-unknown-linux-gnu -o app-linux app.ts
deno compile --target aarch64-apple-darwin -o app-mac-arm app.ts
deno compile --target x86_64-pc-windows-msvc -o app.exe app.ts
For the server chapter (Chapter 8), this is significant: a Go-style single-binary deployment from a TypeScript source. The operational simplicity is real.
Deno Deploy: Serverless Without the Config
Deno Deploy runs your Deno application at the edge in 35+ regions, globally, with:
- No infrastructure configuration
- No Docker
- No IAM roles
- No cold starts (it's actually fast)
- Free tier that's genuinely useful
# Install deployctl
deno install -gAf jsr:@deno/deployctl
# Deploy
deployctl deploy app.ts
Your application is live in under a minute. The "build step" for deployment is: there isn't one. Deno Deploy runs your TypeScript directly.
This is not just convenience. It's a different mental model: your development environment and your production environment are the same runtime, running the same source files. The discrepancy between "what runs on my machine" and "what runs in production" narrows to zero.
The Honest Limitations
Deno is not Node. The npm ecosystem is vast, and while Deno's npm compatibility is good, not everything works:
- Native addons (.node files, bindings to C/C++ libraries) don't work
- Some packages that assume Node internals don't work or work poorly
- The Deno-native ecosystem is smaller than npm's
If your application depends heavily on packages that aren't available in pure JavaScript form, or if you have team members deeply invested in Node tooling, Deno is harder to adopt incrementally.
That said: most web applications, APIs, and tooling are pure JavaScript, and those work fine. The limitation is real but narrower than it appears.
Deno is the most complete realization of zero-config development: a runtime that runs TypeScript natively, manages dependencies via URLs and a lockfile, ships every tool you need as built-in commands, and compiles to single binaries for deployment. It didn't just skip the build step — it designed a runtime where the build step was never the right answer.
The next chapter returns to the browser, and to CSS — which has quietly become good enough to make preprocessors optional.
Modern CSS Without Sass
Custom properties, nesting, layers, container queries — these are just CSS now
Sass was created in 2006 to solve real problems with CSS: no variables, no nesting, no ability to split stylesheets into logical files without HTTP overhead, limited calculation support. These were genuine limitations, and Sass (and its successors, Less, Stylus, PostCSS) addressed them.
Here's what's happened since then:
- CSS custom properties (variables): shipped 2015–2016, fully supported since 2017
- CSS nesting: shipped 2023, fully supported in all modern browsers
- CSS @layer: shipped 2022, fully supported
- CSS calc() and clamp(): shipped 2013–2019, fully supported
- CSS container queries: shipped 2023, fully supported
- CSS color-mix(), oklch(), relative colors: shipped 2023–2024
The features that justified Sass are now in CSS. The preprocessor is solving problems the platform has already solved.
CSS Custom Properties (Variables)
This is the one people know but don't always use to its full potential:
:root {
/* Design tokens */
--color-primary: oklch(55% 0.2 260);
--color-primary-hover: oklch(45% 0.2 260);
--color-surface: oklch(98% 0 0);
--color-text: oklch(20% 0 0);
--space-xs: 0.25rem;
--space-sm: 0.5rem;
--space-md: 1rem;
--space-lg: 2rem;
--space-xl: 4rem;
--radius-sm: 4px;
--radius-md: 8px;
--radius-full: 9999px;
--font-body: system-ui, sans-serif;
--font-mono: ui-monospace, "Cascadia Code", monospace;
--shadow-sm: 0 1px 3px oklch(0% 0 0 / 0.1);
--shadow-md: 0 4px 6px oklch(0% 0 0 / 0.1);
}
This looks like Sass variables. It isn't. These are live values in the DOM — they can be changed by JavaScript, they inherit through the document tree, they can be scoped to elements, and they respond to media queries.
/* Scoped variables — only affect their subtree */
.card {
--card-padding: var(--space-md);
--card-radius: var(--radius-md);
}
.card.compact {
--card-padding: var(--space-sm);
}
/* Use anywhere in the card's subtree */
.card-body {
padding: var(--card-padding);
border-radius: var(--card-radius);
}
Sass variables can't do this. They're compile-time constants. CSS custom properties are runtime values that participate in the cascade. That's a categorically different thing, and it enables patterns that Sass never could:
// JavaScript can read and write CSS custom properties
const root = document.documentElement;
// Read the current value
const primary = getComputedStyle(root).getPropertyValue('--color-primary');
// Set a new theme dynamically
root.style.setProperty('--color-primary', 'oklch(55% 0.3 30)');
Theme switching without JavaScript class toggling, without injecting stylesheets, without recompilation. Just change the variable.
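Concretely, a theme becomes nothing more than a map of variable values. Here's a sketch — the theme maps are invented for illustration, though the token names follow the examples above — where the pure part, generating the declarations, is separable from the one-line browser loop that applies them:

```javascript
// Sketch: a theme is just a map of custom-property values.
// The themes here are invented; token names mirror the examples above.
const themes = {
  light: { '--color-surface': 'oklch(98% 0 0)', '--color-text': 'oklch(20% 0 0)' },
  dark:  { '--color-surface': 'oklch(15% 0 0)', '--color-text': 'oklch(95% 0 0)' },
};

function themeDeclarations(name) {
  const tokens = themes[name];
  if (!tokens) throw new Error(`Unknown theme: ${name}`);
  return Object.entries(tokens)
    .map(([prop, value]) => `${prop}: ${value};`)
    .join('\n');
}

// In the browser, applying a theme is one loop — no recompilation:
// for (const [prop, value] of Object.entries(themes.dark)) {
//   document.documentElement.style.setProperty(prop, value);
// }

console.log(themeDeclarations('dark'));
```

Because every component reads from the same variables, swapping the map restyles the whole page in one paint.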
CSS Nesting
CSS nesting shipped in all major browsers in 2023. It looks like this:
.nav {
display: flex;
gap: var(--space-md);
padding: var(--space-sm) var(--space-lg);
background: var(--color-surface);
& a {
color: var(--color-text);
text-decoration: none;
padding: var(--space-xs) var(--space-sm);
border-radius: var(--radius-sm);
&:hover {
background: oklch(from var(--color-primary) l c h / 0.1);
color: var(--color-primary);
}
&[aria-current="page"] {
font-weight: 600;
color: var(--color-primary);
}
}
@media (max-width: 768px) {
flex-direction: column;
& a {
padding: var(--space-sm) var(--space-md);
}
}
}
The & refers to the parent selector. Media queries can be nested inside rules. This is, for the most part, Sass's nesting syntax. The difference: no compilation, no Sass. It's in the spec and in every modern browser.
One gotcha worth knowing: the original CSS nesting spec required & when nesting bare element selectors — you needed & a, not just a — for parser-ambiguity reasons, unlike Sass, where the two forms are equivalent. The CSS Working Group later relaxed this restriction, and current browsers accept both forms, but & remains the cleaner, more explicit choice.
CSS @layer for Specificity Management
This is the feature Sass never had. @layer gives you explicit control over the specificity cascade, solving the most common source of "why isn't my CSS winning" — without !important.
/* Define layer order — lower layers lose to higher layers */
@layer reset, base, components, utilities;
@layer reset {
*, *::before, *::after {
box-sizing: border-box;
}
body { margin: 0; }
}
@layer base {
body {
font-family: var(--font-body);
color: var(--color-text);
background: var(--color-surface);
}
h1, h2, h3, h4 {
line-height: 1.2;
}
}
@layer components {
.button {
display: inline-flex;
align-items: center;
padding: var(--space-sm) var(--space-md);
background: var(--color-primary);
color: white;
border: none;
border-radius: var(--radius-md);
cursor: pointer;
font-weight: 500;
&:hover {
background: var(--color-primary-hover);
}
}
}
@layer utilities {
.visually-hidden {
position: absolute;
width: 1px;
height: 1px;
padding: 0;
margin: -1px;
overflow: hidden;
clip: rect(0, 0, 0, 0);
border: 0;
}
}
Any rule in utilities wins over any rule in components regardless of specificity. A class selector in utilities beats an ID selector in base. The layer order defines the winner, not the selector weight.
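The resolution rule is mechanical enough to model. Here is a deliberately simplified sketch — it ignores !important, origins, and source order — just to show that layer position is compared before specificity ever enters the picture:

```javascript
// Simplified cascade model: layer order outranks selector specificity.
// The real cascade also considers origin, !important, and source order.
const layerOrder = ['reset', 'base', 'components', 'utilities'];

function winner(rules) {
  // Higher layer index wins; specificity only breaks ties within a layer.
  return [...rules].sort((a, b) =>
    (layerOrder.indexOf(b.layer) - layerOrder.indexOf(a.layer)) ||
    (b.specificity - a.specificity)
  )[0];
}

const rules = [
  { layer: 'base', selector: '#main body', specificity: 101 },
  { layer: 'utilities', selector: '.visually-hidden', specificity: 10 },
];

// The utilities-layer class beats the base-layer ID selector.
console.log(winner(rules).selector); // '.visually-hidden'
```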
This solves one of the oldest problems in CSS architecture: how to include a third-party stylesheet without its high-specificity selectors overriding your own. Wrap it in a layer:
@layer third-party;
@import url('./vendor/some-library.css') layer(third-party);
/* Your styles are in a higher layer and always win */
@layer components {
.button {
/* This overrides .button from third-party, regardless of specificity */
background: var(--color-primary);
}
}
ITCSS, BEM, OOCSS, SMACSS — these methodologies existed to manage cascade order in the absence of @layer. With @layer, you don't need the methodology. You write the layer order declaration once and your cascade is explicitly controlled.
calc(), clamp(), and Modern CSS Math
:root {
/* Fluid typography — scales between viewport sizes without media queries */
--text-sm: clamp(0.875rem, 1vw + 0.5rem, 1rem);
--text-base: clamp(1rem, 1.2vw + 0.5rem, 1.25rem);
--text-lg: clamp(1.125rem, 1.5vw + 0.5rem, 1.5rem);
--text-xl: clamp(1.5rem, 3vw + 0.5rem, 2.5rem);
/* Fluid spacing */
--space-section: clamp(3rem, 8vw, 8rem);
}
clamp(min, preferred, max) — the preferred value grows with the viewport, but is clamped between min and max. No JavaScript, no media query breakpoints for font size. The typography is fluid across all viewport widths.
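It's worth being able to check the arithmetic by hand. A sketch of what the browser computes for the --text-xl token above, assuming 1rem = 16px and 1vw = 1% of the viewport width:

```javascript
// Reproduce clamp(1.5rem, 3vw + 0.5rem, 2.5rem) — the --text-xl token.
// Assumes 1rem = 16px; 1vw = 1% of viewport width.
function textXlRem(viewportPx) {
  const preferred = (3 * viewportPx / 100) / 16 + 0.5; // 3vw + 0.5rem, in rem
  return Math.min(Math.max(preferred, 1.5), 2.5);      // clamp between min and max
}

console.log(textXlRem(320));  // 1.5 — pinned to the minimum on small screens
console.log(textXlRem(800));  // 2   — fluid in the middle range
console.log(textXlRem(1440)); // 2.5 — pinned to the maximum on large screens
```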
/* Complex layout calculations */
.sidebar-layout {
display: grid;
grid-template-columns: min(30%, 320px) 1fr;
gap: var(--space-md);
}
/* Dynamic padding that accounts for scrollbar width */
.content {
padding-inline: max(var(--space-md), calc((100vw - 1200px) / 2));
}
min(), max(), clamp(), calc() — these handle what Sass's math.div() and percentage calculations attempted to handle, and they do it at runtime with access to actual viewport dimensions.
Container Queries: The Big One
Media queries respond to the viewport. Container queries respond to the element's container — which is what component-based design actually needs.
/* Define a containment context */
.card-grid {
container-type: inline-size;
container-name: grid;
display: grid;
grid-template-columns: repeat(auto-fill, minmax(280px, 1fr));
gap: var(--space-md);
}
/* Card styles based on its container, not the viewport */
.card {
padding: var(--space-md);
@container grid (min-width: 600px) {
display: grid;
grid-template-columns: auto 1fr;
gap: var(--space-sm);
}
@container grid (min-width: 900px) {
& .card-meta {
display: flex;
gap: var(--space-sm);
}
}
}
A card that's narrow (because it's in a narrow column) displays differently from a card that's wide (because it's in a wide column). This is not possible with media queries — media queries don't know anything about the card's container. This is the feature that makes proper responsive components possible without JavaScript measurement.
Relative Colors and oklch()
Sass had darken(), lighten(), saturate(). CSS now has relative colors:
:root {
--brand: oklch(55% 0.2 260);
/* Automatically derived variants */
--brand-light: oklch(from var(--brand) calc(l + 0.15) c h);
--brand-dark: oklch(from var(--brand) calc(l - 0.15) c h);
--brand-muted: oklch(from var(--brand) l calc(c * 0.5) h);
--brand-complement: oklch(from var(--brand) l c calc(h + 180));
}
oklch() is a perceptually uniform color space — colors that are numerically equidistant are also visually equidistant. Darken by 15% lightness and you get a consistently darker shade regardless of hue. Compare this to HSL where "darken by 15%" produces inconsistent results across different hues.
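The calc() expressions above are plain component arithmetic, which you can mirror in a few lines. A sketch with an invented helper that parses only the simplified oklch(L% C H) form used in these examples:

```javascript
// Mirror oklch(from var(--brand) calc(l ± 0.15) c h) numerically.
// Invented helper; parses only the simplified 'oklch(55% 0.2 260)' form above.
function parseOklch(str) {
  const [, l, c, h] = str.match(/oklch\((\d+(?:\.\d+)?)%\s+([\d.]+)\s+([\d.]+)\)/);
  return { l: Number(l) / 100, c: Number(c), h: Number(h) };
}

function withLightness(color, delta) {
  const { l, c, h } = parseOklch(color);
  const l2 = Math.min(1, Math.max(0, l + delta)); // lightness stays in [0, 1]
  return `oklch(${(l2 * 100).toFixed(0)}% ${c} ${h})`;
}

const brand = 'oklch(55% 0.2 260)';
console.log(withLightness(brand, +0.15)); // oklch(70% 0.2 260)
console.log(withLightness(brand, -0.15)); // oklch(40% 0.2 260)
```

The point of the perceptual uniformity is that the same +0.15 step looks like the same visual step whether the hue is 260 (blue) or 30 (orange).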
color-mix() for blending:
.muted-text {
color: color-mix(in oklch, var(--color-text) 50%, var(--color-surface));
}
Browser support: color-mix() landed in Chrome 111, Firefox 113, and Safari 16.2; relative color syntax followed in Safari 16.4, Chrome 119, and Firefox 128. If you're writing CSS today and targeting modern browsers, you have color functions that make Sass's color manipulation look like what it was — a workaround for a missing language feature.
@import and File Organization
Sass's @import (and the newer @use/@forward) allowed splitting CSS into multiple files that got compiled into one. Native CSS @import works for this too, though with different performance characteristics:
/* styles.css */
@import url('./reset.css');
@import url('./tokens.css');
@import url('./components/button.css');
@import url('./components/card.css');
@import url('./layout.css');
Each @import is a separate HTTP request, and nested imports load sequentially. With HTTP/2, this is tolerable for small numbers of imports. For development, it's fine — you see the actual files in DevTools, source maps are your actual source, and there's no compilation delay.
For production, if you have many CSS files and care about the request waterfall, concatenate them. This is simpler than a Sass compilation: cat reset.css tokens.css components/*.css layout.css > styles.bundle.css (note: concatenation, not minification). One shell command, no build tool.
Or use PostCSS with only the postcss-import plugin — which processes @import and outputs concatenated CSS. No Sass, no preprocessor, just file concatenation.
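If you'd rather keep the @import order in styles.css as the single source of truth, the flattening itself is a few lines of JavaScript — a sketch, not a real CSS parser, handling only the @import url('...'); form shown above, one level deep:

```javascript
// Naive single-level @import flattener — a rough stand-in for postcss-import.
// Handles only the @import url('...'); form; not a general CSS parser.
// `readFile` is injected so the core stays a pure, testable function.
function inlineImports(css, readFile) {
  return css.replace(/@import url\('([^']+)'\);/g, (_, path) => readFile(path));
}

// Demo with in-memory "files"; in a real build script you'd pass
// (p) => fs.readFileSync(p, 'utf8') and write the result to disk.
const files = {
  './reset.css': '* { box-sizing: border-box; }',
  './tokens.css': ':root { --space-md: 1rem; }',
};
const entry = "@import url('./reset.css');\n@import url('./tokens.css');\n";
console.log(inlineImports(entry, (p) => files[p]));
```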
A Complete Design System Without Preprocessors
Here's a practical design system in plain CSS:
/* tokens.css — Single source of truth for design decisions */
:root {
/* Color palette in oklch for perceptual uniformity */
--palette-blue-5: oklch(95% 0.05 260);
--palette-blue-10: oklch(90% 0.08 260);
--palette-blue-50: oklch(55% 0.2 260);
--palette-blue-60: oklch(45% 0.2 260);
--palette-blue-90: oklch(25% 0.15 260);
--palette-neutral-0: oklch(100% 0 0);
--palette-neutral-5: oklch(97% 0 0);
--palette-neutral-10: oklch(93% 0 0);
--palette-neutral-30: oklch(75% 0 0);
--palette-neutral-60: oklch(45% 0 0);
--palette-neutral-95: oklch(15% 0 0);
/* Semantic tokens — reference palette tokens */
--color-brand: var(--palette-blue-50);
--color-brand-hover: var(--palette-blue-60);
--color-surface: var(--palette-neutral-0);
--color-surface-raised: var(--palette-neutral-5);
--color-surface-sunken: var(--palette-neutral-10);
--color-border: var(--palette-neutral-10);
--color-text: var(--palette-neutral-95);
--color-text-muted: var(--palette-neutral-60);
--color-text-on-brand: white;
/* Dark mode — same tokens, different values */
@media (prefers-color-scheme: dark) {
--color-surface: var(--palette-neutral-95);
--color-surface-raised: oklch(20% 0 0);
--color-surface-sunken: oklch(12% 0 0);
--color-border: oklch(25% 0 0);
--color-text: var(--palette-neutral-5);
--color-text-muted: var(--palette-neutral-30);
}
/* Typography */
--font-sans: system-ui, -apple-system, sans-serif;
--font-mono: ui-monospace, "Cascadia Code", "Fira Code", monospace;
--text-xs: clamp(0.75rem, 0.8vw + 0.4rem, 0.875rem);
--text-sm: clamp(0.875rem, 0.9vw + 0.45rem, 1rem);
--text-base: clamp(1rem, 1.1vw + 0.5rem, 1.125rem);
--text-lg: clamp(1.125rem, 1.4vw + 0.55rem, 1.375rem);
--text-xl: clamp(1.25rem, 2vw + 0.6rem, 1.75rem);
--text-2xl: clamp(1.5rem, 3vw + 0.5rem, 2.5rem);
--text-3xl: clamp(2rem, 5vw + 0.5rem, 4rem);
/* Spacing */
--space-1: 0.25rem;
--space-2: 0.5rem;
--space-3: 0.75rem;
--space-4: 1rem;
--space-6: 1.5rem;
--space-8: 2rem;
--space-12: 3rem;
--space-16: 4rem;
/* Radii */
--radius-sm: 4px;
--radius-md: 8px;
--radius-lg: 12px;
--radius-full: 9999px;
/* Shadows */
--shadow-sm: 0 1px 2px oklch(0% 0 0 / 0.05);
--shadow-md: 0 4px 6px oklch(0% 0 0 / 0.07), 0 2px 4px oklch(0% 0 0 / 0.06);
--shadow-lg: 0 10px 15px oklch(0% 0 0 / 0.1), 0 4px 6px oklch(0% 0 0 / 0.05);
}
/* components/button.css */
.btn {
display: inline-flex;
align-items: center;
justify-content: center;
gap: var(--space-2);
padding: var(--space-2) var(--space-4);
font-family: var(--font-sans);
font-size: var(--text-sm);
font-weight: 500;
line-height: 1.25;
border-radius: var(--radius-md);
border: 1px solid transparent;
cursor: pointer;
text-decoration: none;
transition: background-color 150ms, border-color 150ms, color 150ms;
&:focus-visible {
outline: 2px solid var(--color-brand);
outline-offset: 2px;
}
/* Variants */
&.btn-primary {
background: var(--color-brand);
color: var(--color-text-on-brand);
&:hover { background: var(--color-brand-hover); }
}
&.btn-secondary {
background: var(--color-surface);
color: var(--color-text);
border-color: var(--color-border);
&:hover {
background: var(--color-surface-sunken);
border-color: var(--color-text-muted);
}
}
&.btn-ghost {
background: transparent;
color: var(--color-text);
&:hover { background: var(--color-surface-sunken); }
}
/* Sizes */
&.btn-sm {
padding: var(--space-1) var(--space-3);
font-size: var(--text-xs);
}
&.btn-lg {
padding: var(--space-3) var(--space-6);
font-size: var(--text-lg);
}
&[disabled], &:disabled {
opacity: 0.5;
cursor: not-allowed;
}
}
This design system has dark mode, fluid typography, semantic color tokens, full button variants with all interactive states, and no preprocessing. It runs in the browser as-is.
What Sass Still Does
Let's be honest about what you lose:
@mixin and @include. Sass mixins allow parameterized blocks of CSS. CSS doesn't have these. You can approximate them with custom properties (pass the parameter as a variable), but it's not the same.
@each and @for loops. Generating CSS programmatically. CSS doesn't have loops. For cases where you're generating utility classes (mt-1 through mt-16), Sass still wins.
@extend. Sharing rule sets without repetition. No native CSS equivalent.
Complex functions. Sass lets you write functions that compute values at compile time. CSS calc() is powerful but operates at runtime.
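One caveat on the loops point: when all you need is a handful of generated utilities, the "loop" is plain string generation, and a short script you run once can stand in for @each. A sketch with invented mt-* class names, reusing the spacing scale from the design system above:

```javascript
// Generate mt-* margin utilities from the spacing scale — what a Sass
// @each would do, as a plain script run once before deployment.
// The mt-* naming is illustrative, not from the design system above.
const spaceScale = { 1: '0.25rem', 2: '0.5rem', 3: '0.75rem', 4: '1rem',
                     6: '1.5rem', 8: '2rem', 12: '3rem', 16: '4rem' };

function marginTopUtilities(scale) {
  return Object.entries(scale)
    .map(([step, size]) => `.mt-${step} { margin-top: var(--space-${step}, ${size}); }`)
    .join('\n');
}

console.log(marginTopUtilities(spaceScale));
// .mt-1 { margin-top: var(--space-1, 0.25rem); }
// .mt-2 { margin-top: var(--space-2, 0.5rem); }
// ...
```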
If your workflow relies heavily on generated utility classes (Tailwind-style), you're still going to want a preprocessor or PostCSS at minimum. If you're writing component styles with a design token system, you probably don't need one.
The CSS of 2024 is not the CSS of 2006. The features that justified Sass — variables, nesting, calculation, color manipulation — are now in the language. The features that Sass still does better (loops, mixins, programmatic generation) matter for certain architectures and not for others.
The default answer used to be "yes, use Sass." The honest answer now is "it depends, and the bar for needing Sass is higher than it used to be." That's progress.
HTML-First Development
What You Get for Free
The history of web development is a history of forgetting what HTML already does.
Every few years, the frontend community rediscovers something the browser has handled natively — and then builds a library around it. Form validation. Dialog elements. Accordion components. Popovers. The accordion-of-the-week on npm wraps <details> in 50KB of JavaScript because nobody checked whether <details> would suffice.
<details> frequently suffices.
This chapter is about reading the HTML spec before reaching for JavaScript, and what you find when you do.
The Interactive Elements You Already Have
Details and Summary: Free Accordion
<details>
<summary>What is the zero build movement?</summary>
<p>A philosophy of using native browser capabilities rather than build
tools wherever possible — ES modules, import maps, modern CSS, and
semantic HTML — instead of adding compilation steps to solve problems
the platform already handles.</p>
</details>
This renders as a clickable disclosure widget. The triangle indicator is browser-provided and can be restyled with CSS. No JavaScript. No library. The open attribute controls the initial state:
<details open>
<summary>Expanded by default</summary>
<p>This one starts open.</p>
</details>
For CSS styling:
details {
border: 1px solid var(--color-border);
border-radius: var(--radius-md);
padding: var(--space-4);
}
details[open] summary {
margin-bottom: var(--space-3);
border-bottom: 1px solid var(--color-border);
padding-bottom: var(--space-3);
}
summary {
cursor: pointer;
font-weight: 600;
list-style: none; /* Remove default triangle */
}
summary::after {
content: '+';
float: right;
}
details[open] summary::after {
content: '−';
}
FAQs, accordions, "show more" sections — <details> handles all of these.
Dialog: The Modal Element
The <dialog> element has been in browsers since 2022 (and Chrome/Opera since 2014). It's a proper modal with:
- Focus trapping (keyboard navigation stays within the dialog while it's open)
- The ::backdrop pseudo-element for the overlay
- Escape key to close
- show() / showModal() / close() methods
- The open attribute
<button id="open-btn" type="button">Open Dialog</button>
<dialog id="my-dialog">
<h2>Confirm Action</h2>
<p>Are you sure you want to proceed?</p>
<menu>
<li><button id="confirm-btn" type="button">Confirm</button></li>
<li><button id="cancel-btn" type="button">Cancel</button></li>
</menu>
</dialog>
const dialog = document.getElementById('my-dialog');
document.getElementById('open-btn').addEventListener('click', () => {
dialog.showModal(); // Opens as modal with focus trap and backdrop
});
document.getElementById('cancel-btn').addEventListener('click', () => {
dialog.close();
});
document.getElementById('confirm-btn').addEventListener('click', () => {
// Do the thing
dialog.close('confirmed'); // Can pass a return value
});
// The dialog fires a 'close' event, has a returnValue property
dialog.addEventListener('close', () => {
if (dialog.returnValue === 'confirmed') {
console.log('User confirmed');
}
});
dialog {
border: 1px solid var(--color-border);
border-radius: var(--radius-lg);
padding: var(--space-8);
max-width: 500px;
width: 90%;
box-shadow: var(--shadow-lg);
}
dialog::backdrop {
background: oklch(0% 0 0 / 0.5);
backdrop-filter: blur(4px);
}
This is a proper, accessible modal dialog — focus trap, escape-to-close, backdrop. Most "modal component" libraries on npm hand-build something that does less than this.
Popover API: Tooltips and Dropdowns Without JavaScript
The Popover API shipped in Chrome and Safari in 2023, and in Firefox in 2024. It's a mechanism for showing overlaid content — tooltips, dropdowns, command palettes — with none of the usual JavaScript show/hide and dismissal plumbing:
<button popovertarget="user-menu">Account</button>
<menu id="user-menu" popover>
<li><a href="/profile">Profile</a></li>
<li><a href="/settings">Settings</a></li>
<li><button type="button">Sign out</button></li>
</menu>
That's it. The button with popovertarget opens and closes the element with popover. No JavaScript. The popover:
- Appears in the top layer (above everything, including modals)
- Dismisses on Escape
- Dismisses on click outside (light dismiss)
- Is accessible with proper ARIA behavior
For positioning, the CSS Anchor Positioning API (shipping 2024) positions the popover relative to the button:
#user-menu {
position-anchor: --user-menu-anchor;
position-area: bottom span-right;
margin-top: var(--space-1);
}
[popovertarget="user-menu"] {
anchor-name: --user-menu-anchor;
}
Anchor positioning is Chrome 125+; Firefox and Safari are catching up. For now, you can supplement with a small JavaScript positioning helper. But the structure — the open/close behavior, the top-layer stacking, the light dismiss — is free.
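If you do need that interim positioning helper, the geometry is small enough to keep inline. A sketch of the pure math — roughly what position-area: bottom computes, with a flip-above fallback. The function name and rects are invented; in the browser you'd feed it getBoundingClientRect() results:

```javascript
// Minimal fallback positioning for browsers without CSS anchor positioning.
// Pure geometry: place below the trigger, left-aligned; flip above if the
// popover would overflow the viewport. Helper name is invented.
function placePopover(anchorRect, popoverSize, viewportHeight, gap = 4) {
  const below = anchorRect.bottom + gap;
  const fitsBelow = below + popoverSize.height <= viewportHeight;
  return {
    top: fitsBelow ? below : anchorRect.top - gap - popoverSize.height,
    left: anchorRect.left,
  };
}

const buttonRect = { left: 120, right: 200, top: 510, bottom: 542 };
console.log(placePopover(buttonRect, { width: 180, height: 240 }, 800));
// { top: 546, left: 120 } — fits below the trigger
console.log(placePopover(buttonRect, { width: 180, height: 240 }, 600));
// { top: 266, left: 120 } — flipped above the trigger
```

In the browser you'd apply the result with position: fixed and inline top/left styles, and delete the helper once anchor positioning is everywhere.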
Native Form Validation
HTML5 form validation has been available for over a decade and is wildly underused:
<form id="signup-form">
<fieldset>
<legend>Create account</legend>
<label for="email">Email</label>
<input
type="email"
id="email"
name="email"
required
autocomplete="email"
>
<label for="password">Password</label>
<input
type="password"
id="password"
name="password"
required
minlength="8"
pattern="(?=.*[A-Z])(?=.*[0-9]).{8,}"
aria-describedby="password-hint"
>
<p id="password-hint">At least 8 characters, one uppercase, one number.</p>
<label for="confirm">Confirm password</label>
<input
type="password"
id="confirm"
name="confirm"
required
>
<button type="submit">Create account</button>
</fieldset>
</form>
const form = document.getElementById('signup-form');
// Only intercept to add custom cross-field validation
form.addEventListener('submit', (e) => {
const password = form.elements.password.value;
const confirm = form.elements.confirm.value;
if (password !== confirm) {
form.elements.confirm.setCustomValidity("Passwords don't match");
form.elements.confirm.reportValidity();
e.preventDefault();
return;
}
form.elements.confirm.setCustomValidity(''); // Clear error
// Form is valid — submit or handle with fetch
});
CSS styling of validation states:
input:invalid:not(:placeholder-shown) {
border-color: var(--color-error);
}
input:valid:not(:placeholder-shown) {
border-color: var(--color-success);
}
input:invalid:focus {
outline-color: var(--color-error);
}
The :not(:placeholder-shown) trick avoids showing validation errors on empty, untouched fields. Note that it only works when the input actually has a placeholder attribute (even a single space will do); without one, :placeholder-shown never matches. This gives you "validate on change after first interaction" without JavaScript.
setCustomValidity() integrates your custom errors into the browser's native validation bubbles. reportValidity() triggers display of those bubbles. You get accessible error messaging that announces via screen readers, positioned relative to the input, without writing any accessibility code yourself.
Web Components: Custom Elements That Actually Work
Web components are four APIs that work together:
- Custom Elements: Define new HTML elements with JavaScript behavior
- Shadow DOM: Encapsulated DOM tree with scoped CSS
- HTML Templates: Inert, parseable markup for creating instances
- Declarative Shadow DOM: Server-rendered shadow DOM without JavaScript
Custom Elements without Shadow DOM are straightforward:
// Define a reusable component
class UserAvatar extends HTMLElement {
static get observedAttributes() {
return ['name', 'size', 'src'];
}
connectedCallback() {
this.render();
}
attributeChangedCallback() {
this.render();
}
render() {
const name = this.getAttribute('name') ?? 'User';
const size = this.getAttribute('size') ?? '40';
const src = this.getAttribute('src');
const initials = name.split(' ').map(n => n[0]).join('').slice(0, 2);
if (src) {
this.innerHTML = `
<img src="${src}"
alt="${name}"
width="${size}"
height="${size}"
style="border-radius:50%;width:${size}px;height:${size}px">
`;
} else {
this.innerHTML = `
<div style="
width:${size}px;height:${size}px;
border-radius:50%;
background:var(--color-brand);
color:white;
display:flex;align-items:center;justify-content:center;
font-size:${Number(size) * 0.4}px;font-weight:600;
">${initials}</div>
`;
}
}
}
customElements.define('user-avatar', UserAvatar);
<!-- Use it like any HTML element -->
<user-avatar name="Alice Chen" size="48"></user-avatar>
<user-avatar name="Bob Smith" src="/avatars/bob.jpg" size="32"></user-avatar>
This element is observable, updates when attributes change, works with innerHTML, document.createElement, and server-rendered HTML. No framework required.
With Shadow DOM:
class ToastMessage extends HTMLElement {
constructor() {
super();
this.attachShadow({ mode: 'open' });
}
connectedCallback() {
this.shadowRoot.innerHTML = `
<style>
:host {
display: block;
padding: 1rem 1.5rem;
border-radius: 8px;
font-family: system-ui, sans-serif;
}
:host([data-type="error"]) { background: #fee; border: 1px solid #fcc; }
:host([data-type="success"]) { background: #efe; border: 1px solid #cfc; }
:host([data-type="info"]) { background: #eef; border: 1px solid #ccf; }
button { float: right; background: none; border: none; cursor: pointer; }
</style>
<button aria-label="Dismiss">×</button>
<slot></slot>
`;
this.shadowRoot.querySelector('button').addEventListener('click', () => {
this.remove();
});
}
}
customElements.define('toast-message', ToastMessage);
<toast-message data-type="success">
Your changes have been saved.
</toast-message>
The CSS in the Shadow DOM is fully encapsulated — no leakage in or out. The :host pseudo-class styles the element itself from within its shadow root. <slot> is where light DOM children appear.
Declarative Shadow DOM: SSR Web Components
Declarative Shadow DOM lets you render shadow DOM from the server, without JavaScript:
<user-card>
<template shadowrootmode="open">
<style>
:host { display: flex; align-items: center; gap: 1rem; }
.info h3 { margin: 0; }
.info p { margin: 0; color: #666; }
</style>
<slot name="avatar"></slot>
<div class="info">
<slot name="name"></slot>
<slot name="role"></slot>
</div>
</template>
<img slot="avatar" src="/avatar.jpg" alt="Alice" width="48" height="48">
<h3 slot="name">Alice Chen</h3>
<p slot="role">Senior Engineer</p>
</user-card>
This renders in the browser with an encapsulated shadow DOM, no JavaScript required. If JavaScript loads and a custom element class is registered for user-card, it can add behavior without disrupting the existing rendering. If JavaScript doesn't load, the HTML renders correctly on its own.
This is the right way to think about web components: progressive enhancement at the component level. The HTML always works. JavaScript adds behavior.
<template> for Reusable Markup
The <template> element holds HTML that isn't rendered but can be cloned and inserted:
<template id="card-template">
<article class="card">
<header>
<h3 class="card-title"></h3>
<span class="card-badge"></span>
</header>
<div class="card-body"></div>
<footer class="card-footer">
<button class="card-action" type="button">View details</button>
</footer>
</article>
</template>
function createCard({ title, badge, body, onView }) {
const template = document.getElementById('card-template');
const clone = template.content.cloneNode(true);
clone.querySelector('.card-title').textContent = title;
clone.querySelector('.card-badge').textContent = badge;
clone.querySelector('.card-body').textContent = body;
clone.querySelector('.card-action').addEventListener('click', onView);
return clone;
}
// Use it
const card = createCard({
title: 'Server status',
badge: 'Healthy',
body: 'All systems operational.',
onView: () => navigate('/status'),
});
document.getElementById('dashboard').appendChild(card);
This is a render function with no framework. It clones the template (which the browser has already parsed), fills in values, attaches events, and returns a ready-to-insert DOM fragment. Fast, explicit, and debuggable.
Input Types That Aren't Text
A significant fraction of custom date pickers, color pickers, range sliders, and file uploads exist because developers didn't know the input type for these things exists:
<!-- Date picker -->
<input type="date" min="2024-01-01" max="2024-12-31">
<!-- Date and time -->
<input type="datetime-local">
<!-- Month picker -->
<input type="month">
<!-- Color picker -->
<input type="color" value="#3b82f6">
<!-- Range slider -->
<input type="range" min="0" max="100" step="5" value="50">
<!-- File upload -->
<input type="file" accept="image/*" multiple>
<!-- Search with clear button -->
<input type="search" placeholder="Search...">
type="date" gives you a native date picker. It looks different in different browsers and OSes, which is either a feature (it matches what users expect on their platform) or a limitation (it doesn't match your design system). For internal tools: feature. For consumer apps where brand consistency matters: maybe a custom component is warranted.
What HTML-First Buys You
Accessibility by default. Native HTML elements have ARIA semantics built in. <button> is keyboard-focusable and activatable without JavaScript. <dialog> has focus management. <input type="email"> announces its purpose to screen readers. When you replace native elements with custom JavaScript widgets, you take on the responsibility of implementing all of this yourself — and most custom implementations miss something.
Performance. Parsing HTML is one of the fastest things browsers do. A <details> element toggling open adds no render cost. A JavaScript-powered accordion has initialization cost, event handler cost, and potential layout thrash.
Progressive enhancement. HTML works before JavaScript loads. A form with native validation works even if your validation library fails. A <dialog> can have its basic behavior provided by HTML and its enhanced behavior added by JavaScript — without either being required for the other.
Less JavaScript means less breakage. JavaScript can fail. CDNs go down. Network requests fail. Batteries die mid-load. HTML doesn't have these problems. Every line of JavaScript you replace with semantic HTML is a line that works in degraded conditions.
The question to ask before reaching for a component library isn't "which library handles this?" It's "does the browser already handle this?" The answer is yes more often than the ecosystem assumes.
Web components, the Popover API, <dialog>, <details>, native form validation — these aren't replacements for every UI library in every case. They're the floor. Know the floor before you build on top of it.
Server-Side Zero Build
Go, Deno, Bun, and Single-Binary Deployments
The zero-build movement on the server is, in some ways, older than on the client. Go has always compiled to a single binary. Rust compiles to a single binary. Even Java, for all its faults, produces a JAR that runs anywhere with a JVM. The "build step required" problem was mainly a JavaScript problem, created by Node's ecosystem of compilation and transpilation.
But even server-side JavaScript has options now. This chapter covers the full range.
Go: The Original Zero-Config Server
Go was designed with operational simplicity as an explicit goal. The result is a language where:
- There are no runtime dependencies
- The output is a single static binary
- Cross-compilation is a first-class feature built into the toolchain
- The standard library handles HTTP, JSON, templates, crypto, and most common tasks
// server.go — a complete JSON API in the standard library
package main
import (
"encoding/json"
"log"
"net/http"
"strconv"
"sync"
)
type Task struct {
ID int `json:"id"`
Title string `json:"title"`
Done bool `json:"done"`
}
var (
tasks = []Task{}
nextID = 1
mu sync.RWMutex
)
func main() {
mux := http.NewServeMux()
mux.HandleFunc("GET /api/tasks", func(w http.ResponseWriter, r *http.Request) {
mu.RLock()
defer mu.RUnlock()
w.Header().Set("Content-Type", "application/json")
json.NewEncoder(w).Encode(tasks)
})
mux.HandleFunc("POST /api/tasks", func(w http.ResponseWriter, r *http.Request) {
var body struct {
Title string `json:"title"`
}
if err := json.NewDecoder(r.Body).Decode(&body); err != nil {
http.Error(w, "bad request", http.StatusBadRequest)
return
}
mu.Lock()
task := Task{ID: nextID, Title: body.Title}
nextID++
tasks = append(tasks, task)
mu.Unlock()
w.Header().Set("Content-Type", "application/json")
w.WriteHeader(http.StatusCreated)
json.NewEncoder(w).Encode(task)
})
mux.HandleFunc("PATCH /api/tasks/{id}", func(w http.ResponseWriter, r *http.Request) {
id, err := strconv.Atoi(r.PathValue("id"))
if err != nil {
http.Error(w, "invalid id", http.StatusBadRequest)
return
}
var updates struct {
Done *bool `json:"done"`
Title *string `json:"title"`
}
if err := json.NewDecoder(r.Body).Decode(&updates); err != nil {
http.Error(w, "bad request", http.StatusBadRequest)
return
}
mu.Lock()
defer mu.Unlock()
for i, t := range tasks {
if t.ID == id {
if updates.Done != nil {
tasks[i].Done = *updates.Done
}
if updates.Title != nil {
tasks[i].Title = *updates.Title
}
w.Header().Set("Content-Type", "application/json")
json.NewEncoder(w).Encode(tasks[i])
return
}
}
http.Error(w, "not found", http.StatusNotFound)
})
log.Println("Listening on :8080")
log.Fatal(http.ListenAndServe(":8080", mux))
}
Note: the "GET /api/tasks" method-in-pattern syntax requires Go 1.22+, which shipped in February 2024.
# Run in development
go run server.go
# Build a binary
go build -o server server.go
# Cross-compile for Linux from Mac
GOOS=linux GOARCH=amd64 go build -o server-linux server.go
# The binary is self-contained — no runtime required on the target
scp server-linux user@myserver:~/
ssh user@myserver './server-linux &'
The binary is roughly 6MB for this program. It contains the Go runtime, your application code, and all dependencies. Deploy it anywhere Linux runs. No language installation, no package manager, no version conflicts.
The "build step" in Go is a single go build command that takes seconds. It doesn't require configuration, doesn't have a plugin ecosystem to manage, and produces the same output every time. This is the bar that the JavaScript build ecosystem should be measured against.
Serving Static Files Alongside Your API
A common pattern: Go serves the API and the frontend from the same process:
package main
import (
"log"
"net/http"
"os"
"path/filepath"
)
func main() {
mux := http.NewServeMux()
// API routes (apiHandler defined elsewhere)
mux.HandleFunc("GET /api/", apiHandler)
// Static files with an SPA fallback: serve the file if it exists;
// otherwise return index.html so client-side routes still resolve.
mux.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
path := filepath.Join("./public", filepath.Clean(r.URL.Path))
if info, err := os.Stat(path); err == nil && !info.IsDir() {
http.ServeFile(w, r, path)
return
}
http.ServeFile(w, r, filepath.Join("./public", "index.html"))
})
log.Fatal(http.ListenAndServe(":8080", mux))
}
One binary. Serves the SPA and the API. Deploys to a $5/month VPS with scp. No reverse proxy required (though one is fine if you prefer). No Docker required (though one is fine if you prefer). No Kubernetes required (definitely not required).
Bun: Fast JavaScript Without the Toolchain
Bun is a JavaScript runtime built for speed — faster startup, faster execution — with a built-in HTTP server, test runner, bundler, and package manager.
For zero-build server development, the key properties are:
- Runs JavaScript and TypeScript natively (like Deno)
- No compilation step for TypeScript
- Fast startup time (relevant for serverless)
- Bun.serve() for HTTP — no framework needed for simple cases
// server.ts — TypeScript, no compilation
interface Task {
id: number;
title: string;
done: boolean;
}
const tasks: Task[] = [];
let nextId = 1;
const server = Bun.serve({
port: 8080,
async fetch(req) {
const url = new URL(req.url);
// CORS
const headers = new Headers({
'Content-Type': 'application/json',
'Access-Control-Allow-Origin': '*',
});
if (req.method === 'OPTIONS') {
return new Response(null, {
headers: { 'Access-Control-Allow-Origin': '*',
'Access-Control-Allow-Methods': 'GET, POST, PATCH, DELETE',
'Access-Control-Allow-Headers': 'Content-Type' }
});
}
if (url.pathname === '/api/tasks') {
if (req.method === 'GET') {
return Response.json(tasks, { headers });
}
if (req.method === 'POST') {
const body = (await req.json()) as { title: string };
const task: Task = { id: nextId++, title: body.title, done: false };
tasks.push(task);
return Response.json(task, { status: 201, headers });
}
}
const match = url.pathname.match(/^\/api\/tasks\/(\d+)$/);
if (match && req.method === 'PATCH') {
const id = parseInt(match[1]);
const index = tasks.findIndex(t => t.id === id);
if (index === -1) return Response.json({ error: 'Not found' }, { status: 404, headers });
const updates = (await req.json()) as Partial<Task>;
tasks[index] = { ...tasks[index], ...updates };
return Response.json(tasks[index], { headers });
}
return Response.json({ error: 'Not found' }, { status: 404, headers });
},
});
console.log(`Listening on http://localhost:${server.port}`);
bun run server.ts # Runs TypeScript directly
Bun's startup time is fast enough for serverless cold starts. Its HTTP throughput is competitive with Go for many workloads. The TypeScript support is native — no tsc, no ts-node, no compilation.
Unlike Deno, Bun has near-complete npm compatibility. If your application depends on npm packages, Bun is the path of least resistance for zero-build TypeScript server development.
Deno Revisited: Production Server Patterns
Chapter 5 covered Deno's development story. Here's what production Deno deployment looks like.
Single Binary with deno compile
deno compile \
--allow-net \
--allow-read=./public \
--allow-env \
--target x86_64-unknown-linux-gnu \
--output app-linux \
app.ts
The result is a ~70–90MB binary (it includes the Deno runtime) that runs your TypeScript application without any Deno installation on the target server. For operations teams who don't want to manage language runtimes, this is compelling.
Serving Static Files with a Deno API
// app.ts — API + static file server
import { Hono } from "jsr:@hono/hono";
import { serveStatic } from "jsr:@hono/hono/deno";
const app = new Hono();
// API routes
app.get('/api/status', (c) => c.json({ status: 'ok', time: new Date() }));
// Serve static files from ./public
app.use('/*', serveStatic({ root: './public' }));
// SPA fallback
app.get('/*', serveStatic({ path: './public/index.html' }));
Deno.serve({ port: 8080 }, app.fetch);
Deno Deploy: Global Edge with Zero Config
For applications where you want global distribution without infrastructure:
// This runs in 35+ regions worldwide with zero infrastructure config
import { Hono } from "jsr:@hono/hono";
const app = new Hono();
app.get("/api/geo", (c) => {
// Deno Deploy sets the serving region in DENO_REGION;
// the client IP arrives in x-forwarded-for
return c.json({
ip: c.req.header("x-forwarded-for"),
region: Deno.env.get("DENO_REGION"),
});
});
Deno.serve(app.fetch);
deployctl deploy app.ts
# → Live at https://your-app.deno.dev
No EC2, no ECS, no ALB, no Route53, no CloudFront. One command. Global.
Node.js: Zero-Build Is Possible Here Too
Node has historically required compilation for TypeScript, but this is changing:
Node 22.6+ with --experimental-strip-types:
node --experimental-strip-types server.ts
Type annotations are stripped (not checked), and the code runs. No tsc, no ts-node. This is experimental but shipping fast. The TypeScript checking happens in your editor via tsserver — the execution skips it.
Node 23+ with --experimental-transform-types:
node --experimental-transform-types server.ts
Adds support for TypeScript-only features (enums, namespaces, parameter properties) in addition to type stripping. More complete TypeScript support without compilation.
This matters because it means the zero-build philosophy can extend to existing Node applications — gradually, without a complete rewrite.
SQLite Without an ORM Build Step
For data persistence in zero-build server applications, SQLite is underrated. It's a file, it's fast for reads, it handles reasonable write loads, and there are native bindings in every runtime:
// Deno
import { Database } from "jsr:@db/sqlite";
const db = new Database("app.db");
db.exec(`
CREATE TABLE IF NOT EXISTS tasks (
id INTEGER PRIMARY KEY AUTOINCREMENT,
title TEXT NOT NULL,
done INTEGER DEFAULT 0,
created_at TEXT DEFAULT CURRENT_TIMESTAMP
)
`);
const insertTask = db.prepare(
"INSERT INTO tasks (title) VALUES (?) RETURNING *"
);
const getTasks = db.prepare("SELECT * FROM tasks ORDER BY created_at DESC");
export function createTask(title: string) {
return insertTask.get(title);
}
export function listTasks() {
return getTasks.all();
}
// Go with the standard database/sql package and a SQLite driver
package main

import (
	"database/sql"
	"log"

	_ "modernc.org/sqlite"
)

func main() {
	db, err := sql.Open("sqlite", "app.db")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	if _, err := db.Exec(`CREATE TABLE IF NOT EXISTS tasks (
		id INTEGER PRIMARY KEY AUTOINCREMENT,
		title TEXT NOT NULL,
		done BOOLEAN DEFAULT FALSE
	)`); err != nil {
		log.Fatal(err)
	}
}
No ORM. No migration framework. No build step for the database layer. SQL is a query language — writing it directly is not a sin, and for small to medium applications it's the clearest approach.
The Operational Case for Single Binaries
The argument for single-binary deployment isn't just laziness (though it's also laziness, and laziness is sometimes engineering wisdom):
Reproducible deployments. The binary on your server is exactly the binary you tested. No "works on my machine" with runtime version mismatches, no npm install on the server that could pull different packages.
Simple rollback. Keep the previous binary. If something breaks, swap them. Your rollback is a file copy.
Minimal attack surface. A Go binary running as a non-root user, with no package manager, no package cache, no npm on the server — this is a meaningfully smaller attack surface than a Node application with a node_modules directory containing hundreds of transitive dependencies, each potentially vulnerable.
Cold start performance. For containers and serverless, startup time matters. A Go binary starts in milliseconds. A Node application starting with hundreds of required modules starts measurably slower.
Deployment simplicity. scp binary user@server:~/app && ssh user@server 'pkill app; ./app &' is a complete deployment script. It's also embarrassingly fast.
The server-side zero-build stack in 2024:
- Go: Maximum performance, minimum runtime overhead, excellent standard library, true single binary
- Deno: TypeScript native, excellent developer experience, deploys to the edge as source or compiled binary
- Bun: Fastest Node-compatible runtime, TypeScript native, good for teams with npm dependencies
- Node 22+: Gradual path to zero-build TypeScript, largest ecosystem
All four of these support serving static files, handling JSON APIs, and connecting to databases without a build step. The "compile the server" step exists in Go — it's a single command, takes seconds, and produces a deployment artifact that's fundamentally simpler than anything npm-based.
The next chapter covers something you probably assumed required a complex toolchain: testing.
Testing Without a Toolchain
Node's Built-In Test Runner Is Actually Good
The JavaScript testing ecosystem is a vivid illustration of what happens when a platform lacks a feature for long enough: the community builds five competing solutions, each with their own config format, assertion library, mock API, and opinion about where test files should live. Then the platform ships the feature natively and everyone awkwardly avoids mentioning it.
Node shipped a built-in test runner in Node 18 (experimental) and Node 20 (stable). It works. It doesn't need Jest, it doesn't need Vitest, it doesn't need Mocha, it doesn't need a test runner config file.
Node's Built-In Test Runner
// math.test.js — runs with: node --test math.test.js
import { describe, it } from 'node:test';
import assert from 'node:assert/strict';
import { add, divide, average } from './math.js';
describe('math utilities', () => {
it('adds two numbers', () => {
assert.equal(add(1, 2), 3);
assert.equal(add(-1, 1), 0);
});
it('divides correctly', () => {
assert.equal(divide(10, 2), 5);
});
it('throws on division by zero', () => {
assert.throws(() => divide(10, 0), /division by zero/);
});
it('calculates average', () => {
assert.equal(average([1, 2, 3, 4, 5]), 3);
assert.equal(average([]), 0);
});
});
// math.js
export function add(a, b) {
return a + b;
}
export function divide(a, b) {
if (b === 0) throw new Error('division by zero');
return a / b;
}
export function average(numbers) {
if (numbers.length === 0) return 0;
return numbers.reduce((sum, n) => sum + n, 0) / numbers.length;
}
node --test math.test.js
Output:
▶ math utilities
✔ adds two numbers (0.312ms)
✔ divides correctly (0.06ms)
✔ throws on division by zero (0.217ms)
✔ calculates average (0.055ms)
▶ math utilities (1.207ms)
ℹ tests 4
ℹ suites 1
ℹ pass 4
ℹ fail 0
ℹ cancelled 0
ℹ skipped 0
ℹ todo 0
ℹ duration_ms 43.25
TAP output for CI: node --test --reporter tap. The exit code is 0 on success, non-zero on failure. Every CI system in the world understands this.
Running All Test Files
# Run all *.test.js files recursively
node --test
# Or specify a glob pattern
node --test '**/*.test.js'
# Watch mode
node --test --watch
node --test with no arguments discovers test files automatically — files matching *.test.{js,mjs,cjs} or *.spec.{js,mjs,cjs}, or files in a test directory. The defaults are sensible, overridable with flags, and need no config file.
Async Tests
import { test, describe, it } from 'node:test';
import assert from 'node:assert/strict';
describe('async API', () => {
it('fetches user data', async () => {
// These tests hit a public placeholder API for illustration;
// in a real suite you'd mock fetch to avoid network flakiness
const response = await fetch('https://jsonplaceholder.typicode.com/users/1');
const user = await response.json();
assert.equal(user.id, 1);
assert.ok(user.name.length > 0);
});
it('handles not found', async () => {
const response = await fetch('https://jsonplaceholder.typicode.com/users/9999');
assert.equal(response.status, 404);
});
});
Async tests work with async/await. No special wrapper, no done() callback, no timeout configuration. If the async function throws or rejects, the test fails.
Mocking
The built-in test runner has a mock API:
import { test, mock } from 'node:test';
import assert from 'node:assert/strict';
test('mocks a function', () => {
const fn = mock.fn((x) => x * 2);
assert.equal(fn(5), 10);
assert.equal(fn(3), 6);
assert.equal(fn.mock.calls.length, 2);
assert.deepEqual(fn.mock.calls[0].arguments, [5]);
});
test('mocks a method', () => {
const obj = {
greet(name) { return `Hello, ${name}`; }
};
mock.method(obj, 'greet', (name) => `Hi, ${name}!`);
assert.equal(obj.greet('Alice'), 'Hi, Alice!');
assert.equal(obj.greet.mock.calls.length, 1);
mock.restoreAll();
});
// Mocking modules (Node 22+)
test('mocks a module import', async (t) => {
t.mock.module('./database.js', {
namedExports: {
getUser: () => ({ id: 1, name: 'Test User' }),
}
});
const { getUser } = await import('./database.js');
const user = getUser(1);
assert.equal(user.name, 'Test User');
});
Module mocking arrived in Node 22 behind the --experimental-test-module-mocks flag. For older Node versions, you can inject dependencies through function parameters instead of module-level imports, which makes mocking unnecessary:
// Instead of this (hard to mock):
import { db } from './database.js';
export function getUser(id) {
return db.query('SELECT * FROM users WHERE id = ?', [id]);
}
// Do this (easy to test):
export function getUser(id, db) {
return db.query('SELECT * FROM users WHERE id = ?', [id]);
}
// In tests:
const mockDb = { query: () => ({ id: 1, name: 'Alice' }) };
const user = getUser(1, mockDb);
Dependency injection is testable without mocking infrastructure. This isn't always possible, but it's worth preferring when it is.
Code Coverage
node --test --experimental-test-coverage
Output:
─────────────────────────────────────────────────────────────────
File │ Line % │ Branch % │ Function %
─────────────────────────────────────────────────────────────────
math.js │ 100.00 │ 100.00 │ 100.00
─────────────────────────────────────────────────────────────────
No Istanbul, no nyc, no additional configuration. The flag is --experimental-test-coverage, which will become stable as the API matures.
Deno's Test Runner
Deno has a native test runner that's arguably more polished:
// math.test.ts
import { assertEquals, assertThrows } from "jsr:@std/assert";
import { add, divide, average } from "./math.ts";
Deno.test("adds two numbers", () => {
assertEquals(add(1, 2), 3);
assertEquals(add(-1, 1), 0);
});
Deno.test("divides correctly", () => {
assertEquals(divide(10, 2), 5);
});
Deno.test("throws on division by zero", () => {
assertThrows(() => divide(10, 0), Error, "division by zero");
});
// Async test — cancel the unread body, or Deno's resource
// sanitizer will flag a leak and fail the test
Deno.test("fetches data", async () => {
const response = await fetch("https://api.example.com/health");
assertEquals(response.status, 200);
await response.body?.cancel();
});
// Test with setup/teardown
Deno.test({
name: "database operations",
async fn() {
const db = await openTestDatabase();
try {
await db.execute("INSERT INTO users (name) VALUES ('Alice')");
const user = await db.queryOne("SELECT * FROM users WHERE name = 'Alice'");
assertEquals(user.name, "Alice");
} finally {
await db.close();
}
},
});
deno test # Run all test files
deno test math.test.ts # Run specific file
deno test --watch # Watch mode
deno test --coverage # Coverage report
deno test --doc # Test doc examples
Deno's --doc flag is particularly interesting: it runs code examples from JSDoc comments as tests, which keeps documentation in sync with behavior.
Testing Browser Code Without a Framework
The question "how do I test browser code without Jest/Vitest/webpack" has several answers.
Pure Logic: Test in Node
If your browser code contains pure logic — functions that take values and return values — test those in Node. Don't test DOM manipulation in Node. Test logic there, DOM behavior in a browser.
// utils/date.js — pure functions, testable in Node
export function formatRelativeTime(date, now = new Date()) {
const diff = now - new Date(date);
const seconds = Math.floor(diff / 1000);
const minutes = Math.floor(seconds / 60);
const hours = Math.floor(minutes / 60);
const days = Math.floor(hours / 24);
if (seconds < 60) return 'just now';
if (minutes < 60) return `${minutes}m ago`;
if (hours < 24) return `${hours}h ago`;
return `${days}d ago`;
}
// utils/date.test.js
import { test } from 'node:test';
import assert from 'node:assert/strict';
import { formatRelativeTime } from './date.js';
const now = new Date('2024-01-15T12:00:00Z');
test('returns "just now" for recent times', () => {
const thirtySecondsAgo = new Date(now - 30000);
assert.equal(formatRelativeTime(thirtySecondsAgo, now), 'just now');
});
test('returns minutes for < 1 hour', () => {
const tenMinutesAgo = new Date(now - 600000);
assert.equal(formatRelativeTime(tenMinutesAgo, now), '10m ago');
});
test('returns days for > 24 hours', () => {
const twoDaysAgo = new Date(now - 172800000);
assert.equal(formatRelativeTime(twoDaysAgo, now), '2d ago');
});
No bundler. No DOM. Just functions and assertions.
DOM Testing: Playwright and the Real Browser
For tests that need a real browser — component rendering, interaction, accessibility — skip the JSDOM simulation and use Playwright:
// tests/task-list.spec.js
import { test, expect } from '@playwright/test';
test.beforeEach(async ({ page }) => {
await page.goto('http://localhost:8080');
});
test('can add a task', async ({ page }) => {
await page.fill('[data-testid="new-task-input"]', 'Write tests');
await page.click('[data-testid="add-task-button"]');
const tasks = page.locator('[data-testid="task-item"]');
await expect(tasks).toHaveCount(1);
await expect(tasks.first()).toContainText('Write tests');
});
test('can mark a task as done', async ({ page }) => {
// Add a task first
await page.fill('[data-testid="new-task-input"]', 'Test task');
await page.click('[data-testid="add-task-button"]');
const checkbox = page.locator('[data-testid="task-checkbox"]').first();
await checkbox.check();
await expect(checkbox).toBeChecked();
});
test('has no accessibility violations', async ({ page }) => {
const results = await page.accessibility.snapshot();
// Basic check — Playwright also has axe integration
expect(results).toBeTruthy();
});
npx playwright install # One-time browser download
npx playwright test
Playwright tests your actual application in real browsers. JSDOM is a simulation that gets the details wrong in ways that matter. Real browsers don't.
The "zero build" here is your application — Playwright is a testing tool, and that's allowed. You're not compiling your application to test it; you're serving it and letting a browser interact with it.
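One convenience worth knowing: instead of hardcoding localhost:8080 and starting the server by hand, Playwright's config can launch your static server around the test run. A sketch — the http-server command and port are assumptions; substitute whatever serves your public/ directory:

```javascript
// playwright.config.js — Playwright starts (and stops) the server
// for the test run; `command` is whatever serves your files
import { defineConfig } from '@playwright/test';

export default defineConfig({
  testDir: './tests',
  webServer: {
    command: 'npx http-server ./public -p 8080',
    url: 'http://localhost:8080',
    reuseExistingServer: !process.env.CI,
  },
  use: {
    baseURL: 'http://localhost:8080',
  },
});
```

With baseURL set, the earlier tests can use page.goto('/') instead of the absolute URL.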
Browser-Native Test Utilities
If you want to run tests in the browser itself — useful for testing Web APIs that Node can't simulate:
<!-- test-runner.html -->
<!DOCTYPE html>
<html>
<head>
<title>Tests</title>
</head>
<body>
<div id="results"></div>
<script type="module">
// Minimal test runner — no dependencies
const results = [];
async function test(name, fn) {
try {
await fn();
results.push({ name, passed: true });
} catch (e) {
results.push({ name, passed: false, error: e.message });
}
}
function assert(condition, message = 'Assertion failed') {
if (!condition) throw new Error(message);
}
// Tests
await test('localStorage works', () => {
localStorage.setItem('test', 'value');
assert(localStorage.getItem('test') === 'value');
localStorage.removeItem('test');
});
await test('CSS custom properties compute correctly', () => {
const el = document.createElement('div');
el.style.setProperty('--test-val', '42px');
document.body.appendChild(el);
const computed = getComputedStyle(el).getPropertyValue('--test-val').trim();
assert(computed === '42px', `Expected '42px', got '${computed}'`);
el.remove();
});
// Render results
const container = document.getElementById('results');
for (const result of results) {
const div = document.createElement('div');
div.textContent = `${result.passed ? '✔' : '✘'} ${result.name}`;
div.style.color = result.passed ? 'green' : 'red';
if (result.error) div.title = result.error;
container.appendChild(div);
}
</script>
</body>
</html>
This is 50 lines that run in any browser. For testing browser APIs that can't be simulated in Node, it's entirely functional.
What the Test Toolchain Actually Provides
It's worth being honest about what Jest and Vitest give you that the Node built-in doesn't:
| Feature | Node built-in | Jest/Vitest |
|---|---|---|
| Basic test runner | Yes | Yes |
| async/await | Yes | Yes |
| Mocking | Yes (Node 22+) | More ergonomic |
| Snapshots | No | Yes |
| Coverage | Yes (experimental) | More polished |
| Watch mode | Yes | Yes |
| TypeScript support | No | Yes (with plugins) |
| JSDOM | No | Yes (optional) |
| Parallel execution | Yes | Yes |
| CI integration | Yes | Yes |
The Node built-in runner lacks TypeScript support natively (use Deno if you need that) and snapshot testing. If you use snapshots extensively, you'll want Jest/Vitest. If you don't, you may not need them.
Vitest is significantly faster than Jest for the same test suite and doesn't require a separate Babel/TypeScript compilation step if you're already using Vite. If you do want a test framework, Vitest is the current recommendation — but it's worth running your tests with node --test first and seeing whether the built-in runner is sufficient before adding the dependency.
The testing story for zero-build applications: pure logic tests in node --test, browser interaction tests in Playwright against your running application, Deno's test runner if you're on Deno. None of this requires Jest, Babel, webpack, or a test configuration file.
The next chapter covers deployment — where the zero-build story becomes genuinely simple.
Deploying Zero Build Apps
Static Hosting, CDNs, and the Absence of CI Complexity
The deployment story for a zero-build application is short. Pleasantly, surprisingly short. No compilation step means no build step in CI. No build step in CI means no waiting for webpack to warm up. No webpack warmup means your CI pipeline goes from "four minutes on a good day" to "the time it takes to rsync files to a server."
Let's walk through the actual options.
What You're Deploying
A zero-build frontend application is a directory. It looks like this:
public/
├── index.html
├── styles.css
├── app.js
├── router.js
├── api.js
├── components/
│ ├── header.js
│ ├── dashboard.js
│ └── user-card.js
└── assets/
├── logo.svg
└── favicon.ico
This directory goes to a static host. Any static host. The technology doesn't matter. The files don't need to be processed. Netlify, Vercel, Cloudflare Pages, GitHub Pages, S3 + CloudFront, a VPS with Caddy, a Raspberry Pi with nginx — all of these work, and all of them work the same way.
GitHub Pages
For projects already on GitHub, Pages is free and requires one YAML file:
# .github/workflows/deploy.yml
name: Deploy
on:
  push:
    branches: [main]

jobs:
  deploy:
    runs-on: ubuntu-latest
    permissions:
      contents: read
      pages: write
      id-token: write
    steps:
      - uses: actions/checkout@v4
      # No build step. Upload the source directly.
      - uses: actions/upload-pages-artifact@v3
        with:
          path: ./public
      - id: deployment
        uses: actions/deploy-pages@v4
If you're deploying from a repository root (no public/ subdirectory), change path: ./public to path: . and add a .nojekyll file to prevent GitHub Pages from trying to interpret your site as a Jekyll site.
Netlify
Netlify's default configuration assumes a build step. For zero-build apps, you override it:
Option 1: netlify.toml in the repository root:
[build]
publish = "public"
command = "" # No build command
[build.environment]
NODE_VERSION = "20" # If you have any Node scripts
[[headers]]
for = "/*"
[headers.values]
X-Content-Type-Options = "nosniff"
X-Frame-Options = "DENY"
Referrer-Policy = "strict-origin-when-cross-origin"
# SPA routing: all paths serve index.html
[[redirects]]
from = "/*"
to = "/index.html"
status = 200
Option 2: Drag and drop. Drop your public/ directory on app.netlify.com. It deploys instantly. This is not a joke — it's the fastest way to share a working prototype.
Netlify gives you HTTPS, global CDN distribution, preview deployments for pull requests, and form handling — all free on the starter plan, all requiring zero infrastructure configuration from you.
Vercel
Similar to Netlify. Create vercel.json:
{
"outputDirectory": "public",
"buildCommand": "",
"rewrites": [
{ "source": "/((?!api/).*)", "destination": "/index.html" }
],
"headers": [
{
"source": "/api/(.*)",
"headers": [
{ "key": "Cache-Control", "value": "no-store" }
]
},
{
"source": "/(.*)\\.js",
"headers": [
{ "key": "Content-Type", "value": "application/javascript" }
]
}
]
}
The rewrites rule handles SPA routing: any request that doesn't start with /api/ serves index.html, letting your client-side router take over.
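The client-side router that takes over is ordinary JavaScript. As a sketch of the piece the rewrite rule enables — resolving a deep-linked path like /tasks/42 after the host served index.html — here's a small pattern-matching function (the :param route-table syntax is an assumption for illustration, not a standard):

```javascript
// resolve-route.js — pure function, so it's also testable in Node
export function resolveRoute(pathname, routes) {
  for (const [pattern, view] of Object.entries(routes)) {
    const names = [];
    // Turn '/tasks/:id' into the regex ^/tasks/([^/]+)$
    const source = pattern.replace(/:([^/]+)/g, (_, name) => {
      names.push(name);
      return '([^/]+)';
    });
    const match = pathname.match(new RegExp(`^${source}$`));
    if (match) {
      const params = Object.fromEntries(
        names.map((name, i) => [name, match[i + 1]])
      );
      return { view, params };
    }
  }
  return { view: 'not-found', params: {} };
}

// resolveRoute('/tasks/42', { '/': 'home', '/tasks/:id': 'task-detail' })
//   → { view: 'task-detail', params: { id: '42' } }
```

Without the rewrite rule, refreshing /tasks/42 would 404 at the host before this code ever runs.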
Cloudflare Pages
Cloudflare Pages is fast — genuinely, measurably fast. Content is served from Cloudflare's network, which means most users get their content from a datacenter 10–50ms away.
In the Cloudflare Pages dashboard:
- Build command: (leave empty)
- Build output directory:
public
Or connect your GitHub repo and configure nothing else. Cloudflare detects that there's nothing to build.
For edge functions alongside static files:
// functions/api/tasks.js — Cloudflare Pages Function
export async function onRequestGet({ env }) {
const tasks = await env.TASKS_KV.list();
return Response.json(tasks);
}
export async function onRequestPost({ request, env }) {
const body = await request.json();
const id = crypto.randomUUID();
await env.TASKS_KV.put(id, JSON.stringify({ ...body, id }));
return Response.json({ id, ...body }, { status: 201 });
}
Cloudflare Pages Functions run at the edge, globally, with sub-millisecond cold starts. The function code doesn't require a build step — Cloudflare deploys the JavaScript directly.
S3 + CloudFront: The Enterprise Option
For organizations that need AWS, the zero-build deployment is simpler than it looks:
# GitHub Actions workflow for S3 + CloudFront
name: Deploy to S3
on:
  push:
    branches: [main]

jobs:
  deploy:
    runs-on: ubuntu-latest
    # Credentials at the job level — both AWS steps need them
    env:
      AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
      AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
      AWS_DEFAULT_REGION: us-east-1
    steps:
      - uses: actions/checkout@v4
      - name: Sync to S3
        run: |
          aws s3 sync ./public s3://${{ secrets.S3_BUCKET }} \
            --delete \
            --cache-control "max-age=31536000,immutable" \
            --exclude "*.html" \
            --exclude "importmap.json"
          # HTML and import maps: no caching (they reference versioned assets)
          aws s3 sync ./public s3://${{ secrets.S3_BUCKET }} \
            --exclude "*" \
            --include "*.html" \
            --include "importmap.json" \
            --cache-control "no-cache"
      - name: Invalidate CloudFront
        run: |
          aws cloudfront create-invalidation \
            --distribution-id ${{ secrets.CF_DISTRIBUTION_ID }} \
            --paths "/*.html" "/importmap.json"
The caching strategy: JavaScript files and assets are cached aggressively (max-age=31536000,immutable) because they don't change — if you update a file, you rename it or add a content hash. HTML and import maps are never cached because they reference everything else.
For a zero-build app without a bundler providing content hashing, a simple approach is adding a version query parameter to your JS imports in index.html:
<!-- index.html — update VERSION on each deploy -->
<script type="importmap">
{
"imports": {
"preact": "https://esm.sh/preact@10.22.1",
"preact/hooks": "https://esm.sh/preact@10.22.1/hooks"
}
}
</script>
<script type="module" src="./app.js?v=2024-11-15"></script>
Or automate it:
# In your deploy script, replace the version with the git commit hash
VERSION=$(git rev-parse --short HEAD)
sed -i "s/app\.js?v=[^\"']*/app.js?v=$VERSION/" public/index.html
Self-Hosted with Caddy
For complete control, Caddy is a zero-config web server with automatic HTTPS:
# Caddyfile
example.com {
	root * /var/www/myapp
	file_server

	# SPA routing — try files, fall back to index.html
	try_files {path} /index.html

	# Cache headers (named matchers — inline matchers must start with /)
	@immutable path *.js *.css *.svg *.ico *.woff2
	header @immutable Cache-Control "max-age=31536000, immutable"

	@html path *.html
	header @html Cache-Control "no-cache"

	# Security headers
	header {
		X-Content-Type-Options nosniff
		X-Frame-Options DENY
		Referrer-Policy strict-origin-when-cross-origin
	}
}
sudo caddy run # Automatic HTTPS, HTTP/2, and HTTP/3
Caddy handles TLS certificate renewal automatically. HTTP/2 means modules load efficiently. The Caddyfile above is the entire server configuration — no nginx.conf with 200 lines of best-practices boilerplate.
Deployment to a VPS:
# On the server (fresh Ubuntu):
sudo apt install -y caddy
sudo mkdir -p /var/www/myapp
# From your machine:
rsync -av --delete ./public/ user@server:/var/www/myapp/
Total configuration files: one Caddyfile. Total dependencies on the server: Caddy (a single binary).
Content Security Policy for Native ESM
A zero-build application using CDN imports needs a Content Security Policy that allows those CDN origins:
<meta http-equiv="Content-Security-Policy" content="
default-src 'self';
script-src 'self' https://esm.sh 'wasm-unsafe-eval';
connect-src 'self' https://api.example.com;
style-src 'self' 'unsafe-inline';
img-src 'self' data: https:;
">
Or in HTTP headers (preferred):
Content-Security-Policy: default-src 'self'; script-src 'self' https://esm.sh; connect-src 'self' https://api.example.com;
If you self-host all your dependencies, your CSP is simpler:
Content-Security-Policy: default-src 'self';
This is an argument for self-hosting your vendor files: better security posture, simpler CSP, no CDN dependency. The trade-off is that you maintain the files.
MIME Types: The One Thing You Have to Check
Browsers require JavaScript modules to be served with a JavaScript MIME type — text/javascript (the current standard) or application/javascript. Most web servers do this correctly by default. The one case where it goes wrong: object storage (S3, R2, GCS) where you uploaded the files without explicit MIME types.
If your modules fail to load with "incorrect MIME type" errors, check your server's headers:
curl -I https://example.com/app.js | grep content-type
# Should see a JavaScript MIME type: text/javascript or application/javascript
S3 typically gets this right if you upload via the console or aws CLI. If you're using a custom sync script, add explicit content type:
aws s3 cp app.js s3://bucket/app.js \
--content-type "application/javascript"
The CI/CD Pipeline for Zero-Build Apps
Here's a complete CI/CD pipeline for a zero-build application — frontend and a Deno backend:
name: CI/CD
on:
  push:
    branches: [main]
  pull_request:

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Run frontend tests
        run: node --test 'src/**/*.test.js'
      - name: Setup Deno
        uses: denoland/setup-deno@v1
        with:
          deno-version: v1.x
      - name: Run backend tests
        run: deno test --allow-net api/

  deploy:
    needs: test
    if: github.ref == 'refs/heads/main'
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Setup Deno
        uses: denoland/setup-deno@v1
        with:
          deno-version: v1.x
      - name: Compile backend binary
        run: |
          deno compile \
            --allow-net --allow-read --allow-env \
            --target x86_64-unknown-linux-gnu \
            --output api-server \
            api/main.ts
      - name: Deploy frontend to S3
        run: aws s3 sync ./public s3://${{ secrets.S3_BUCKET }} --delete
        env:
          AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
          AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          AWS_DEFAULT_REGION: us-east-1
      - name: Deploy API binary to server
        run: |
          scp api-server ${{ secrets.SERVER_USER }}@${{ secrets.SERVER_HOST }}:~/
          ssh ${{ secrets.SERVER_USER }}@${{ secrets.SERVER_HOST }} \
            'pkill api-server || true; nohup ./api-server >/dev/null 2>&1 &'
This pipeline:
- Tests the frontend with Node's built-in test runner
- Tests the backend with Deno's test runner
- Compiles the backend to a single Linux binary
- Syncs the frontend to S3
- Deploys the compiled binary via SSH
Total build tools: node --test (built-in), deno test (built-in), deno compile (built-in), aws s3 sync (AWS CLI). No webpack, no Vite, no Babel, no Rollup, no PostCSS. The CI runs in about 90 seconds.
Zero-build deployment is not a compromise. It's often better than build-heavy deployment: faster CI, simpler configuration, fewer failure modes, and deployments that are trivially reversible (the previous artifact still exists; swap it back). The complexity of a build pipeline is technical debt. Every step is something that can break, something that needs maintenance, something that a new team member needs to understand.
The deploy pipeline you don't have is the one that never breaks at 4pm on a Friday.
When You Actually Need a Build System
Be Honest With Yourself
Every book advocating for a simpler approach risks becoming a religion. The zero-build approach is not a religion. It's a tool appropriate for specific contexts, inappropriate for others, and the whole point is to make that distinction consciously rather than reflexively.
This chapter is where we draw the line honestly.
When You Actually Need a Build System
1. Your TypeScript Uses TypeScript-Only Features
JSDoc types give you type inference in VS Code and type checking with tsc --noEmit without a build step. But JSDoc doesn't support every TypeScript construct:
- Enums: TypeScript enum doesn't exist in JavaScript. JSDoc can't express them.
- Namespaces: The namespace keyword is TypeScript-only.
- Decorators (the TypeScript/experimental form): Not yet in JavaScript.
- declare blocks: Ambient declarations for typing external things.
- Parameter properties: constructor(private name: string) — syntactic sugar that doesn't translate to JavaScript.
If you use these features, you need a TypeScript compilation step. The alternatives are:
- Use JSDoc where possible and accept the limitations
- Use Deno, which runs TypeScript natively
- Use Node 22's --experimental-strip-types for simple cases
- Use a build step for TypeScript and accept that trade-off
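To make the JSDoc route concrete, here is an illustrative sketch (the User type and greet function are invented for this example): the annotations below give tsc --noEmit and VS Code full type information over plain JavaScript, with a @typedef standing in for a TypeScript interface.

```javascript
// Illustrative JSDoc typing — plain JavaScript that tsc --noEmit can check.
// The @typedef plays the role of a TypeScript interface.

/**
 * @typedef {Object} User
 * @property {string} name
 * @property {number} age
 */

/**
 * @param {User} user
 * @param {string} greeting
 * @returns {string}
 */
function greet(user, greeting) {
  return `${greeting}, ${user.name} (${user.age})`;
}

console.log(greet({ name: 'Ada', age: 36 }, 'Hello'));
```

Pass a wrongly-typed argument — greet(42, 'Hi') — and tsc --noEmit flags it, without any compilation output.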
2. Your Application Has a Very Large Module Graph
This is the performance argument from Chapter 3, stated plainly.
If your application has 200+ modules in a graph several levels deep, the module loading waterfall will produce a measurably worse initial load time than a bundled build. The cascade of round trips adds latency that modulepreload mitigates but doesn't fully solve.
The threshold isn't precise, but a rough heuristic:
- < 50 modules: Native ESM is fine. Waterfall is not perceptible on any reasonable connection.
- 50–150 modules: With
modulepreloadfor the critical path, native ESM is acceptable. - 150+ modules: The waterfall is real. Bundling the initial load path makes sense.
The right question: who are your users and what are their connections? An internal tool used over a corporate LAN is different from a consumer app serving mobile users on 4G.
If you're in the "might need bundling" range, consider a hybrid approach: bundle only for production, develop unbundled. This is exactly what Vite does — Rollup-based bundling for production, native ESM dev server. You get both. The build step is confined to CI, not to your development loop.
3. You Need Tree Shaking for Bundle Size Reasons
Tree shaking requires static analysis of the module graph to determine which exports are actually used. Bundlers do this. Browsers don't.
If you import a large library and only use a fraction of it, you're shipping the whole thing without a bundler:
// Without tree shaking, you get all of lodash-es (~130KB minified)
import { debounce } from 'lodash-es';
// With tree shaking in a bundler, you get just debounce (~2KB)
The mitigating factors:
- CDNs like esm.sh do server-side tree shaking for some packages
- Modern browsers cache aggressively — if lodash-es is cached from another visit, the cost is near-zero
- 130KB minified is ~36KB gzipped — noticeable but not catastrophic for many applications
If you're serving millions of users and every kilobyte is measurable in conversion rates, you want tree shaking. If you're building an internal tool with 50 users on a fast network, this is not your bottleneck.
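One further mitigation, offered here as a sketch rather than a universal answer: many of the utilities that drag in a large library are small enough to inline. A dependency-free debounce, for instance, covers the common case in a few lines (this is not lodash's exact semantics — no leading/maxWait options — just the core behavior):

```javascript
// A dependency-free debounce — the "just inline it" mitigation.
// Simplified relative to lodash: trailing-edge only, no options object.
function debounce(fn, wait) {
  let timer;
  return function (...args) {
    clearTimeout(timer);
    timer = setTimeout(() => fn.apply(this, args), wait);
  };
}

let calls = 0;
const onResize = debounce(() => calls++, 50);
onResize(); onResize(); onResize(); // three rapid calls...
// ...collapse into at most one invocation after 50ms of quiet
```

If debounce is the only thing you need from lodash-es, this removes the question entirely.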
4. You're Using JSX
JSX is not JavaScript. <Component prop="value" /> is a syntax error in a browser. To use JSX-based frameworks (React, Preact with JSX, Solid), you need a compilation step that transforms JSX to createElement calls.
The alternatives that don't require compilation:
- HTM (from the Preact team): html`<${Component} />` — tagged template literals with JSX-like syntax
- Preact with h function calls directly: h(Component, { prop: 'value' })
- Lit for web components: no JSX, template literals, no compilation required
For most applications that have adopted React for pragmatic reasons rather than JSX love, Preact + HTM is a viable replacement with an almost identical API and zero build step.
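To make the no-JSX shape concrete, here is a toy sketch. The h below is a stand-in for Preact's real h — it builds plain objects rather than real vnodes — but the calling pattern is exactly what JSX compiles away to:

```javascript
// Toy stand-in for a hyperscript-style h() — NOT Preact's implementation,
// just enough to show the function-call form that replaces JSX.
function h(type, props, ...children) {
  return { type, props: props ?? {}, children };
}

// <Badge label="new" /> in JSX becomes a plain function call:
function Badge({ label }) {
  return h('span', { class: 'badge' }, label);
}

// <div id="root"><Badge label="new" /></div> becomes:
const tree = h('div', { id: 'root' }, h(Badge, { label: 'new' }));
```

Nothing here is syntax the browser doesn't understand, which is the whole point.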
If your team is deeply invested in the React ecosystem (React DevTools, React-specific hooks and patterns, large numbers of React-specific npm packages), switching is a real cost. Maintain your React application, use Vite, and accept the build step. That's the honest answer.
5. You Need Dead Code Elimination for Security
If your codebase contains development-only code, admin-only code, or feature-flag-disabled code that shouldn't ship to production users for security reasons, you need a build step that removes it.
The hostname-check trick (if (location.hostname === 'localhost')) works for developer tooling but is unreliable for security-sensitive code. A determined user can change their host header or set an arbitrary hostname. Code that's shipped to the client is always inspectable.
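The trick looks like the sketch below (enableDebugPanel is a hypothetical function, not from the case studies). Note the key property: the gated branch still ships inside the file, readable in DevTools, even when it never executes.

```javascript
// Dev-only gating by hostname — fine for developer tooling, useless for
// security: this code is delivered to every client regardless of whether
// the branch runs. Anyone can read it.
const isLocalDev = typeof location !== 'undefined' &&
  ['localhost', '127.0.0.1'].includes(location.hostname);

if (isLocalDev) {
  // enableDebugPanel();  // hypothetical dev-only helper
}
```

For convenience features, that's acceptable. For anything sensitive, only code that was never sent can be considered private.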
If you have code that reveals business logic, internal admin capabilities, or security-sensitive behavior, that code needs to not exist in the production build — which means a build step to eliminate it.
6. You're Using CSS Preprocessors for Loops/Mixins
The CSS chapter covered what native CSS can do. Here's what it still can't do:
// Sass — no native CSS equivalent
@mixin truncate($lines) {
overflow: hidden;
display: -webkit-box;
-webkit-line-clamp: $lines;
-webkit-box-orient: vertical;
}
@for $i from 1 through 12 {
.col-#{$i} {
grid-column: span #{$i};
}
}
Generating utility classes programmatically, parameterized mixins for shared patterns, complex conditionals in CSS — Sass still does these. Native CSS custom properties can approximate some of this, but not all.
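If you want generated utility classes without adopting Sass, one hedged middle ground is generating them with a few lines of JavaScript — run once as a tiny script, or fed to a constructable stylesheet at runtime. A sketch of the @for loop above:

```javascript
// Equivalent of the Sass @for loop, generated with plain JavaScript.
// Could be a one-off script that writes a .css file, or could feed
// a constructable stylesheet (new CSSStyleSheet()) in the browser.
const columns = Array.from({ length: 12 }, (_, i) =>
  `.col-${i + 1} { grid-column: span ${i + 1}; }`
);
const css = columns.join('\n');
```

Whether that's simpler than a Sass dependency is a judgment call; for one loop it usually is, for a whole utility framework it usually isn't.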
If your CSS architecture depends on generated utility classes (you're building a design system, or using Tailwind), PostCSS or Sass is the right tool.
If your CSS architecture is component-scoped styles with design tokens, native CSS is likely sufficient.
7. You Need to Support Older Browsers
The zero-build approach targets modern browsers. "Modern" in 2024 means:
- Chrome/Edge 89+
- Firefox 108+
- Safari 16.4+
These browsers are a combined 95%+ of global web traffic. The remaining percentage includes:
- Internet Explorer: Extinct (Microsoft ended support June 2022)
- Chrome/Firefox/Safari versions 3+ years old: Rare but possible in enterprise, embedded, and certain regional markets
If your application has requirements to support browsers outside this range, you need transpilation. tsc, Babel, or SWC to compile down to older JavaScript feature sets. This is a real requirement for some applications and not something to dismiss.
The question to ask your stakeholders is not "can we drop IE11 support?" (IE11 is dead). It's "what's the actual browser usage distribution for our users?" Look at your analytics. If 2% of users are on iOS 14 and your application doesn't work there, that's a problem. If 0.1% are on a five-year-old Samsung browser, that's a judgment call.
8. You're Working on a Large Team with Strict Consistency Requirements
Build tools enforce consistency. A formatter that runs as part of the build fails the build for inconsistent code. A TypeScript compilation step fails for type errors. ESLint in the build pipeline blocks merges for lint failures.
You can achieve the same thing without a build pipeline — run formatters and linters in CI without bundling, use git hooks — but the organizational discipline required is real. Large teams with frequent code churn benefit from tools that make "broken code" hard to commit.
This isn't a technical reason to use a build system. It's an organizational one. That's still a valid reason.
The Spectrum of Build Complexity
Not all build systems are equal. If you decide you need some build tooling, prefer the minimum:
Level 0: No build — Pure native, as covered in this book. Appropriate for most projects.
Level 1: Single-purpose tool — tsc --noEmit for type checking, deno fmt for formatting, or csso for CSS minification. One command, one purpose, no config.
Level 2: Type stripping — node --experimental-strip-types or Deno for TypeScript without full compilation. Still no bundling.
Level 3: Light bundling — esbuild for production bundling only. Sub-second build time, zero configuration, produces optimized bundles. Keep native ESM in development.
Level 4: Vite — Native ESM dev server, Rollup production builds. The sweet spot for applications that genuinely need bundling.
Level 5: Full webpack — Full configurability, plugin ecosystem, custom transforms. Worth the complexity only when you need something Vite can't provide.
Most projects that think they need Level 5 are actually at Level 1 or 2.
The Self-Audit Questions
Before adding a build step, ask:
- What specific problem does this build step solve? If the answer is vague ("better developer experience"), investigate whether that problem is real.
- How many users will notice the difference? Tree shaking matters for millions of users. It doesn't matter for 50 internal users.
- What's the maintenance cost? Every build tool is a dependency that can break. When webpack releases a breaking major version, someone on your team spends time on the upgrade.
- Could you achieve this differently? JSX → HTM. Sass variables → CSS custom properties. Complex TypeScript → JSDoc + simpler types.
- Is this for development or production? Development tooling (type checking, formatting) has no production cost. Production tooling (bundling, tree shaking) affects your deployment pipeline.
The honest conclusion: build systems are necessary for a meaningful fraction of web applications — specifically those with large module graphs, JSX requirements, significant TypeScript feature usage, or genuine performance constraints from bundle size.
They're unnecessary for a different, also meaningful fraction — internal tools, prototypes, small-to-medium consumer applications, documentation sites, and applications where the bottleneck isn't JavaScript bundle size.
The failure mode isn't using a build system. It's using a build system without asking whether you need one.
Real Projects, Zero Build
Case Studies That Actually Shipped
Theory is useful. Working code is more useful. This chapter covers the kinds of real applications that benefit from the zero-build approach, with enough specificity to understand the decisions made and why they held up.
The examples here are architectural patterns drawn from real categories of applications. The code runs. The trade-offs described are real.
Case Study 1: Internal Analytics Dashboard
The project: A dashboard showing business metrics for a 20-person company. Live data, multiple chart types, filterable date ranges, exportable reports. Used by 15 people, all on modern browsers, on a corporate network.
Why zero-build made sense: No external users, no IE11 requirement, small team, no npm package dependencies beyond charting.
Architecture:
dashboard/
├── index.html
├── styles.css
├── app.js
├── api.js
├── router.js
├── components/
│ ├── chart.js # Wraps a charting library
│ ├── table.js # Data table with sorting/filtering
│ ├── date-picker.js # Date range selector using native <input type="date">
│ └── export.js # CSV export
└── pages/
├── overview.js
├── revenue.js
└── engagement.js
The charting library — Recharts was considered, but it requires React. Instead: Chart.js ships as a proper ES module and works natively.
<script type="importmap">
{
"imports": {
"chart.js": "https://esm.sh/chart.js@4.4.3",
"chart.js/auto": "https://esm.sh/chart.js@4.4.3/auto"
}
}
</script>
// components/chart.js
import Chart from 'chart.js/auto';
export function createLineChart(canvas, { labels, datasets }) {
return new Chart(canvas, {
type: 'line',
data: { labels, datasets },
options: {
responsive: true,
plugins: {
legend: { position: 'top' },
},
scales: {
y: { beginAtZero: false },
},
},
});
}
export function updateChart(chart, { labels, datasets }) {
chart.data.labels = labels;
chart.data.datasets = datasets;
chart.update();
}
The data fetching layer talks to a Go API server that queries the database directly. The frontend imports the chart component when the relevant page loads:
// pages/revenue.js
import { createLineChart, updateChart } from '../components/chart.js';
import { api } from '../api.js';
export async function RevenueView(container) {
container.innerHTML = `
<div class="page-header">
<h1>Revenue</h1>
<input type="date" id="start-date" class="date-input">
<input type="date" id="end-date" class="date-input">
</div>
<canvas id="revenue-chart"></canvas>
`;
const canvas = container.querySelector('#revenue-chart');
const startInput = container.querySelector('#start-date');
const endInput = container.querySelector('#end-date');
// Default: last 30 days
const end = new Date();
const start = new Date(Date.now() - 30 * 86400000);
startInput.value = start.toISOString().slice(0, 10);
endInput.value = end.toISOString().slice(0, 10);
let chart;
async function refresh() {
const data = await api.getRevenue({
start: startInput.value,
end: endInput.value,
});
if (!chart) {
chart = createLineChart(canvas, data);
} else {
updateChart(chart, data);
}
}
startInput.addEventListener('change', refresh);
endInput.addEventListener('change', refresh);
await refresh();
}
What worked: Development was unusually fast. No build tooling to configure. Every change reloaded in the browser with a standard refresh. DevTools showed the actual source files, making debugging trivial.
What was annoying: Sharing mock data between the frontend and Go API required duplicating types — TypeScript would have helped with type-safe API responses. The team ultimately added JSDoc types and found them sufficient.
Outcome: Shipped in three weeks. Has been running for 18 months. The codebase is 2,800 lines of JavaScript. Zero npm dependencies. The CI pipeline runs in 25 seconds (tests + rsync to the server).
Case Study 2: Documentation Site with Live Examples
The project: A documentation site for an open-source library. Static pages, searchable, with interactive code examples that users can edit and run.
Why zero-build made sense: Documentation sites are inherently content-heavy and read-only. The "interactive examples" requirement is the interesting constraint.
Architecture: Server-side rendered HTML for the main content (fast, SEO-friendly), with ES modules loaded on the client for the interactive features.
<!-- The page HTML is pre-rendered markdown -->
<article class="docs-content">
<h1>Getting Started</h1>
<p>Install the library...</p>
<!-- Interactive example: enhanced by JavaScript, readable without it -->
<div class="example-container" data-code="example-1">
<pre><code id="code-example-1">
import { createStore } from 'my-library';
const store = createStore({ count: 0 });
console.log(store.get('count')); // 0
</code></pre>
<div class="example-output" aria-live="polite"></div>
<button class="run-button" type="button">Run</button>
</div>
</article>
// editor.js — loaded as a module, enhances the static content
export function initExamples() {
const containers = document.querySelectorAll('.example-container');
for (const container of containers) {
const code = container.querySelector('code');
const output = container.querySelector('.example-output');
const button = container.querySelector('.run-button');
// Make code editable
code.contentEditable = 'true';
code.spellcheck = false;
button.addEventListener('click', async () => {
const userCode = code.textContent;
output.textContent = '';
try {
// Run the code in a sandboxed blob URL
const blob = new Blob([
`const console = {
log: (...args) => self.postMessage({ type: 'log', args }),
error: (...args) => self.postMessage({ type: 'error', args })
};\n` + userCode
], { type: 'application/javascript' });
const url = URL.createObjectURL(blob);
const worker = new Worker(url, { type: 'module' });
worker.onmessage = ({ data }) => {
const line = document.createElement('div');
line.className = data.type === 'error' ? 'output-error' : 'output-line';
line.textContent = data.args.join(' ');
output.appendChild(line);
};
worker.onerror = (e) => {
const line = document.createElement('div');
line.className = 'output-error';
line.textContent = e.message;
output.appendChild(line);
worker.terminate();
};
// Clean up after 5 seconds
setTimeout(() => {
worker.terminate();
URL.revokeObjectURL(url);
}, 5000);
} catch (e) {
output.textContent = e.message;
}
});
}
}
The code runner uses Web Workers and Blob URLs to execute user-submitted code in an isolated context. No eval() in the main thread. No server round-trip. The library being documented is itself available via import map.
What worked: The progressive enhancement approach meant the documentation was readable and useful before JavaScript loaded. The interactive examples added genuine value. Zero-build meant contributors could edit documentation locally with a static file server — no toolchain to install.
What was annoying: The code editor (a contentEditable div) lacks syntax highlighting. A proper editor like CodeMirror would require a bundled dependency. The team decided that basic highlighting via CSS was sufficient for their use case.
Outcome: The site deploys to GitHub Pages via a workflow that runs in 40 seconds. Contributors open PRs, the preview deploys automatically, and reviewers can see the result without installing anything.
Case Study 3: Real-Time Collaboration Tool
The project: A shared task board for a remote team. Real-time updates via WebSocket, drag and drop, multiple users editing simultaneously.
Why zero-build made sense: The real-time requirement was served entirely by the browser's WebSocket API. The drag-and-drop requirement was served by the HTML Drag and Drop API. The remaining UI was modest enough not to need a framework.
The WebSocket client:
// realtime.js
export function createRealtimeConnection(boardId) {
const protocol = location.protocol === 'https:' ? 'wss:' : 'ws:';
const ws = new WebSocket(`${protocol}//${location.host}/ws/boards/${boardId}`);
const listeners = new Map();
ws.onmessage = (event) => {
const message = JSON.parse(event.data);
const handlers = listeners.get(message.type) ?? [];
for (const handler of handlers) {
handler(message.payload);
}
};
return {
on(type, handler) {
if (!listeners.has(type)) listeners.set(type, []);
listeners.get(type).push(handler);
return () => {
const handlers = listeners.get(type);
const index = handlers.indexOf(handler);
if (index !== -1) handlers.splice(index, 1);
};
},
send(type, payload) {
ws.send(JSON.stringify({ type, payload }));
},
close() {
ws.close();
},
};
}
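The listener registry at the core of this client can be exercised without a server. The sketch below stubs out the socket entirely and keeps only the on()/unsubscribe contract, which is where the subtle bugs (leaked handlers, double dispatch) tend to live:

```javascript
// The same on()/unsubscribe contract as the realtime client, with the
// WebSocket stubbed out so the dispatch logic is testable in isolation.
function createEmitter() {
  const listeners = new Map();
  return {
    on(type, handler) {
      if (!listeners.has(type)) listeners.set(type, []);
      listeners.get(type).push(handler);
      return () => {
        const handlers = listeners.get(type);
        const index = handlers.indexOf(handler);
        if (index !== -1) handlers.splice(index, 1);
      };
    },
    emit(type, payload) {
      for (const handler of listeners.get(type) ?? []) handler(payload);
    },
  };
}

const emitter = createEmitter();
const seen = [];
const off = emitter.on('card:moved', (p) => seen.push(p));
emitter.emit('card:moved', { cardId: 'c1' });
off(); // unsubscribe
emitter.emit('card:moved', { cardId: 'c2' }); // no longer delivered
```

The message names here are illustrative; the structure is what matters.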
Native drag and drop:
// board.js
export function initDragAndDrop(board, onChange) {
board.addEventListener('dragstart', (e) => {
const card = e.target.closest('[data-card-id]');
if (!card) return;
e.dataTransfer.setData('text/plain', card.dataset.cardId);
e.dataTransfer.effectAllowed = 'move';
card.classList.add('dragging');
});
board.addEventListener('dragend', (e) => {
e.target.closest('[data-card-id]')?.classList.remove('dragging');
});
board.addEventListener('dragover', (e) => {
e.preventDefault();
e.dataTransfer.dropEffect = 'move';
const column = e.target.closest('[data-column-id]');
column?.classList.add('drag-over');
});
board.addEventListener('dragleave', (e) => {
const column = e.target.closest('[data-column-id]');
if (column && !column.contains(e.relatedTarget)) {
column.classList.remove('drag-over');
}
});
board.addEventListener('drop', (e) => {
e.preventDefault();
const cardId = e.dataTransfer.getData('text/plain');
const column = e.target.closest('[data-column-id]');
if (!column) return;
column.classList.remove('drag-over');
onChange({ cardId, columnId: column.dataset.columnId });
});
}
The native Drag and Drop API is verbose compared to a library like react-beautiful-dnd, but it works without any dependencies for mouse-driven desktop use, and the verbosity is familiar once you've written it once. Touch is another matter, as the team discovered below.
The server: A Deno application running Hono with WebSocket support:
// main.ts
import { Hono } from "jsr:@hono/hono";
import { serveStatic, upgradeWebSocket } from "jsr:@hono/hono/deno";
const app = new Hono();
const connections = new Map<string, Set<WebSocket>>();
app.get('/ws/boards/:boardId', upgradeWebSocket((c) => {
const boardId = c.req.param('boardId');
return {
onOpen(_, ws) {
if (!connections.has(boardId)) connections.set(boardId, new Set());
connections.get(boardId)!.add(ws.raw!);
},
onMessage(event, ws) {
// Broadcast to all other connections on this board
const message = event.data.toString();
const boardConnections = connections.get(boardId) ?? new Set();
for (const conn of boardConnections) {
if (conn !== ws.raw && conn.readyState === WebSocket.OPEN) {
conn.send(message);
}
}
},
onClose(_, ws) {
connections.get(boardId)?.delete(ws.raw!);
},
};
}));
// Serve static files
app.use('/*', serveStatic({ root: './public' }));
Deno.serve({ port: 8080 }, app.fetch);
What worked: Real-time collaboration in a few hundred lines of code. WebSocket is simple. The native Drag and Drop API worked well for desktop users.
What was annoying: Mobile drag and drop is painful with native APIs. The team added Sortable.js via import map for mobile, which is a reasonable trade-off.
Outcome: The server is a compiled Deno binary (~80MB) on a $6/month VPS. The frontend is static files on Cloudflare's CDN. Deployment is pushing a binary to the server and syncing the static directory. Total infrastructure: $6/month.
Case Study 4: A Build Tool for a Zero-Build Shop
Here's a meta-example that the universe apparently demanded be included.
A small agency builds zero-build web applications. They have a standard setup that they clone for each new project: import map, CSS tokens, component patterns. They wanted to automate the project scaffolding.
The tool itself is a Deno script:
// create-project.ts
import { parseArgs } from "jsr:@std/cli/parse-args";
import { exists } from "jsr:@std/fs";
import { join } from "jsr:@std/path";
const args = parseArgs(Deno.args, {
string: ['name', 'template'],
default: { template: 'basic' },
});
const projectName = args.name ?? args._[0]?.toString();
if (!projectName) {
console.error('Usage: deno run -A create-project.ts --name my-project');
Deno.exit(1);
}
const projectDir = join(Deno.cwd(), projectName);
if (await exists(projectDir)) {
console.error(`Directory ${projectName} already exists`);
Deno.exit(1);
}
await Deno.mkdir(projectDir, { recursive: true });
await Deno.mkdir(join(projectDir, 'src'));
await Deno.mkdir(join(projectDir, 'public'));
const indexHtml = `<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>${projectName}</title>
<link rel="stylesheet" href="/styles.css">
<script type="importmap">
{
"imports": {
"preact": "https://esm.sh/preact@10.22.1",
"preact/hooks": "https://esm.sh/preact@10.22.1/hooks",
"htm/preact": "https://esm.sh/htm@3.1.1/preact"
}
}
</script>
</head>
<body>
<div id="app"></div>
<script type="module" src="/src/app.js"></script>
</body>
</html>`;
await Deno.writeTextFile(join(projectDir, 'public', 'index.html'), indexHtml);
const appJs = `import { html } from 'htm/preact';
import { render } from 'preact';
import { useState } from 'preact/hooks';
function App() {
const [count, setCount] = useState(0);
return html\`
<main>
<h1>${projectName}</h1>
<p>Count: \${count}</p>
<button onClick=\${() => setCount(c => c + 1)}>Increment</button>
</main>
\`;
}
render(html\`<\${App} />\`, document.getElementById('app'));
`;
await Deno.writeTextFile(join(projectDir, 'src', 'app.js'), appJs);
console.log(`Created ${projectName}`);
console.log(`cd ${projectName} && deno run --allow-net --allow-read jsr:@std/http/file-server public`);
# Install once
deno install -g -A create-project.ts
# Use
create-project my-new-app
cd my-new-app
deno run --allow-net --allow-read jsr:@std/http/file-server public
The tool that creates zero-build projects is itself a zero-build tool — a Deno script with no npm dependencies, no compilation, no build step.
What These Projects Have in Common
Looking across these examples:
The applications that worked well without bundling shared these properties:
- Known, bounded user bases (internal tools, small consumer apps)
- Modern browser requirements (no IE11, often internal-only)
- Modest dependency requirements (2–10 external libraries, not 50)
- Clear separation between data-fetching logic and UI rendering
- Teams comfortable reading browser APIs documentation
The trade-offs that consistently appeared:
- TypeScript types via JSDoc is workable but more verbose than TypeScript syntax
- Mobile edge cases (touch events, drag and drop) sometimes needed libraries
- Native form styling is limited in some browsers
- The lack of a module bundler means node_modules-dependent packages require CDN adaptation
What nobody missed:
- Waiting for webpack to compile
- Debugging source maps that didn't match
- Webpack configuration files
- npm install adding 200MB of node_modules for a dependency tree nobody audited
Zero-build is not a constraint. It's a starting position that eliminates a category of complexity upfront. Most projects that start there stay there. The ones that outgrow it — because they hit the module graph size limit, because they need TypeScript features, because their dependency tree requires bundling — have a clear upgrade path.
Start without the build step. Add it when you can point to the specific problem it solves.
The Build System You Keep Is the One You Can Justify
A developer on a message board once described debugging a production issue that turned out to be caused by their build tool generating slightly different output depending on which CI runner picked up the job. The fix was to add deterministic build flags. The fix for the fix was to update their build tool version. The update broke three other things. They shipped the production fix two days after finding the root cause.
The root cause, by the way, was a null check.
This is the hidden cost of build systems: not just the configuration, not just the learning curve, not just the upgrade cycles — but the surface area for complexity-induced failures that live in the gap between your source code and what actually runs.
The zero-build approach eliminates that gap. What you write is what runs. The browser executes your JavaScript files. The network delivers your CSS. There is no intermediate representation, no generated artifact, no translation layer that could introduce subtle discrepancies between development and production.
What This Book Covered
The case laid out across these chapters:
The browser's module system is real and complete. Native ES modules with static imports, dynamic imports, top-level await, import.meta.url, and live bindings — this is a full module system that has worked in every modern browser since 2017. You have been compiling your modules for a reason that stopped being true six years ago.
Import maps solve the bare specifier problem. One JSON blob in your HTML gives you the same bare-specifier ergonomics as npm, without npm. CDNs like esm.sh and jspm.io serve every major npm package as a proper ES module. You can have import { format } from 'date-fns' in the browser without a build step.
Deno runs TypeScript natively. Not "compiles TypeScript first" — runs it, directly, as source, with the Deno runtime handling type stripping transparently. The development environment and production environment are identical because they use the same runtime and the same source files.
Modern CSS makes preprocessors optional for most applications. Custom properties are runtime values that inherit and cascade, not compile-time constants. CSS nesting is in the spec and in every modern browser. @layer gives you explicit cascade control. Container queries make responsive components work correctly. The features that justified Sass now exist natively.
HTML is an underused platform. <dialog> has focus trapping and an accessible backdrop. The Popover API handles tooltips and dropdowns. Native form validation integrates with screen readers. Web components with Shadow DOM provide real encapsulation. Custom elements work in every modern browser. The amount of JavaScript that gets written to replace things HTML provides is significant and avoidable.
Go, Deno, and Bun compile to single binaries. Deployments that are one file, containing the runtime and your application code, deployable to any server without a language runtime installed. The operational simplicity of this is real.
Node's built-in test runner works. node --test, with no configuration, runs your test files, reports results, handles async, provides mocking (Node 22+), and produces TAP output for CI. Most projects don't need Jest.
Zero-build deployment is simpler than build-heavy deployment. A directory of files. Netlify, GitHub Pages, Cloudflare Pages, S3, a VPS with Caddy. No build step in CI means no build step that can fail in CI.
The Mindset Shift
The practical knowledge in this book matters less than the underlying habit it's trying to install: ask whether the platform already does the thing before reaching for a tool that does the thing.
This sounds obvious. It isn't practiced consistently. The ecosystem's defaults — Create React App, Next.js, the proliferation of scaffolding tools — start with the assumption that you need a toolchain and work backward from there. The defaults are sticky. The defaults become the baseline. The baseline stops being questioned.
Questioning the baseline: that's the movement.
Not every application benefits from the zero-build approach. Chapter 11 covered the honest exceptions. The goal isn't to never use a build system. The goal is to use a build system because you need it, with a clear understanding of what problem it solves, rather than because it was in the starter template you copied five years ago.
A Test For Your Current Stack
Open a terminal in your current project. Try to answer these questions:
- Which build steps in your pipeline could you remove without affecting the production application?
- If you wanted to add a new developer to the team, how long would it take them to get from git clone to a running development environment?
- When did you last review your package.json dependencies? Which ones do you actually use?
- Is your webpack config (or Vite config, or whatever config) maintained by the person who wrote it, or is it now an artifact that nobody wants to touch?
- If your build tool released a major version tomorrow, what would it cost to upgrade?
The answers to these questions are diagnostic. If they're all "fine," then your build system is well-managed and proportionate to your needs. If any of them produce a wince, you know where to look.
Where the Platform Is Going
The direction of the web platform is clearly toward reducing the build-vs-no-build distinction:
CSS Houdini exposes the CSS rendering pipeline to developers — paint and layout worklets, plus typed custom properties the engine can animate natively — without JavaScript running on the main thread.
The CSS @scope rule and CSS nesting together approach the encapsulation that required Shadow DOM or CSS Modules.
The module declarations proposal would allow declaring multiple modules in a single file — reducing the HTTP request count without bundling.
Import maps with integrity would allow SRI hashing for import map entries, closing the security gap for CDN-hosted dependencies.
Node's native TypeScript support is shipping progressively. By the time you read this, --experimental-strip-types may be stable.
Declarative Shadow DOM is making server-side rendering of web components practical, which means the entire web component model becomes server-rendering-friendly.
The gap between "what the platform can do" and "what the ecosystem assumes requires tools" is closing. It closed significantly between 2017 and 2024. The trend continues.
The Last Thing
Build systems are not the problem. Configuration files are not the enemy. The problem is cargo-cult adoption — using tools because they're familiar, because they were in the template, because they're what you learned first, without asking whether the problem they solve is the problem you have.
The zero-build approach is, at its core, a practice of asking that question. Every time a new tool appears in your stack, ask: what does this solve? Is that problem real in my context? Is this the simplest solution to that problem?
Sometimes the answer is "yes, we need this tool." Sometimes the answer is "the browser already does this." The important thing is that you asked.
You've been carrying build tools for longer than some of them were necessary. You don't have to put them all down. But now you know which ones you're keeping by choice, and which ones you were keeping out of habit.
That distinction is the whole point.
The browser can load ES modules. It has for years. What else have you been compiling that you didn't have to?