A Brief History of KM

Knowledge management did not begin with a consulting firm's PowerPoint deck in 1995, though you could be forgiven for thinking so. The impulse to capture, organize, and transmit what humans know is as old as civilization itself. What has changed — repeatedly, dramatically, and sometimes disastrously — is the technology, the institutional context, and the prevailing theory about what knowledge is and who owns it.

This chapter traces that arc from clay tablets to large language models, with stops along the way at monasteries, factories, business schools, and the smoldering wreckage of several billion-dollar software implementations. The point is not mere historical tourism. Understanding how we got here explains why so many KM initiatives fail in the same ways, why certain debates refuse to die, and why the current moment — with its AI-driven tooling — is genuinely different from what came before.

The Ancient World: Libraries, Scribes, and the First Knowledge Workers

The earliest knowledge management systems were, quite literally, rooms full of clay tablets. The Library of Ashurbanipal at Nineveh (circa 668–627 BCE) held over 30,000 cuneiform tablets organized by subject — a classification scheme that would make any modern taxonomist nod approvingly. The tablets included medical texts, astronomical observations, legal codes, and literary works. Ashurbanipal was not merely hoarding; he was systematically collecting knowledge from across the Assyrian Empire, employing scribes to copy and catalog it.

The Library of Alexandria, founded around 283 BCE under Ptolemy II, represents perhaps the most famous ancient attempt at comprehensive knowledge management. At its peak, it held an estimated 400,000 to 700,000 scrolls. The library employed a classification system devised by Callimachus, whose Pinakes — a 120-volume catalog organized by genre and author — was essentially the first library catalog. The institution did not merely store scrolls; it attracted scholars who produced new knowledge through commentary, translation, and synthesis.

What is striking about these ancient examples is how modern their challenges were. Alexandria faced version-control problems (multiple copies of the same text with variations), metadata challenges (how to catalog works that spanned multiple genres), and political interference (successive rulers who alternately funded and neglected the institution). The library's eventual decline — a drawn-out affair spanning centuries, not the single dramatic fire of popular imagination — illustrates a lesson that recurs throughout KM history: sustaining a knowledge management initiative requires sustained institutional commitment.

The Medieval Period: Monasteries as Knowledge Engines

After the collapse of the Western Roman Empire, the locus of knowledge management in Europe shifted to monasteries. Between roughly the 6th and 12th centuries, monastic scriptoria were the primary engines of knowledge preservation and transmission. Benedictine monks, following the Rule of Saint Benedict (circa 530 CE) with its emphasis on lectio divina (sacred reading), developed sophisticated practices for copying, annotating, and organizing manuscripts.

The monastic approach to KM had several features worth noting. First, it was deeply communal — knowledge was managed within a community of practice (a term we will encounter again in Chapter 9) bound by shared values and daily routines. Second, it was conservative by design; the primary goal was preservation rather than innovation. Third, it was labor-intensive: a single manuscript could take months to copy by hand.

The founding of universities in the 12th and 13th centuries — Bologna (1088), Paris (circa 1150), Oxford (1167) — began to shift knowledge management from a monastic to a scholastic model. The quaestio method of disputation, formalized by scholars like Peter Abelard and later Thomas Aquinas, was essentially a structured knowledge-creation process: pose a question, marshal arguments for and against, and synthesize a resolution. The parallels to modern structured argumentation and decision documentation are not accidental.

Gutenberg's printing press (circa 1440) was, of course, the great disruption. It did not merely make copying cheaper; it fundamentally altered the economics and sociology of knowledge. When a single scribe could produce perhaps one book per year, knowledge was necessarily managed by institutions. When a press could produce hundreds of copies, knowledge became a commodity — and the problems shifted from preservation to discovery, curation, and quality control. Sound familiar?

The Industrial Revolution: Taylor, Efficiency, and the Separation of Knowing from Doing

The Industrial Revolution introduced a new and profoundly consequential idea about knowledge: that it could — and should — be extracted from workers and embedded in processes. Frederick Winslow Taylor's The Principles of Scientific Management (1911) argued that management's job was to study how work was done, identify the most efficient methods, and codify them as standard procedures that any worker could follow.

Taylor's approach was, in KM terms, a radical codification strategy. The knowledge of experienced workers — their craft knowledge, their tacit understanding of materials and timing — was to be made explicit, written down, and enforced through management oversight. The worker became, in Taylor's vision, an interchangeable component executing documented procedures.

The Taylorist approach achieved genuine productivity gains, and its influence persists in every standard operating procedure manual, every process flowchart, and every corporate training program. But it also introduced a pathology that haunts KM to this day: the assumption that all valuable knowledge can be captured in documents, and that once captured, it will be used. Taylor's time-and-motion studies could document the physical movements of bricklaying, but they could not capture the experienced bricklayer's intuitive sense of mortar consistency, weather effects on drying time, or the subtle cues that signal a structural problem.

The resistance to Taylorism — from organized labor, from the human relations movement inaugurated by the Hawthorne studies (1924–1932), and from later management thinkers — was in part a resistance to this reductionist view of knowledge. Elton Mayo and his colleagues demonstrated that productivity depended on social relationships and worker engagement, not merely on documented procedures. This tension between codified knowledge and tacit, socially embedded knowledge remains the central fault line in KM theory.

The Post-War Era: Drucker and the Knowledge Worker

Peter Drucker coined the term "knowledge worker" in 1959, in Landmarks of Tomorrow, and expanded the concept throughout his subsequent career. Drucker's insight was that the economy was shifting from one based on manual labor to one based on intellectual labor, and that this shift demanded entirely new management approaches.

For Drucker, the knowledge worker was fundamentally different from the industrial worker. You could not supervise knowledge work the way you supervised an assembly line, because the work happened inside people's heads. The knowledge worker owned the means of production — their expertise — and could walk out the door with it. Management's role was not to direct knowledge work but to create conditions in which it could flourish.

Drucker's framework was prescient but abstract. He identified the problem — how do you manage people whose primary output is knowledge? — without providing detailed solutions. That gap would be filled, for better and worse, by the KM movement of the 1990s.

Meanwhile, other intellectual currents were converging. Herbert Simon's work on bounded rationality and organizational decision-making (from the late 1940s onward) highlighted the cognitive limitations that shaped how knowledge was actually used in organizations. James March's exploration of organizational learning (particularly his work with Johan Olsen and Richard Cyert in the 1960s and 1970s) examined how organizations developed and retained knowledge over time — and how they forgot.

In Japan, a different tradition was developing. The quality management movement, drawing on the work of W. Edwards Deming and Joseph Juran, emphasized continuous improvement (kaizen) driven by frontline workers' knowledge. Toyota's production system, developed from the 1950s onward, was in many respects a sophisticated knowledge management system: it captured lessons learned, embedded best practices in standard work, and created mechanisms for continuous knowledge creation and refinement.

The 1990s: The KM Boom

The 1990s were the decade when knowledge management acquired its name, its consultants, its conferences, and its software vendors. Several converging forces drove this explosion.

First, the intellectual groundwork had been laid. Ikujiro Nonaka and Hirotaka Takeuchi published The Knowledge-Creating Company in 1995, introducing the SECI model (Socialization, Externalization, Combination, Internalization) that provided a theoretical framework for how organizations create and transfer knowledge. Their emphasis on the interplay between tacit and explicit knowledge — building on Michael Polanyi's philosophical work — gave KM practitioners a vocabulary for talking about what they were trying to do.

Thomas Davenport and Laurence Prusak published Working Knowledge in 1998, offering a more pragmatic, business-oriented perspective. They defined knowledge as "a fluid mix of framed experience, values, contextual information, and expert insight that provides a framework for evaluating and incorporating new experiences and information." Their taxonomy of KM projects — knowledge repositories, knowledge access and transfer, and knowledge environment — gave organizations a menu of concrete initiatives.

Karl-Erik Sveiby, working in Sweden and Australia, developed the concept of intellectual capital and methods for measuring it, arguing that an organization's most valuable assets were intangible: employee competence, internal structure (processes, systems, culture), and external structure (relationships with customers and suppliers). His The New Organizational Wealth (1997) and related work helped legitimize KM as a strategic concern rather than a mere IT project.

Second, technology made large-scale KM systems feasible. Lotus Notes (released in 1989, widely adopted in the mid-1990s) provided a platform for discussion databases, document sharing, and workflow management. Intranets, enabled by web technologies, offered a cheaper and more accessible alternative. Enterprise search engines, content management systems, and early knowledge bases proliferated. The technology was imperfect — early enterprise search was notoriously bad, and content management systems often became digital filing cabinets where knowledge went to die — but it was good enough to inspire ambitious initiatives.

Third, the consulting industry recognized a market opportunity. McKinsey, Booz Allen Hamilton, Ernst & Young, and others launched KM practices, both advising clients and implementing KM within their own firms. The consulting firms had a genuine need for KM — their product was knowledge, and they needed to prevent each engagement from starting from scratch — but they also had a commercial interest in selling KM services and software. By the late 1990s, the KM market was estimated at several billion dollars annually.

The results were mixed. Some initiatives delivered genuine value. Buckman Laboratories, a specialty chemicals company, became a celebrated case study for its K'Netix system, which connected its global workforce and demonstrably improved response time to customer inquiries. The World Bank, under James Wolfensohn's leadership, repositioned itself as a "knowledge bank" and developed extensive knowledge-sharing systems for development practitioners. BP (then British Petroleum) implemented peer assists and after-action reviews drawn from military practice, creating a culture of learning from experience.

But many KM initiatives failed, often expensively. Common failure modes included: building elaborate systems that nobody used ("if you build it, they will not necessarily come"); focusing on technology while neglecting the cultural and organizational changes required; trying to capture tacit knowledge in databases without understanding why that is fundamentally difficult; and failing to align KM with actual business needs.

The Dot-Com Bust and KM Disillusionment (2000–2005)

The bursting of the dot-com bubble in 2000–2001 did not kill knowledge management, but it wounded it severely. KM had been closely associated with the technology hype of the late 1990s, and when the hype collapsed, KM suffered guilt by association. Corporate budgets tightened, and KM programs — which had always struggled to demonstrate clear ROI — were among the first to be cut.

The disillusionment was not entirely unfair. Too many KM initiatives had been technology-driven solutions in search of problems. The pattern was depressingly consistent: a company would purchase an expensive KM platform, populate it with content during an initial burst of enthusiasm, and then watch usage decline as employees returned to their established workflows. The content grew stale, search became useless, and the platform became a digital ghost town.

The academic critique sharpened during this period as well. Researchers pointed out that much of KM practice was based on a naive "container" model of knowledge — the assumption that knowledge was a thing that could be extracted from heads, put into databases, and retrieved by others. This model, critics argued, ignored the situated, social, and practice-based nature of knowledge. You cannot capture a surgeon's expertise in a document any more than you can learn to ride a bicycle by reading a manual.

By the mid-2000s, it was common to hear pronouncements that KM was dead. These were premature. What had died was a particular, technology-centric, enterprise-software-driven vision of KM. The underlying problems — how do organizations learn, how do they retain expertise, how do they avoid repeating mistakes — had not gone away.

Web 2.0: Wikis, Blogs, and Social Knowledge (2005–2012)

The emergence of Web 2.0 technologies — wikis, blogs, social bookmarking, tagging, RSS feeds, and social networking platforms — offered a different model of knowledge management, one that was bottom-up rather than top-down, emergent rather than planned, and social rather than documentary.

Ward Cunningham had created the first wiki in 1995, but wikis entered the KM mainstream in the mid-2000s, driven in part by the spectacular success of Wikipedia (launched 2001). Wikipedia demonstrated that large-scale, high-quality knowledge bases could be built through voluntary collaboration without centralized editorial control — a result that would have seemed absurd to traditional KM practitioners. It also demonstrated the power of "many eyes" for quality control, the importance of transparent revision history, and the challenges of governing a knowledge commons.

Corporate wikis — using platforms like Confluence (released 2004), MediaWiki, and later Notion — became a popular KM tool. They addressed some of the failures of earlier KM systems by lowering the barrier to contribution, making content editable by anyone, and providing version control. But they introduced new problems: content sprawl, inconsistent quality, orphaned pages, and the "wiki gardening" burden of maintaining and organizing an ever-growing knowledge base.

Enterprise social networks — Yammer (2008), Jive, Chatter — attempted to apply the logic of Facebook and Twitter to organizational knowledge sharing. The idea was that knowledge sharing would happen naturally if you gave people social tools. Sometimes it did. Often it did not. The "build it and they will come" fallacy proved as persistent in the Web 2.0 era as in the enterprise KM era.

The concept of folksonomies — user-generated tagging systems, as opposed to top-down taxonomies — emerged from social bookmarking services like Delicious (2003) and Flickr (2004). Thomas Vander Wal coined the term "folksonomy" in 2004. Folksonomies offered flexibility and low overhead but suffered from inconsistency, ambiguity, and lack of hierarchical structure. The tension between folksonomy and taxonomy (explored in Chapter 8) remains unresolved.

Andrew McAfee's concept of "Enterprise 2.0" (2006) provided an intellectual framework for this wave, arguing that emergent social software platforms could transform organizational knowledge practices. The reality was more modest than the vision, but the Web 2.0 era left a lasting legacy: it shifted KM thinking toward participation, collaboration, and network effects, and away from the database-centric, repository-focused approach of the 1990s.

The Rise of Personal Knowledge Management (2010–2020)

While organizational KM was undergoing its Web 2.0 transformation, a parallel movement was developing around personal knowledge management (PKM). The concept was not new — Drucker had written about the individual knowledge worker's responsibility for self-management — but it gained new momentum with new tools.

The PKM movement drew on several intellectual sources. Vannevar Bush's "memex" concept (1945) — a hypothetical device for storing and linking personal knowledge — was a recurring reference point. So was the Zettelkasten method of the German sociologist Niklas Luhmann, who built a remarkable personal knowledge system of approximately 90,000 index cards over his career, producing more than 70 books and 400 articles. Luhmann's system, with its emphasis on atomic notes, cross-referencing, and emergent structure, became a touchstone for the PKM community.

Evernote (2008) was an early mainstream PKM tool, offering cloud-based note-taking with search and tagging. It was followed by a proliferation of tools with varying philosophies: OneNote (Microsoft's offering), Bear, Notion (2016, which blurred the line between PKM and team knowledge management), and eventually the tools that defined the current generation — Roam Research (2020), Obsidian (2020), and Logseq (2020).

The "tools for thought" movement, as it came to be called, represented a genuine intellectual ferment. Practitioners debated linking strategies (bidirectional links vs. hierarchical folders vs. tags), note granularity (atomic notes vs. long-form documents), and the relationship between note-taking and thinking. Sönke Ahrens's How to Take Smart Notes (2017), which popularized the Zettelkasten method for an English-speaking audience, became something of a bible for the movement.

The AI-Driven Renaissance (2020–Present)

The release of GPT-3 by OpenAI in June 2020, followed by ChatGPT in November 2022 and a rapid succession of increasingly capable models, has triggered what can fairly be called a renaissance in knowledge management — though, as with the Renaissance proper, it is accompanied by considerable upheaval and uncertainty.

AI affects KM at virtually every level. At the most basic level, large language models can summarize documents, answer questions about knowledge bases, and generate first drafts of documentation — tasks that consumed enormous human effort in traditional KM programs. More profoundly, AI enables new approaches to knowledge discovery (finding connections across large corpora that no human would notice), knowledge retrieval (natural-language querying of unstructured knowledge bases), and knowledge synthesis (combining information from multiple sources into coherent summaries).

Retrieval-Augmented Generation (RAG), which combines large language models with information retrieval systems, has become a standard architecture for AI-powered knowledge management. RAG systems can query a knowledge base, retrieve relevant documents, and generate answers grounded in the organization's actual knowledge — mitigating, though not eliminating, the hallucination problem that plagues standalone language models.
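To make the pattern concrete, the following is a minimal sketch of the RAG loop in Python. The toy knowledge base, the term-overlap retriever, and the call_llm placeholder are illustrative assumptions rather than any particular product's API; a production system would use an embedding-based index and a hosted or local language model.

```python
# A minimal sketch of the Retrieval-Augmented Generation (RAG) pattern.
# The retrieval step is a toy term-overlap scorer; real systems use embedding
# models and vector indexes. call_llm is a hypothetical stand-in for whatever
# language-model API an organization actually uses.

from collections import Counter
import math

KNOWLEDGE_BASE = {
    "vacation-policy": "Employees accrue 1.5 vacation days per month worked.",
    "expense-policy": "Expenses over 500 dollars require written manager approval.",
    "onboarding": "New hires complete security training during their first week.",
}

def _vector(text: str) -> Counter:
    return Counter(text.lower().split())

def _cosine(a: Counter, b: Counter) -> float:
    overlap = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return overlap / norm if norm else 0.0

def retrieve(question: str, k: int = 2) -> list[str]:
    """Return the k documents most similar to the question."""
    q = _vector(question)
    scored = sorted(KNOWLEDGE_BASE.items(), key=lambda kv: _cosine(q, _vector(kv[1])), reverse=True)
    return [text for _, text in scored[:k]]

def call_llm(prompt: str) -> str:
    # Placeholder: in practice this calls a hosted or local language model.
    return f"[model answer based on prompt of {len(prompt)} characters]"

def answer(question: str) -> str:
    context = "\n".join(retrieve(question))
    prompt = (
        "Answer the question using only the context below.\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )
    return call_llm(prompt)

print(answer("How many vacation days do employees get?"))
```

The essential design choice is that the model is asked to answer from the retrieved context rather than from whatever it memorized during training, which is what keeps the answer tied to the organization's own knowledge.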

Vector databases and embedding models have introduced new approaches to knowledge organization that operate alongside (and sometimes replace) traditional taxonomies and keyword search. By representing documents as points in high-dimensional space, these systems can find semantically similar content even when it uses different terminology — a capability that keyword search could not match.
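A small sketch shows the idea, again in Python, with hand-made three-dimensional vectors standing in for the output of a real embedding model; actual systems use vectors with hundreds or thousands of dimensions and an approximate nearest-neighbor index rather than a linear scan.

```python
# A minimal sketch of embedding-based semantic search using numpy.
# The three-dimensional vectors below are illustrative stand-ins; real systems
# obtain high-dimensional vectors from an embedding model and store them in a
# vector index for approximate nearest-neighbor search.

import numpy as np

documents = {
    "How to request time off":    np.array([0.90, 0.10, 0.20]),
    "Reclaiming travel expenses": np.array([0.10, 0.80, 0.30]),
    "Booking annual leave":       np.array([0.85, 0.15, 0.25]),
}

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def nearest(query_vector: np.ndarray, k: int = 2) -> list[tuple[str, float]]:
    """Rank documents by cosine similarity to the query vector."""
    scored = [(title, cosine_similarity(query_vector, vec)) for title, vec in documents.items()]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)[:k]

# A query about "vacation" would embed close to both time-off documents even
# though it shares no keywords with "Booking annual leave".
query = np.array([0.88, 0.12, 0.22])
for title, score in nearest(query):
    print(f"{score:.3f}  {title}")
```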

But the AI-driven renaissance also introduces new challenges. The ease of generating text threatens to exacerbate the content-sprawl problem that has plagued KM since the wiki era. If producing documentation becomes nearly free, the bottleneck shifts entirely to curation, quality control, and maintenance. AI-generated summaries and answers may be confident but wrong, introducing a new category of knowledge management risk. And the question of how to maintain human expertise in domains where AI can provide quick answers is genuinely unresolved.

Recurring Themes

Looking across this history, several themes recur with almost monotonous regularity.

Technology is necessary but not sufficient. Every era has produced tools that make knowledge management easier — from the printing press to Lotus Notes to large language models. Every era has also produced examples of those tools being adopted with great enthusiasm and little result. The pattern is so consistent that it qualifies as a law: any KM technology will be oversold by its vendors, over-purchased by its buyers, and under-used by its intended users.

The tacit-explicit tension never goes away. From Taylor's time studies to the SECI model to modern AI-assisted knowledge capture, every generation rediscovers that the most valuable knowledge is the hardest to articulate, and the knowledge that is easiest to document is often the least valuable. This is not a problem to be solved but a condition to be managed.

Culture eats strategy for breakfast. This phrase, often attributed to Drucker (possibly apocryphally), describes a finding that every KM practitioner eventually confronts: no system, no matter how well designed, will succeed if the organizational culture does not support knowledge sharing. Incentive structures, trust, leadership commitment, and social norms matter more than technology choices.

KM oscillates between centralization and decentralization. The 1990s favored centralized repositories and controlled vocabularies. The Web 2.0 era favored wikis, tags, and emergent structure. The current era favors AI-mediated access to distributed knowledge. Each approach has genuine strengths and genuine weaknesses, and the optimal balance depends on context.

The hardest part is maintenance. Creating a knowledge base is relatively easy. Keeping it accurate, current, and useful over time is extraordinarily hard. Every KM system in history has eventually confronted the problem of knowledge decay — the slow accumulation of outdated, inaccurate, or irrelevant content that gradually erodes user trust and system utility.

These themes will recur throughout the rest of this book, in contexts ranging from personal note-taking to enterprise AI systems. Knowing that they are perennial — that they afflicted Alexandrian librarians as surely as they afflict modern knowledge engineers — is not quite the same as knowing how to address them. But it is a start.