CRO Knowledge Management: Building Institutional Memory Against the Knowledge Half-Life
TL;DR: Every experimentation program has a Knowledge Half-Life — a measure of how quickly insights from past tests become inaccessible. Most teams operate at a 6-month half-life without knowing it, which means roughly 75% of accumulated learning is effectively gone within a year. Here's how to slow the decay.
Key Takeaways
- Institutional knowledge in experimentation programs decays on a measurable half-life, typically 4-8 months without deliberate preservation practices
- The Knowledge Half-Life framework quantifies the rate at which past insights become inaccessible — and the compounding cost of operating at short half-lives
- Turnover accelerates knowledge decay non-linearly; losing one experienced practitioner often costs 3-6 months of accumulated context, not just the individual's tests
- Explicit knowledge (hypotheses, results, metrics) is easier to preserve than tacit knowledge (why we chose this metric, what surprised us, what the second-order effect was)
- The teams with the longest half-lives invest in three practices: structured capture, forced retrieval in new-test design, and periodic meta-analysis
Knowledge Doesn't Persist — It Decays
Every experiment generates insight. The question is how long that insight remains accessible to inform future decisions.
In most experimentation programs, the answer is: not long. The hypothesis, the result, and the one-line summary survive as long as the spreadsheet or Notion page exists. But the context — why this variant won, what surprised us, what the second-order effect was on retention — decays within weeks. And the tacit knowledge (the judgment that shapes what hypotheses get proposed next) decays even faster when the person who held it leaves.
This is the endowment effect in reverse. Teams undervalue the institutional memory they already have because it feels automatic — until it's gone. Losing an experienced practitioner doesn't just cost their future output; it costs the aggregated judgment built from hundreds of past tests they can no longer retrieve for the team.
In business-economics terms, knowledge is a depreciating asset. Gary Becker's human capital theory established that skills and context decay without active maintenance. Experimentation knowledge is a specific instance: without structured capture and retrieval, the asset depreciates on a predictable timeline.
"A company's analytics lives in one person's head — the definitions, the naming conventions, the setup logic. When they leave, all of it leaves with them." — Atticus Li
The Knowledge Half-Life Framework
Define Knowledge Half-Life (KHL) as the time elapsed before 50% of a program's past insights are no longer accessible or applied.
Accessible means: a current team member can find the insight when designing a related experiment. Applied means: the insight actually informed a decision in the last quarter.
KHL = Time at which (insights still applied / total insights generated) = 0.5
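To see what a given KHL implies in practice, one convenient (and purely illustrative) assumption is that an insight's chance of still being applied halves every KHL months, i.e. decays exponentially. A minimal sketch under that assumption:

```python
def still_applied_fraction(age_months: float, khl_months: float) -> float:
    """Estimated fraction of insights of a given age that are still applied,
    assuming exponential decay with half-life khl_months.
    An illustrative model, not measured data."""
    return 0.5 ** (age_months / khl_months)

# Example: at a 6-month KHL, an insight generated a year ago has roughly
# a 25% chance of still informing decisions today.
print(still_applied_fraction(age_months=12, khl_months=6))  # 0.25
```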
Interpretation thresholds:
- KHL of 18+ months — Strong preservation. Most insights remain accessible and regularly applied. Typical of teams with mature archives, forced retrieval in design, and low turnover.
- KHL of 9-18 months — Solid discipline. Most teams with intentional knowledge management land here.
- KHL of 4-9 months — Typical unprotected state. Insights from two years ago are mostly gone. Insights from six months ago are partial.
- KHL below 4 months — Crisis state. The program is effectively building on short-term memory only. Usually coincides with high turnover or minimal archive discipline.
These thresholds matter because compounding is brutal at short half-lives. A program at 6-month KHL operating for 3 years has perhaps 12% of its three-year insight base still applied. A program at 18-month KHL operating for the same 3 years has roughly 40% still applied — more than 3x the retained learning.
What Causes Knowledge Decay
Three mechanisms dominate:
Turnover. When a practitioner leaves, they take tacit context with them. The explicit record (what test, what result) often survives. The "why we chose this metric," "what the qualitative signal was before we ran this," and "what we tried that didn't make it into the final hypothesis" usually don't. Losing one senior practitioner typically costs the equivalent of 3-6 months of accumulated program context.
Format decay. Results documented in chat threads become unsearchable within weeks. Results documented in ephemeral docs become unfindable within months. Only structured, queryable archives survive beyond a year in useful form.
Retrieval atrophy. Knowledge that isn't retrieved doesn't stay accessible. If no one has looked at a past test in six months, finding it again takes longer than it would have at the time — and the gap widens the longer the test goes unretrieved. Forced retrieval during new-test design is the single strongest counter to this.
How to Measure KHL
The direct measurement is a pain, but a proxy audit works:
Step 1 — Pull the last 10 new experiment hypotheses written.
Step 2 — For each, find the oldest past experiment referenced in the hypothesis or design.
Step 3 — Calculate the age of that referenced experiment for each new test.
Step 4 — The median age is a rough proxy for KHL.
Teams doing this audit typically find median reference ages of 2-6 months, implying a KHL in that range. The goal is to extend this median over time through better capture and retrieval.
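A minimal sketch of this audit, assuming each new hypothesis can be exported alongside the launch date of the oldest past experiment it references (the data below is hypothetical):

```python
from datetime import date
from statistics import median

# Hypothetical export: (date the new hypothesis was written,
#                       launch date of the oldest past experiment it references)
new_hypotheses = [
    (date(2024, 5, 2), date(2024, 1, 15)),
    (date(2024, 5, 9), date(2023, 11, 3)),
    (date(2024, 5, 20), date(2024, 3, 28)),
]

def reference_age_months(written: date, referenced: date) -> float:
    """Age of the referenced experiment, in months, when the new hypothesis was written."""
    return (written - referenced).days / 30.44  # average days per month

ages = [reference_age_months(written, referenced) for written, referenced in new_hypotheses]
print(f"Median reference age: {median(ages):.1f} months (rough KHL proxy)")
```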
Preservation Practices That Extend KHL
Structured capture at test launch, not at end. Documenting hypothesis, expected outcome, and reasoning at launch (not after results are in) captures the context before hindsight bias reshapes it. This is the single highest-leverage discipline.
Forced retrieval in new-test design. The intake template for new tests should include a required field: "What past experiments did you review before writing this hypothesis?" This converts retrieval from optional to default, and it surfaces relevant past tests automatically.
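One way to make the field non-optional is to validate it at intake rather than rely on convention. A minimal sketch, assuming a simple in-house intake record (all names here are hypothetical):

```python
from dataclasses import dataclass, field

@dataclass
class ExperimentIntake:
    hypothesis: str
    expected_outcome: str
    # IDs of past experiments reviewed before writing this hypothesis.
    # Forced retrieval: an empty list is rejected at intake.
    reviewed_past_tests: list[str] = field(default_factory=list)

    def __post_init__(self) -> None:
        if not self.reviewed_past_tests:
            raise ValueError(
                "List at least one past experiment reviewed before writing this hypothesis."
            )

# Passes validation because it references past tests (IDs are made up).
intake = ExperimentIntake(
    hypothesis="A shorter checkout form lifts completion",
    expected_outcome="+2% checkout completion",
    reviewed_past_tests=["EXP-104", "EXP-131"],
)
```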
Tagging with a controlled vocabulary. Free-text tags decay into chaos within a year. A fixed vocabulary (feature area, funnel stage, hypothesis type, outcome) makes retrieval deterministic.
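A controlled vocabulary can be as small as a few enumerations that both the intake form and the archive validate against. The values below are illustrative, not a recommended taxonomy:

```python
from enum import Enum

class FunnelStage(Enum):
    ACQUISITION = "acquisition"
    ACTIVATION = "activation"
    CHECKOUT = "checkout"
    RETENTION = "retention"

class Outcome(Enum):
    WIN = "win"
    LOSS = "loss"
    FLAT = "flat"
    INCONCLUSIVE = "inconclusive"

# Because tags must match an enum member, a query like
# "all CHECKOUT tests with outcome LOSS" returns a deterministic set.
```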
Documented decision traces. When the team ships variant B over variant A, capture why. This is the tacit knowledge that usually walks out the door with turnover.
Quarterly meta-analysis. Looking across past tests reveals patterns that no individual test can show. This practice both creates new insight and strengthens the retrieval loop (because active review surfaces tests that would otherwise decay).
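A quarterly meta-analysis can start as a simple aggregation over the archive, for example win rate by hypothesis type. A sketch assuming the archive can be exported as tagged records (the tags and data are hypothetical):

```python
from collections import defaultdict

# Hypothetical archive export: (hypothesis_type, outcome) for each past test.
archive = [
    ("urgency", "win"), ("urgency", "loss"), ("social_proof", "win"),
    ("social_proof", "win"), ("form_length", "flat"),
]

wins: dict[str, int] = defaultdict(int)
totals: dict[str, int] = defaultdict(int)
for hypothesis_type, outcome in archive:
    totals[hypothesis_type] += 1
    if outcome == "win":
        wins[hypothesis_type] += 1

for hypothesis_type in totals:
    print(f"{hypothesis_type}: {wins[hypothesis_type]}/{totals[hypothesis_type]} wins")
```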
Handoff protocols for practitioner departures. When someone leaves, run structured 1-on-1 sessions to extract tacit knowledge about their past work. Usually 4-8 hours of targeted conversation captures most of what would otherwise be lost.
Common Mistakes in Knowledge Management
Confusing storage with knowledge management. A shared Google Drive full of test reports is not knowledge management. It's storage. Knowledge management is about retrieval and application, not just persistence.
Documenting wins only. Failures contain the highest learning density — they tell you where your hypotheses are systematically wrong. Teams that document only wins build archives that teach them nothing about their blind spots.
Treating it as a one-time project. Knowledge management is maintenance, not migration. Teams that build an archive and then stop updating it see it decay within two quarters.
Over-investing in format, under-investing in retrieval. A beautiful template that nobody uses during new-test design is worthless. The retrieval loop matters more than the capture format.
Advanced: Organizational Design Implications
Long-KHL teams tend to share structural patterns worth noting:
They have a dedicated owner of the archive. This isn't a full-time role; it's usually a senior practitioner with 10-20% of their time dedicated to archive hygiene, retrieval support, and quarterly meta-analysis.
They treat the archive as a product. It has users (the team), a user experience (search, retrieval, intake), and a backlog of improvements. Treating it casually produces casual results.
They design against turnover. Critical context is documented in the archive, not held in individual heads. This doesn't eliminate turnover cost, but it reduces the magnitude of each departure.
They run meta-analysis publicly. Quarterly meta-analysis sessions where patterns across past tests are discussed build collective memory and identify retrieval gaps.
Frequently Asked Questions
How much time should we spend on knowledge management?
For a team running 30+ tests per quarter, roughly 10-15% of practitioner time on capture, retrieval discipline, and archive maintenance. The investment compounds: teams that underinvest here typically have a KHL below 6 months.
What's the single most important practice to adopt?
Forced retrieval during new-test design. Requiring the intake template to reference past tests converts retrieval from optional to default. This shifts KHL more than any other single practice.
How do we handle tacit knowledge from departing practitioners?
Structured exit sessions: 4-8 hours focused on extracting "what would you want the team to know that isn't already documented?" Capture the answers in the archive under the decision-trace and surprising-findings categories.
What if we have no archive at all?
Start with the last 20 tests. Document them in a simple template. Don't try to backfill everything — focus on making the next 50 tests well-documented. The archive will compound from there.
Can AI tools help with retrieval?
Yes, increasingly. Semantic search over archive text can surface relevant past tests even when tags don't match exactly. This is an emerging area worth investing in once the basic archive is solid, though it's not a substitute for structured capture.
Methodology note: Knowledge Half-Life framework and threshold patterns reflect experience across experimentation programs with varying retention practices. Specific figures are presented as ranges. Human capital depreciation theory draws on Becker's foundational work.
---
Past experiments compound when they're genuinely searchable. Browse the GrowthLayer test library — real experiments organized by funnel stage, hypothesis type, and behavioral pattern.
Related reading: