Jan 7, 2026

Favorite Problems I’m Thinking About

Photo by Brandon Lopez on Unsplash

Last updated: January 2026

Richard Feynman once said that if you really want to do good work, you should keep a small set of favorite problems in your head at all times.

The idea wasn’t to solve them quickly. It was the opposite.

You carry a handful of problems with you for years. You filter new information through them. Papers you read, tools you try, products you build, conversations you overhear: all of it quietly snaps into place against those problems. Over time, insight compounds.

Feynman put it bluntly:

“You should keep a dozen of your favorite problems constantly present in your mind… Every time you hear or read a new trick or new result, you see if it helps.”

I don’t keep an exact list of twelve, and I don’t pretend these are my problems in any proprietary sense. They’re simply the questions I keep circling back to across my writing, my work in experimentation, and the SaaS products I build, ship, and scale.

What follows isn’t a roadmap. It’s a working set of intellectual tensions.

Lean Experimentation in Imperfect Environments

Most experimentation theory assumes conditions that don’t exist.

  • Large sample sizes

  • Stable traffic

  • Clean instrumentation

  • Statistical literacy across stakeholders

  • Time to wait

Startups don’t have these. Early-stage teams don’t have these. Even large organizations rarely have these in practice, despite what the dashboards say.

So the problem I keep coming back to is this:

How do you build experimentation systems that work when the data is messy, the samples are small, and the decisions still need to be made?

I’m especially interested in experimentation that is:

  • Directional, not academically pure

  • Designed for reversible decisions

  • Fast enough to support real growth constraints

  • Honest about uncertainty instead of hiding it behind p-values
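
As a sketch of what “directional, honest about uncertainty” can mean in practice, here is a minimal small-sample comparison in Python. It reports the probability that variant B beats variant A rather than a p-value. The Beta(1, 1) prior and the example counts are my own assumptions for illustration, not a prescription.

    import numpy as np

    rng = np.random.default_rng(42)

    def prob_b_beats_a(conv_a, n_a, conv_b, n_b, draws=100_000):
        # Beta(1, 1) prior + binomial data -> Beta posterior per variant.
        post_a = rng.beta(conv_a + 1, n_a - conv_a + 1, draws)
        post_b = rng.beta(conv_b + 1, n_b - conv_b + 1, draws)
        # Directional answer: how often does B's plausible rate exceed A's?
        return (post_b > post_a).mean()

    # Tiny, noisy sample: 9/120 vs. 15/118 conversions.
    print(f"P(B beats A): {prob_b_beats_a(9, 120, 15, 118):.0%}")

An answer like “B is probably better, with meaningful residual risk” supports a reversible decision in a way that “not significant” never will.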

Many teams try to solve this by buying or building massive customer data platforms (CDPs) and “unified” experimentation platforms. In theory, these systems promise leverage. In practice, poorly implemented data platforms are responsible for some of the most expensive enterprise failures I’ve seen.

The irony is that teams often need less infrastructure, not more, if the system is designed correctly.

Self-Serve Experimentation Architecture

I’ve been teaching myself how to build SaaS products largely because experimentation tools themselves have a problem:

They assume too much expertise from the user.

Most experimentation platforms still require users to understand:

  • Statistics

  • Test design tradeoffs

  • Power calculations

  • Metric selection

  • Instrumentation caveats

That’s fine for statisticians. It’s not fine for founders, PMs, or early growth teams trying to move quickly.

So a recurring question for me is:

What does truly self-serve experimentation look like?

Not “self-serve” in the marketing sense, but in the operational sense:

  • The system guides decisions instead of asking for them

  • Defaults are opinionated and defensible

  • Statistical methods are embedded, not exposed

  • Users get useful answers, not just “significant / not significant”

This becomes even more important in low-traffic environments, where waiting for perfect certainty is often more costly than making a reversible mistake.
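
To make that concrete, here is one way “the system guides decisions instead of asking for them” could look in code. Everything here is a hypothetical sketch: the decide() function, the Verdict shape, and the confidence thresholds are opinionated defaults I chose for illustration, not any existing product’s API.

    from dataclasses import dataclass
    import numpy as np

    rng = np.random.default_rng(7)

    @dataclass
    class Verdict:
        recommendation: str  # plain-language guidance for the user
        p_b_wins: float      # kept available, never the headline

    def decide(conv_a, n_a, conv_b, n_b, reversible=True):
        # Embedded statistics: Beta-Binomial posteriors the user never sees.
        post_a = rng.beta(conv_a + 1, n_a - conv_a + 1, 100_000)
        post_b = rng.beta(conv_b + 1, n_b - conv_b + 1, 100_000)
        p = (post_b > post_a).mean()
        # Opinionated default: reversible changes ship at 90% confidence,
        # one-way doors demand 99%.
        bar = 0.90 if reversible else 0.99
        if p >= bar:
            return Verdict("Ship B. Roll it back if the lift doesn't hold.", p)
        if p <= 1 - bar:
            return Verdict("Keep A. B is probably worse.", p)
        return Verdict("No clear winner yet. Keep collecting data.", p)

    print(decide(48, 900, 64, 910).recommendation)

The design choice that matters is the reversible flag: the same evidence can justify shipping a feature flag today and withholding a pricing change for another week.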

Communicating Statistical Uncertainty Without Teaching Statistics

There’s a growing body of research suggesting that traditional null-hypothesis testing is a poor fit for digital experimentation, especially when speed of learning matters more than formal inference.

Yet most stakeholders were taught statistics once, badly, years ago.

So another problem I care about is:

How do you communicate statistical results in a way that supports good decisions, without forcing everyone to become a statistician?

This includes:

  • When Bayesian approaches help and when they confuse

  • How to express uncertainty honestly without freezing action

  • How to avoid false confidence from “clean-looking” numbers

  • How to design outputs that match how humans actually reason under pressure

For fast-moving teams, learning speed is a competitive advantage. Time lost to overly rigid testing policies can be existential.
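
One framing I keep testing is expected loss: instead of “is this significant?”, the output answers “if we ship B and it turns out worse, how much do we expect to give up?”. The sketch below is an assumption-laden illustration (Beta-Binomial posteriors again, with made-up counts), not a finished method.

    import numpy as np

    rng = np.random.default_rng(3)

    def expected_loss_if_ship_b(conv_a, n_a, conv_b, n_b, draws=100_000):
        post_a = rng.beta(conv_a + 1, n_a - conv_a + 1, draws)
        post_b = rng.beta(conv_b + 1, n_b - conv_b + 1, draws)
        # Average shortfall in the scenarios where B is actually worse.
        return np.maximum(post_a - post_b, 0).mean()

    loss = expected_loss_if_ship_b(48, 900, 64, 910)
    print(f"If we ship B and we're wrong, we expect to give up "
          f"{loss:.2%} absolute conversion.")

A stakeholder can weigh that number against the cost of waiting, which is much closer to how people actually reason under pressure than a significance verdict.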

Decision-Making With Imperfect Data

This thread runs through almost everything I write.

Across Experimentation Career, Growth Strategy Lab, and the books and papers I keep returning to, the core question is always the same:

How do you make better decisions when the data is incomplete, noisy, and time-bounded?

That applies to:

  • Product decisions

  • Growth bets

  • Career moves

  • Personal life choices

Stress, incentives, and uncertainty distort judgment. Systems either account for that—or they quietly amplify bad decisions.

Experimentation, at its best, is a way of thinking. A way to externalize uncertainty, reduce ego, and learn faster than intuition alone allows.

When to Stop Exploring

One book that comes up again and again in my thinking is Algorithms to Live By, by Brian Christian and Tom Griffiths.

It asks a deceptively simple question:

At what point does continued experimentation become costly?

Exploration has a price. So does exploitation.

In business, this shows up as:

  • Over-testing obvious wins

  • Endless optimization without shipping

  • Mistaking learning for progress

In life, it shows up as:

  • Never committing

  • Always searching

  • Confusing optionality with freedom

So one of my quieter favorite problems is understanding when to stop experimenting—and how to recognize when it’s time to enjoy the returns of what you’ve already learned.
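
The book’s most quotable answer is the 37% rule from the secretary problem: spend roughly the first 37% of your search only looking, then commit to the first option that beats everything you’ve seen. A minimal simulation, with made-up uniform “option scores” standing in for whatever you’re choosing between:

    import random

    random.seed(1)

    def stop_at_37_percent(options):
        # Look without committing for the first ~n/e options...
        cutoff = round(len(options) * 0.37)
        best_seen = max(options[:cutoff], default=float("-inf"))
        # ...then take the first option that beats everything seen so far.
        for candidate in options[cutoff:]:
            if candidate > best_seen:
                return candidate
        return options[-1]  # searched to the end: settle for the last one

    trials, wins = 10_000, 0
    for _ in range(trials):
        scores = [random.random() for _ in range(100)]
        wins += stop_at_37_percent(scores) == max(scores)
    print(f"Committed to the single best option in {wins / trials:.0%} of trials.")

The point isn’t the exact constant. It’s that optimal stopping exists at all: past a certain point, more searching is expected to make the outcome worse, in business and in life.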

Why I Write About This

None of these problems are fully solvable. That’s the point.

They’re the kinds of problems that stay interesting as tools change, markets shift, and your own incentives evolve. Writing is how I test my thinking. Building products is how I pressure-test it. Teaching myself to ship software is how I stay honest about constraints.

If any of these questions overlap with what you’re wrestling with, professionally or personally, I’m always open to conversations that sharpen the problem rather than rush the answer.
