Ask users what they want and they will tell you with confidence. Build what they asked for and watch them not use it. This pattern repeats so frequently in product development that it has become a dark joke among product managers: users are the worst source of information about what users want.

The explanation for this paradox lies not in user dishonesty but in a well-documented cognitive bias: the Dunning-Kruger Effect. Originally described in the context of competence assessment, this effect has profound implications for user research methodology, survey design, and the interpretation of qualitative data. Understanding it changes how you gather, weight, and act on user feedback.

The Competence-Confidence Gap

David Dunning and Justin Kruger's original 1999 paper described a dual burden: people who lack competence in a domain also lack the metacognitive ability to recognize their incompetence. The result is a systematic miscalibration between actual ability and perceived ability. Those with the least competence overestimate their ability the most, while experts slightly underestimate theirs.

In user research, this manifests in a specific and costly way. When you ask users about their behavior, preferences, or decision processes, you are not asking them to report facts. You are asking them to perform self-assessment, and self-assessment is precisely the domain where the Dunning-Kruger Effect operates. Users lack the metacognitive tools to accurately observe and report their own cognitive processes, but they lack the awareness to know this, so they report with confidence.

The confidence is the trap. When a user says "I would definitely pay for feature X" with clear conviction, it feels like reliable data. But the conviction reflects the user's inability to accurately predict their future behavior, not the accuracy of the prediction itself. Studies consistently show that stated purchase intent correlates with actual purchase behavior at rates between 20 and 40 percent for most product categories.

Stated Preferences Versus Revealed Preferences

Economics distinguishes between stated preferences, what people say they want, and revealed preferences, what people actually choose when spending real resources such as time, money, or attention. The gap between the two is the Dunning-Kruger Effect applied to self-knowledge.

Users overstate their willingness to pay. They overstate their willingness to change behavior. They overstate their engagement with features they currently do not use. They understate their attachment to familiar patterns, their sensitivity to friction, and their tendency to default to the easiest option.

This divergence is not random. It follows predictable patterns that can be accounted for in research design:

The aspiration bias. Users report what they aspire to do rather than what they actually do. They say they want advanced analytics because they aspire to be data-driven, but their actual behavior reveals that they rarely look beyond the dashboard summary.

The social desirability bias. Users report preferences that make them look competent, sophisticated, or responsible. They say they would use security features because security consciousness is socially valued, but their actual behavior shows they skip two-factor authentication when given the option.

The availability bias. Users can only report on features and experiences they can imagine. When asked what they want, they describe variations of what already exists because they cannot envision solutions to problems they have not yet recognized.

The rationalization bias. Users construct logical-sounding explanations for decisions that were actually driven by emotion, habit, or heuristic. When asked why they chose one product over another, they cite features and specifications, but their actual decision was driven by visual appeal, familiarity, or the recommendation of a friend.

The Research Method Hierarchy

Not all research methods are equally susceptible to the Dunning-Kruger Effect. Understanding the vulnerability of each method helps you design research programs that produce actionable insight rather than misleading conviction.

Highest Vulnerability: Self-Report Surveys

Surveys asking users about their preferences, intentions, or hypothetical behavior are the most vulnerable to the Dunning-Kruger Effect. Users cannot accurately predict their own behavior, and the structured format of surveys creates an illusion of precision that masks this inaccuracy. A survey result showing that 73 percent of users want feature X feels quantitative and reliable. In reality, it may reflect what 73 percent of users think they should want rather than what they would actually use.

Moderate Vulnerability: Interviews and Focus Groups

Qualitative methods allow for follow-up questions that can probe beneath surface-level responses, but they introduce social dynamics that amplify the Dunning-Kruger Effect. Users are more likely to present themselves as competent and thoughtful in face-to-face settings. Focus groups add the additional distortion of social conformity, where participants adjust their responses to align with the group consensus.

Lower Vulnerability: Observational Studies

Watching users interact with a product in real or simulated environments reveals behavior that self-report methods miss. Users cannot misreport behavior that is directly observed. However, the act of observation can itself alter behavior, a phenomenon known as the Hawthorne effect, so the researcher must distinguish natural behavior from behavior performed for an audience.

Lowest Vulnerability: Behavioral Data Analysis

Analyzing actual usage data, click patterns, conversion funnels, and engagement metrics bypasses the Dunning-Kruger Effect entirely because it measures behavior without asking users to report or predict it. This is the gold standard for understanding what users actually do. The limitation is that it cannot tell you why they do it or what they might do in response to changes that have not yet been made.
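As a concrete illustration of this kind of analysis, a conversion funnel can be computed directly from raw event logs in a few lines. This is a minimal sketch; the user IDs, event names, and funnel steps are invented for the example:

```python
# Hypothetical event log: (user_id, event) pairs exported from product analytics.
events = [
    ("u1", "visit"), ("u1", "search"), ("u1", "add_to_cart"), ("u1", "purchase"),
    ("u2", "visit"), ("u2", "search"),
    ("u3", "visit"), ("u3", "search"), ("u3", "add_to_cart"),
    ("u4", "visit"),
]

# Funnel steps in order; step names are assumptions for this sketch.
FUNNEL = ["visit", "search", "add_to_cart", "purchase"]

def funnel_counts(events, steps):
    """Count distinct users who reached each funnel step."""
    users_per_step = {step: set() for step in steps}
    for user, event in events:
        if event in users_per_step:
            users_per_step[event].add(user)
    return [len(users_per_step[step]) for step in steps]

counts = funnel_counts(events, FUNNEL)
for step, n in zip(FUNNEL, counts):
    print(f"{step:12s} {n}")
# visit 4, search 3, add_to_cart 2, purchase 1
```

No user was asked anything here: the drop-off between steps is revealed preference in its purest form, which is exactly why this sits at the bottom of the vulnerability hierarchy.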

Designing Research That Accounts for the Bias

The solution is not to abandon self-report methods but to use them in ways that minimize the Dunning-Kruger Effect and to triangulate findings with behavioral data.

Ask about the past, not the future. Instead of asking "Would you use feature X?" ask "Tell me about the last time you encountered this problem." Past behavior is a more reliable predictor of future behavior than stated intention, and users can report past behavior with reasonable accuracy when the question is specific and recent.

Ask about behavior, not preferences. Instead of asking "Which feature is most important to you?" ask "Walk me through what you did yesterday in the product." Behavioral questions reveal actual priorities through time allocation rather than through self-assessment.

Use forced trade-offs. Instead of asking users to rate features independently, which allows everything to be rated as important, force trade-offs: "If you could have only one of these three features, which would you choose?" Trade-off designs bypass the tendency to overrate features by requiring users to reveal their actual priorities through choice.
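Analyzing a forced-choice question is deliberately simple: because each respondent could pick only one option, a raw tally already reveals priorities. A minimal sketch with made-up feature names and responses:

```python
from collections import Counter

# Hypothetical responses: each respondent chose exactly one of three features.
responses = [
    "export_api", "dark_mode", "export_api", "bulk_edit",
    "export_api", "dark_mode", "export_api", "bulk_edit", "export_api",
]

tally = Counter(responses)
total = len(responses)
for feature, votes in tally.most_common():
    print(f"{feature:12s} {votes}  ({votes / total:.0%})")
# export_api 5 (56%), dark_mode 2 (22%), bulk_edit 2 (22%)
```

Contrast this with independent 1-to-5 ratings, where all three features would likely cluster near "important" and the ranking would be noise.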

Triangulate with behavioral data. For every qualitative finding, look for behavioral confirmation. If users say they want better search functionality, check whether they actually use the current search functionality. If usage is low, the stated preference may reflect aspiration rather than genuine need.

Observe the gap as data. When stated preferences and revealed preferences diverge, that divergence is itself valuable data. It tells you something about user aspirations, mental models, and self-image that can inform product positioning and marketing even if it should not drive feature development.
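The search example above translates directly into a triangulation check: take the set of users who stated the preference and measure what fraction of them exhibit the corresponding behavior. A minimal sketch with hypothetical user IDs:

```python
# Hypothetical data: survey respondents vs. 30-day usage logs for the same users.
stated_want_better_search = {"u1", "u2", "u3", "u4", "u5"}  # asked for better search
used_current_search = {"u1", "u9"}                           # actually used search at all

overlap = stated_want_better_search & used_current_search
usage_rate = len(overlap) / len(stated_want_better_search)
print(f"{usage_rate:.0%} of users who asked for better search actually use search")
# A low rate suggests the stated preference is aspirational rather than a revealed need.
```

The number itself is not a verdict; it is a prompt for a follow-up question. A 20 percent overlap could mean the feature is aspirational, or that the current search is so bad users gave up on it, and only observation can distinguish the two.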

The Expertise Paradox in Power Users

The Dunning-Kruger Effect has a lesser-known second component: experts tend to slightly underestimate their competence. In user research, this means that power users and expert users are often more accurate in their self-assessments but less confident in their recommendations. They are more likely to say "I'm not sure if others would use this" and more likely to be right about what they personally need.

This creates a systematic bias in user research: the most confident voices in your research are often the least reliable, while the most tentative voices are often the most insightful. Research programs that weight feedback by confidence are inadvertently amplifying the least accurate data. Teams that develop mechanisms to surface and weight expert-user feedback, despite its lower confidence, often produce more accurate product insights.
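To make the weighting problem concrete, here is a toy calculation with invented numbers, in which confident novices overestimate adoption and tentative power users are better calibrated. The figures and the 2x expertise weight are assumptions for the sketch, not empirically derived values:

```python
# Hypothetical feedback rows: (predicted_adoption, self_reported_confidence, is_power_user)
feedback = [
    (0.9, 0.95, False),  # novice: very confident, poorly calibrated
    (0.8, 0.90, False),
    (0.3, 0.40, True),   # power user: tentative, well calibrated
    (0.2, 0.35, True),
]

def weighted_mean(rows, weight):
    """Weighted average of predicted adoption under a given weighting scheme."""
    total = sum(weight(r) for r in rows)
    return sum(r[0] * weight(r) for r in rows) / total

# Scheme 1: weight each voice by its own confidence.
by_confidence = weighted_mean(feedback, lambda r: r[1])
# Scheme 2: weight power users double, regardless of confidence (assumed 2x weight).
by_expertise = weighted_mean(feedback, lambda r: 2.0 if r[2] else 1.0)

print(f"confidence-weighted estimate: {by_confidence:.2f}")  # ~0.68
print(f"expertise-weighted estimate:  {by_expertise:.2f}")   # 0.45
```

In this toy example the confidence-weighted estimate is pulled toward the least reliable respondents, which is the amplification problem described above in miniature.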

The Uncomfortable Truth About User-Centered Design

The Dunning-Kruger Effect does not invalidate user research. But it demands that we approach user feedback with the same scientific rigor we would apply to any data source with known systematic biases. We do not take self-reported dietary data at face value in nutritional research. We should not take self-reported preferences at face value in product research.

The discomfort lies in the implication: if users cannot accurately report what they want, then "user-centered design" requires going beyond what users say and understanding what users do, what they struggle with, and what they gravitate toward when no one is asking them to self-assess. The best user research does not ask users to be experts on themselves. It observes users being themselves and draws its own conclusions.

Atticus Li

Experimentation and growth leader. Builds AI-powered tools, runs conversion programs, and writes about economics, behavioral science, and shipping faster.