When the Data Contradicts User Feedback: A PM's Guide to Conflicting Signals

By Rasp Team

Users say your onboarding is confusing.

The data shows an 85% completion rate.

Who's right?


Users demand a dark mode.

Usage data suggests only 3% would actually use it.

Do you build it?


Five customers swear they'd pay more for enterprise features.

Pricing experiments show demand is elastic around your current tiers.

Do you raise prices?


Welcome to product management: the art of making decisions when nothing is certain and everything contradicts.

Data and feedback should align. When they don't, most PMs do one of three things:

  1. Trust data blindly and ignore qualitative feedback
  2. Trust users and dismiss the numbers
  3. Freeze completely, unable to decide

All three are wrong.

Here's how to navigate conflicting signals like an expert.


Why Data and Feedback Conflict

Before solving the problem, understand why it happens.

Reason 1: Stated vs. Revealed Preferences

What users say ≠ What users do

Example:

  • Users say: "I'd use a meal planning feature."
  • Users do: Never open it after the first week

Why this happens: People are bad at predicting their own behavior. They tell you what they think they should want, not what they actually use.

Reason 2: Vocal Minority

Loud feedback ≠ Representative feedback

Example:

  • Feedback: 20 users in Slack say feature is broken
  • Data: 5,000 users use it daily without issue

Why this happens: Unhappy users are far more likely to complain than happy ones. The silent majority doesn't give feedback.

Reason 3: Different Time Horizons

Current data ≠ Future behavior

Example:

  • Feedback: "We'd switch to you if you had SSO"
  • Data: Current users don't use SSO

Why this happens: Data shows current state. Feedback reveals potential state.

Reason 4: Selection Bias

Who you talk to ≠ All users

Example:

  • Feedback: Power users want advanced features
  • Data: 80% of users use 20% of features

Why this happens: You tend to interview engaged users. They're not representative.

Reason 5: Measurement Problems

What you measure ≠ What matters

Example:

  • Feedback: "Feature is hard to use"
  • Data: Feature usage is high

Why this happens: You're measuring usage, not satisfaction or efficiency.

Reason 6: Context Missing

Numbers without context ≠ Full picture

Example:

  • Feedback: "Search doesn't work"
  • Data: 60% of searches return results

Why this happens: A 60% success rate is terrible for search. Data without a benchmark is meaningless.


The Framework: How to Reconcile Conflicts

When data and feedback clash, use this four-step process:

Step 1: Validate Both Signals

Don't dismiss either immediately. Assume both are telling you something true.

For the data:

  • Is the metric right? (Tracking what actually matters)
  • Is the sample representative? (Not biased)
  • Is the timeframe appropriate? (Enough data)
  • Are there confounding factors? (Other changes happening)

For the feedback:

  • Who said this? (Customer segment)
  • How many said it? (Frequency)
  • What's the context? (Specific situation)
  • Are they actually affected? (Or speculating)

Step 2: Look for Hidden Explanations

Often conflicts exist because you're missing information.

Ask:

  • Are they measuring different things?

    • Data: overall usage
    • Feedback: satisfaction with usage
  • Are they measuring different segments?

    • Data: all users
    • Feedback: enterprise customers only
  • Are they measuring different success criteria?

    • Data: completion rate
    • Feedback: time to complete (efficiency)
  • Is there a survivorship bias?

    • Data: from people who stayed
    • Feedback: from people who churned

Step 3: Dig Deeper

When simple explanations don't resolve it, investigate:

For conflicting positive data + negative feedback:

Example: High usage but users complain

Investigate:

  • Are users using it because they have to, not because they want to?
  • Is the feature creating work they then have to resolve?
  • Is usage high but satisfaction low?

Method: Add a satisfaction survey right inside the feature

For conflicting low data + positive feedback:

Example: Low usage but users love it

Investigate:

  • Is it hard to discover?
  • Is it only valuable for a specific use case?
  • Is it replacing an external tool?

Method: Interview users who love it. What makes it valuable?

Step 4: Run Experiments to Resolve

When you can't figure it out, test.

A/B test the thing:

  • Data says X, feedback says Y
  • Test both hypotheses
  • Let reality decide

Example:

  • Feedback: "Make the button bigger"
  • Data: Current button has 15% click rate
  • Test: Bigger button vs. current
  • Result: No statistically significant difference

Conclusion: Button size isn't the problem. Dig deeper.
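
If you want to sanity-check a result like this yourself, here's a minimal sketch of a standard two-proportion z-test; the sample sizes and click counts are illustrative, not real data:

  import math

  # Illustrative numbers: 5,000 users saw each variant.
  control_clicks, control_n = 750, 5000   # current button: 15.0% CTR
  variant_clicks, variant_n = 782, 5000   # bigger button: 15.6% CTR

  p1 = control_clicks / control_n
  p2 = variant_clicks / variant_n

  # Pooled two-proportion z-test.
  p_pool = (control_clicks + variant_clicks) / (control_n + variant_n)
  se = math.sqrt(p_pool * (1 - p_pool) * (1 / control_n + 1 / variant_n))
  z = (p2 - p1) / se

  # Two-sided p-value from the normal CDF (via erf, so no extra dependencies).
  p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

  print(f"CTR {p1:.1%} vs {p2:.1%}, z = {z:.2f}, p = {p_value:.3f}")
  # p is well above 0.05 here, so button size alone doesn't move clicks.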


Common Conflict Scenarios (And How to Resolve Them)

Scenario 1: High Usage, Low Satisfaction

The conflict:

  • Data: Feature used by 60% of users weekly
  • Feedback: "This feature is painful to use"

What's actually happening:

Users are forced to use it (workflow requirement) but hate the experience.

How to resolve:

  1. Measure beyond usage: Add CSAT or effort score
  2. Watch sessions: See how painful it actually is
  3. Calculate hidden cost: Time wasted, workarounds created

Decision framework:

  • If feature is critical but painful → Improve UX
  • If feature is unnecessary but habitual → Remove and replace

Real example: PM saw high usage of manual export feature. Users hated it but had no choice. Built automated export. Usage dropped (good), satisfaction soared.
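
If you log in-app satisfaction responses, a sketch like this (the tables and the 1-5 CSAT scale are hypothetical) surfaces the high-usage, low-satisfaction pocket:

  import pandas as pd

  # Hypothetical data: weekly feature usage joined with in-app CSAT responses.
  usage = pd.DataFrame({
      "user_id": [1, 2, 3, 4, 5, 6],
      "weekly_uses": [12, 9, 0, 15, 1, 8],
  })
  csat = pd.DataFrame({
      "user_id": [1, 2, 4, 5, 6],
      "csat": [2, 1, 2, 4, 5],  # 1-5 scale
  })

  df = usage.merge(csat, on="user_id", how="inner")
  df["heavy_user"] = df["weekly_uses"] >= 5  # placeholder threshold

  # Low average CSAT among heavy users is the "hidden problem" signature.
  print(df.groupby("heavy_user")["csat"].mean())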

Scenario 2: Requested Feature, Weak Usage Projection

The conflict:

  • Feedback: "I'd use feature X every day!"
  • Data: Similar features have 5% adoption

What's actually happening:

Users think they want something they won't actually use.

How to resolve:

  1. Ask behavioral questions: "How do you solve this today?"
  2. Check frequency: "When's the last time you needed this?"
  3. Look for workarounds: Are they actively working around the missing feature?

Decision framework:

  • If they have active workarounds → Real need, build it
  • If they can't describe current pain → Hypothetical need, skip it

Real example: Users requested batch editing. PM asked: "Show me the last time you needed to edit multiple items." Most couldn't. Those who could showed manual process. Built for the few with real pain.

Scenario 3: Low Usage of "Critical" Feature

The conflict:

  • Feedback: "This feature is essential!"
  • Data: Only 10% of users touch it

What's actually happening:

The feature is essential to a specific segment, not to the broad user base.

How to resolve:

  1. Segment the data: Who are the 10%?
  2. Check revenue impact: Are they high-value customers?
  3. Assess strategic importance: Does this segment matter?

Decision framework:

  • If 10% = enterprise customers → Keep and enhance
  • If 10% = random low-value users → Consider deprecating

Real example: Advanced reporting was used by 8% of users, but those 8% generated 60% of revenue. The PM not only kept it but prioritized improvements.
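
The segmentation step is a few lines of pandas; the users and revenue figures below are made up for illustration:

  import pandas as pd

  # Hypothetical data: who touches the feature, and what they pay per year.
  users = pd.DataFrame({
      "user_id":        [1, 2, 3, 4, 5],
      "uses_feature":   [True, False, False, False, False],
      "annual_revenue": [60_000, 9_000, 11_000, 8_000, 12_000],
  })

  feature_rev = users.loc[users["uses_feature"], "annual_revenue"].sum()
  share_users = users["uses_feature"].mean()
  share_rev = feature_rev / users["annual_revenue"].sum()

  # A small slice of users can carry most of the revenue.
  print(f"{share_users:.0%} of users generate {share_rev:.0%} of revenue")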

Scenario 4: Negative Feedback, Positive Metrics

The conflict:

  • Feedback: "Onboarding is confusing"
  • Data: 80% activation rate

What's actually happening:

Users succeed despite the confusion, or the metric measures the wrong thing.

How to resolve:

  1. Check efficiency: Are they succeeding quickly or slowly?
  2. Measure effort: Add CES (Customer Effort Score)
  3. Watch sessions: See the confusion in action

Decision framework:

  • High activation + high effort → Improve UX, preserve success
  • High activation + low effort → Feedback is from a vocal minority

Real example: Onboarding had 75% completion but users complained. Session replays showed users succeeded but backtracked constantly. Simplified flow. Completion went to 85%, complaints dropped.
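
To check efficiency alongside the rate, a minimal sketch (with made-up activation timings) contrasts the headline number with the effort behind it:

  import pandas as pd

  # Hypothetical data: activation outcome and time-to-activate in minutes
  # (None = never activated).
  df = pd.DataFrame({
      "activated": [True] * 8 + [False] * 2,
      "minutes_to_activate": [4, 6, 25, 31, 5, 40, 28, 7, None, None],
  })

  print(f"Activation rate: {df['activated'].mean():.0%}")
  # A healthy rate can hide a painful tail of slow, backtracking sessions.
  print(df["minutes_to_activate"].describe().loc[["50%", "75%", "max"]])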

Scenario 5: Competitor Feature Requests vs. Usage Data

The conflict:

  • Feedback: "You need [competitor feature] or we'll switch"
  • Data: Competitor users rarely use that feature

What's actually happening:

The feature is table stakes during evaluation, not something buyers use daily.

How to resolve:

  1. Talk to competitor users: How often do they actually use it?
  2. Check deal losses: Are we actually losing deals over this?
  3. Assess cost vs. value: What's ROI of building it?

Decision framework:

  • If you're losing deals → Build a minimum version to check the box
  • If not losing deals → Differentiate elsewhere

Real example: Requests for Gantt charts. Competitor data showed <5% usage. Losing zero deals. Built timeline view instead (simpler, more useful).


The Data-Feedback Matrix

Use this to categorize conflicts:

                           User Feedback
                   Positive      |      Negative
            +--------------------+--------------------+
Data   High |        Win         |   Hidden Problem   |
            +--------------------+--------------------+
       Low  |  Discovery Issue   |        Kill        |
            +--------------------+--------------------+

Quadrant 1: High Data + Positive Feedback = Win

What it means: Feature is working. Don't touch it.

Action: Celebrate and move on.

Quadrant 2: High Data + Negative Feedback = Hidden Problem

What it means: People use it but hate it. There's pain beneath the surface.

Action: Improve UX, reduce friction, measure satisfaction not just usage.

Quadrant 3: Low Data + Positive Feedback = Discovery Issue

What it means: Great feature that nobody finds or understands.

Action: Improve discoverability, onboarding, or positioning.

Quadrant 4: Low Data + Negative Feedback = Kill

What it means: Nobody uses it and nobody likes it.

Action: Deprecate and move on.
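
If you track both signals per feature, a tiny classifier makes the matrix operational; the 30% adoption and 3.5 CSAT thresholds below are placeholders you'd tune to your product:

  # Map (usage, sentiment) onto the four quadrants.
  QUADRANTS = {
      (True, True):   "Win: leave it alone",
      (True, False):  "Hidden problem: improve UX, measure satisfaction",
      (False, True):  "Discovery issue: improve onboarding and positioning",
      (False, False): "Kill: deprecate and move on",
  }

  def classify(adoption_rate: float, avg_csat: float) -> str:
      high_usage = adoption_rate >= 0.30   # placeholder threshold
      positive_feedback = avg_csat >= 3.5  # placeholder threshold
      return QUADRANTS[(high_usage, positive_feedback)]

  print(classify(adoption_rate=0.60, avg_csat=2.1))  # hidden problem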


Advanced Techniques

Technique 1: Cohort Analysis

Problem: Overall data hides segment-specific patterns.

Solution: Break data by user segments.

Example:

  • Overall: 50% feature adoption
  • Segmented:
    • Enterprise: 90% adoption (love it)
    • SMB: 20% adoption (ignore it)

Insight: Feature is enterprise winner, SMB distraction.
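
In pandas, this segmentation is one groupby; the plan labels and adoption flags below are invented to mirror the example:

  import pandas as pd

  # Hypothetical per-user adoption flags with a plan segment.
  df = pd.DataFrame({
      "plan": ["enterprise"] * 30 + ["smb"] * 40,
      "adopted": [True] * 27 + [False] * 3 + [True] * 8 + [False] * 32,
  })

  # The overall number hides the split between segments.
  print(f"Overall: {df['adopted'].mean():.0%}")   # 50%
  print(df.groupby("plan")["adopted"].mean())     # enterprise 90%, smb 20%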

Technique 2: Time-Series Analysis

Problem: Current snapshot doesn't show trajectory.

Solution: Look at trends over time.

Example:

  • Today: 40% usage (feels low)
  • Trend: Was 10% three months ago (strong growth)

Insight: Feature is gaining traction. Give it time.
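
A sketch with hypothetical monthly snapshots makes the trajectory visible:

  import pandas as pd

  # Hypothetical monthly adoption snapshots.
  adoption = pd.Series(
      [0.10, 0.18, 0.28, 0.40],
      index=pd.period_range("2024-01", periods=4, freq="M"),
  )

  # Month-over-month change: a "low" 40% reads differently when it was
  # 10% a quarter ago.
  print(adoption.diff())
  print(f"Latest: {adoption.iloc[-1]:.0%}, 3 months ago: {adoption.iloc[0]:.0%}")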

Technique 3: Correlation Analysis

Problem: You don't know whether the feature drives outcomes.

Solution: Check correlation with success metrics.

Example:

  • Feature usage: 30% of users
  • Retention: Users who use it have 80% retention vs. 50% baseline

Insight: Low usage but high impact. Increase adoption.
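
Here's a minimal sketch of that comparison with invented per-user flags; the usual caveat applies: correlation isn't causation, so confirm with an experiment before investing heavily:

  import pandas as pd

  # Hypothetical flags: did the user touch the feature, did they retain?
  df = pd.DataFrame({
      "uses_feature": [True] * 30 + [False] * 70,
      "retained":     [True] * 24 + [False] * 6     # 80% among feature users
                    + [True] * 35 + [False] * 35,   # 50% among non-users
  })

  print(df.groupby("uses_feature")["retained"].mean())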

Technique 4: Inverse Analysis

Problem: You focus on users who use the feature, not on those who don't.

Solution: Study non-users.

Example:

  • Interview users who don't use feature
  • Ask: "Why not? What's missing? Do you need this?"
  • Often you'll find they don't need it (and that's fine)

Insight: Low usage might be correct. Not everyone needs everything.
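
Building the non-user interview list is a simple anti-join; the table names here are hypothetical:

  import pandas as pd

  # Hypothetical tables: all active users vs. the feature's event log.
  users = pd.DataFrame({"user_id": [1, 2, 3, 4, 5]})
  feature_events = pd.DataFrame({"user_id": [1, 4, 4, 1]})

  # Anti-join: active users with zero feature events.
  non_users = users[~users["user_id"].isin(feature_events["user_id"])]
  print(non_users)  # candidates for the "why not?" interviews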


Red Flags That You're Misinterpreting Signals

Red Flag 1: Cherry-Picking Data

What it looks like: Only citing data that supports your preferred decision.

Example: "3 enterprise customers want this" (ignoring that 50 SMB customers don't)

Fix: Look at complete picture. Seek disconfirming evidence.

Red Flag 2: Dismissing All Feedback as "Not Representative"

What it looks like: "These are just outliers" for every piece of feedback.

Example: Ignoring all qualitative input because "data is objective"

Fix: If 5+ people say the same thing, investigate. There's signal there.

Red Flag 3: Trusting Surveys Over Behavior

What it looks like: Building based on "I would use this" survey responses.

Example: 80% say they'd use the feature, but similar features have 10% adoption.

Fix: Weight revealed preference (behavior) over stated preference (surveys).

Red Flag 4: Ignoring Small Segments with High Value

What it looks like: "Only 5% of users want this, so we won't build it."

Example: That 5% represents 60% of revenue.

Fix: Weight by strategic value, not headcount.


Decision Frameworks for Conflicting Signals

Framework 1: Revenue Impact

When data and feedback conflict, follow the money:

  • Which signal represents more revenue?
  • Which reduces churn of high-value customers?
  • Which unlocks expansion?

Framework 2: Strategic Alignment

When data and feedback conflict, follow strategy:

  • Which signal aligns with where you're headed?
  • Which supports your ideal customer profile (ICP)?
  • Which builds the future you want?

Framework 3: Effort vs. Impact

When data and feedback conflict, calculate ROI:

  • Test the feedback hypothesis if it's cheap
  • Trust the data if change is expensive
  • Do both if you can afford it

Framework 4: Risk Assessment

When data and feedback conflict, assess downside:

  • What's worst case if feedback is right and you ignore it?
  • What's worst case if data is right and you follow feedback?
  • Choose lower-risk option.


Real-World Example: Putting It All Together

Scenario:

Users complain that search is broken. Data shows a 75% search success rate.

Step 1: Validate signals

  • Feedback: 30 support tickets in 60 days
  • Data: 75% of searches return results (seems okay?)

Step 2: Look for hidden explanations

  • A 75% success rate is actually terrible for search (leading search engines have trained users to expect near-perfect results)
  • The support tickets come from power users (a high-value segment)

Step 3: Dig deeper

  • Watch session replays: Users try 3-4 searches to find what they need
  • Interview complainers: They're searching for things that do exist in the product
  • Check: Search only indexes titles, not content

Step 4: Synthesize

  • Data is technically correct: 75% of searches return something
  • Feedback is behaviorally correct: Search requires too many attempts
  • Root cause: Search indexes titles only, so "returns results" is the wrong quality bar

Decision:

Expand search to full-text (not just titles). Success rate went to 92%. Support tickets dropped 80%.

Lesson: Both signals were right. The data measured the wrong thing.
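
For the curious, here's a rough sketch of the log analysis behind Step 3, assuming a hypothetical search log keyed by session:

  import pandas as pd

  # Hypothetical search log: one row per query, flagged if the user
  # clicked a result.
  log = pd.DataFrame({
      "session_id":     [1, 1, 1, 2, 3, 3, 3, 3],
      "clicked_result": [False, False, True, True, False, False, False, True],
  })

  # "Returned results" can look fine while users grind through retries.
  print(f"Avg queries per session: {log.groupby('session_id').size().mean():.1f}")
  first_try = log.groupby("session_id")["clicked_result"].first().mean()
  print(f"Sessions succeeding on the first query: {first_try:.0%}")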


Final Thought

Data and feedback aren't enemies.

They're complementary lenses on the same reality.

When they conflict, don't pick sides.

Dig deeper.

The conflict is usually telling you something important that neither signal reveals alone.

Start by investigating, not deciding.

The answer is in the contradiction.