Fake Door Tests: When and How to Test Demand Before Building Anything

By Rasp Team

You're about to spend 3 months building a feature.

Engineering wants to know if anyone will use it.

You say: "Our customers asked for it!"

Engineering says: "Did they say they'd use it, or will they actually use it?"

There's a difference.

People say they want things they'll never use. Surveys lie. Feature requests mislead. Good intentions don't predict behavior.

The solution: Test demand before you build.

Fake door tests (also called "painted door" or "smoke tests") let you measure interest with near-zero engineering effort.

You show users a button, link, or menu item for a feature that doesn't exist yet. If they click, you've validated demand. If they don't, you saved months of wasted work.

But: Fake door tests can go wrong. Do them badly and you frustrate users, damage trust, and get misleading data.

Here's how to do them right.


What Is a Fake Door Test?

The concept: You present a feature as if it exists, but it doesn't — yet.

Examples:

  • Menu item that leads to "coming soon" page
  • Button that opens "we're working on this" modal
  • Landing page for feature you haven't built
  • Email signup for beta access

What you measure:

  • How many people click/try to use it
  • Which segments show interest
  • Whether interest translates to intent

What you learn:

  • Is there real demand?
  • Who wants it most?
  • Is it worth building?

The ethics: Always tell users it doesn't exist yet. Never let them think they did something wrong.


When to Use Fake Door Tests

Good Use Cases

1. Expensive features with uncertain demand

  • Feature would take 3+ months
  • Unclear if enough users want it
  • High opportunity cost

Example: Building a mobile app when you're web-only

Fake door: Add "Download Mobile App" to navigation → see clicks

2. Features customers request but might not use

  • People say they want it
  • Not sure if they'd actually use it
  • Want behavioral data, not stated preference

Example: "We want dark mode!"

Fake door: Add dark mode toggle → measure clicks

3. New product directions

  • Considering expansion into new area
  • Want to validate before committing
  • Need to prioritize across big bets

Example: Adding AI features

Fake door: "AI-Powered Insights" button → gauge interest

4. Differentiating between segments

  • Multiple user types
  • Unclear which segment wants feature
  • Need to prioritize for one segment

Example: Enterprise vs. SMB feature

Fake door: Different messaging for each → compare click rates

Bad Use Cases

1. Core functionality

Don't fake door essential features. Build them.

2. Features you're already building

No point in testing if the decision is already made.

3. Safety or compliance features

Never fake door security, privacy, or required functionality.

4. One-off edge cases

If you're not planning to build it broadly, don't test it.


How to Run a Fake Door Test (Step by Step)

Step 1: Define What You're Testing

Be specific:

  • What feature exactly?
  • Which user segment?
  • What does success look like?

Bad hypothesis: "People want better reporting."

Good hypothesis: "Enterprise users will engage with automated insights if we surface them in the dashboard. We'll validate this if 20%+ of enterprise users click within 2 weeks."

Step 2: Choose Your Implementation

Option A: In-Product Fake Door

Add UI element (button, menu item, card) that looks real.

When to use:

  • Testing with existing users
  • Feature would live in current product
  • Need behavioral data

Example:

Add an "Export to Excel" button → track clicks → show a modal: "We're building this! Join the waitlist."

Option B: Landing Page Test

Create webpage describing feature. Drive traffic. Measure signups.

When to use:

  • Testing with new audience
  • Validating new product/service
  • Need to explain complex value prop

Example:

Create landing page for "AI Writing Assistant" → run ads → measure email signups

Option C: Email/Announcement Test

Send email about new feature. Include CTA. Measure clicks.

When to use:

  • Testing with specific segment
  • Feature needs explanation
  • Want immediate feedback

Example:

Email enterprise customers: "New SSO integration available" → track clicks → reveal "Coming Q2"

Option D: Pricing Page Test

Add feature to higher pricing tier. Measure upgrade attempts.

When to use:

  • Testing willingness to pay
  • Validating premium feature
  • Need revenue signal

Example:

Add "Advanced Analytics" to Enterprise tier → track upgrade clicks

Step 3: Design the "Not Yet" Experience

This is critical. When users click, what happens?

Bad experience:

  • 404 error
  • Generic "Coming Soon"
  • No context
  • No next step

Good experience:

[Modal or page]

We're working on [Feature Name]!

[Brief description of what it will do]

Want early access?
[Email signup form]

[Thank you! We'll notify you when it's ready.]

Or: [Go back] [Learn more]

Key elements:

  1. Acknowledge intent: "You tried to use [feature]"
  2. Explain status: "We're building this"
  3. Offer next step: Signup, survey, or alternative
  4. Set expectations: When it might be available
  5. Provide escape: Easy way back

Example:

📊 Advanced Reporting is Coming!

We're building advanced reporting with custom dashboards, 
automated insights, and export to Excel.

Want to be first to try it?
[Email: ___________] [Notify Me]

Expected launch: Q2 2026

[✕ Close]  [Tell us what you'd want to see in reporting]

Step 4: Set Success Criteria

Before launching, define:

  • Minimum interest threshold: "Need 15% click rate to proceed"
  • Segment requirements: "Need interest from enterprise segment specifically"
  • Time period: "Will test for 2 weeks"
  • Sample size: "Need 100+ impressions minimum"

Example criteria:

  • If >20% of enterprise users click in 2 weeks → Build it
  • If 10-20% click → Build MVP version
  • If <10% click → Don't build, or significantly descope
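Criteria like these are easy to agree on beforehand and easy to argue about afterward, so it helps to write them down as code. A minimal sketch — the 10% and 20% cutoffs are the example criteria above, not universal benchmarks:

```javascript
// Map a measured click rate to the pre-agreed decision tiers.
// Thresholds match the example criteria; set your own before launch.
function decisionFromClickRate(clicks, usersShown) {
  const rate = clicks / usersShown;
  if (rate > 0.20) return 'Build it';
  if (rate >= 0.10) return 'Build MVP version';
  return "Don't build, or significantly descope";
}
```

Committing the thresholds before the test starts keeps the team from rationalizing whatever number comes back.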

Step 5: Implement Tracking

Track:

  • Impressions: How many users saw the fake door
  • Clicks: How many clicked
  • Segment: Which user types clicked
  • Signups: How many joined waitlist (if applicable)
  • Behavior: What they did before/after clicking

Tools:

  • Google Analytics events
  • Mixpanel
  • Amplitude
  • Custom event tracking

Example tracking:

// When the fake door is shown (one event per impression)
analytics.track('Fake Door Shown', {
  feature: 'Advanced Reporting',
  user_segment: 'enterprise',
  location: 'dashboard'
});

// When the user clicks it
analytics.track('Fake Door Clicked', {
  feature: 'Advanced Reporting',
  user_segment: 'enterprise'
});

// When the user submits the waitlist form
// (caution: avoid sending raw email addresses to a third-party
// analytics tool — store the email in your own system and send
// a user ID or hashed value instead)
analytics.track('Fake Door Signup', {
  feature: 'Advanced Reporting',
  user_segment: 'enterprise',
  email: email
});

Step 6: Launch and Monitor

Timeline:

  • Week 1: Launch to 10% of users (test implementation)
  • Week 2: Launch to 100% (full test)
  • Week 3: Analyze results
  • Week 4: Decision

Monitor daily:

  • Click rate by segment
  • User feedback (support tickets, emails)
  • Any bugs or UX issues

Red flags:

  • Users frustrated or confused
  • Negative sentiment in feedback
  • Technical issues skewing data

Step 7: Analyze and Decide

Calculate:

Click-through rate = Clicks / Impressions
Signup rate = Signups / Clicks
Interest rate by segment = Clicks per segment / Total users in segment

Segment the data:

  • Which user types clicked most?
  • Which industries/company sizes?
  • New users vs. power users?
  • Free vs. paid?

Compare to benchmarks:

  • Typical in-app CTA: 5-10% click rate
  • Strong interest: 15%+ click rate
  • Very strong: 25%+ click rate

Decision matrix:

High clicks + High signups = Build it
High clicks + Low signups = Needs better positioning
Low clicks + High signups = Discoverability issue
Low clicks + Low signups = Don't build
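As a sketch, the rates from this step and the four-quadrant matrix above combine into one small function. The 15% and 25% thresholds here are illustrative — use the criteria you set in Step 4:

```javascript
// Turn raw counts into the four-quadrant decision.
// Thresholds are illustrative defaults; agree on yours before launch.
function analyzeFakeDoor({ impressions, clicks, signups },
                         { minCtr = 0.15, minSignupRate = 0.25 } = {}) {
  const ctr = clicks / impressions;
  const signupRate = clicks > 0 ? signups / clicks : 0;
  const highClicks = ctr >= minCtr;
  const highSignups = signupRate >= minSignupRate;

  let decision;
  if (highClicks && highSignups) decision = 'Build it';
  else if (highClicks) decision = 'Needs better positioning';
  else if (highSignups) decision = 'Discoverability issue';
  else decision = "Don't build";

  return { ctr, signupRate, decision };
}
```

Run it per segment, not just on the aggregate — a 10% overall click rate can hide a 50% enterprise rate, as the segmentation mistake below shows.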

Step 8: Close the Loop

After the test:

To users who clicked:

Subject: Update on [Feature Name]

Thanks for your interest in [Feature]!

Based on feedback from you and others, we're [building it / not 
building it right now / building a different version].

[If building]: Here's what to expect and when.
[If not building]: Here's why and what we're doing instead.

Your input shaped this decision. Thank you.

To the team:

  • Share results
  • Explain decision
  • Update roadmap

Real-World Examples

Example 1: Dropbox's Video

What they tested: Would people want cloud file storage?

Fake door: Landing page with video explaining Dropbox + signup

Results: The beta waitlist grew from roughly 5,000 to 75,000 signups overnight

Decision: Build it (Dropbox went on to become a multi-billion-dollar company)

Key insight: Video explained value prop. Signups proved demand.

Example 2: Buffer's Pricing Test

What they tested: Would people pay for scheduled social media posts?

Fake door: Landing page → pricing page → "We're not quite ready yet" + email signup

Results: Hundreds of signups + willingness to see pricing

Decision: Build it (validated paid model before writing code)

Key insight: Testing pricing page showed willingness to pay, not just interest.

Example 3: Zapier's Integration Tests

What they tested: Which integrations to build first?

Fake door: Listed many integrations as "coming soon" → tracked clicks

Results: Clear ranking of most-wanted integrations

Decision: Built top-clicked integrations first

Key insight: Let users vote with behavior, not surveys.

Example 4: Amazon's "Customers who bought this also bought"

What they tested: Would recommendations drive sales?

Fake door: Showed recommendations but didn't track initial clicks systematically

Results: Eventually measured significant lift in cross-selling

Decision: Made it a core feature (recommendations are often reported to drive roughly 35% of Amazon's sales)

Key insight: Small test of user behavior revealed massive opportunity.


Fake Door Test Templates

Template 1: In-App Modal

[Feature Icon]

[Feature Name] is coming soon!

[One sentence describing what it does]

We're building this based on requests from users like you.

Want to be first to try it?

[Email input field]
[Notify me when it's ready]

Expected: [Timeframe]

[Maybe later]  [Tell us what you'd want in this feature →]

Template 2: Landing Page

[Hero Section]
Headline: [Feature Value Prop]
Subheadline: [What it does and who it's for]
[Compelling image/mockup]
[Join Waitlist button]

[Feature Benefits]
• Benefit 1
• Benefit 2  
• Benefit 3

[How It Works]
Step 1: [Description]
Step 2: [Description]
Step 3: [Description]

[Social Proof]
"[Quote from a real beta tester or customer interview — don't invent testimonials]"

[CTA]
Be the first to get [Feature]
[Email signup]

[Footer]
Expected launch: [Quarter/Month]
Questions? [Email]

Template 3: Follow-up Survey

After user clicks fake door:

Thanks for your interest in [Feature]!

Help us build it right:

1. What would you use [Feature] for?
   [Text field]

2. How often would you use this?
   ○ Daily
   ○ Weekly
   ○ Monthly
   ○ Rarely

3. What's most important to you?
   □ [Key aspect 1]
   □ [Key aspect 2]
   □ [Key aspect 3]

4. Would you pay extra for this?
   ○ Yes, as add-on
   ○ Yes, as premium tier
   ○ Only if included
   ○ Not sure

[Submit]

Common Mistakes (And How to Avoid Them)

Mistake #1: No Clear "This Doesn't Exist Yet" Message

Problem: Users think feature is broken.

Result: Support tickets, frustration, lost trust.

Fix: Always explicitly say "We're building this" or "Coming soon."

Mistake #2: Testing Without Clear Success Criteria

Problem: You get data but don't know what it means.

Example: "We got 500 clicks!" (Out of how many users? Is that good?)

Fix: Set threshold before testing: "Need 20% click rate to validate."

Mistake #3: Testing Too Many Things at Once

Problem: Can't isolate what drove interest.

Example: Testing 5 fake doors simultaneously.

Fix: Test one at a time. Or A/B test variations of one feature.

Mistake #4: Leaving Fake Doors Up Forever

Problem: Erodes trust. Users see "coming soon" for months.

Result: "They never ship anything."

Fix: Set test window. Remove after 2-4 weeks. Update users.

Mistake #5: Ignoring Negative Signals

Problem: Interpret any clicks as validation.

Example: "We got 50 clicks!" (Out of 10,000 impressions = 0.5% = not interested)

Fix: Compare to benchmarks. Low click rate = low interest.

Mistake #6: Not Segmenting Results

Problem: Aggregate data hides segment-specific insights.

Example: 10% overall click rate (but 50% from enterprise, 2% from SMB)

Fix: Always segment by user type, plan, industry, etc.

Mistake #7: Testing UI You Won't Actually Build

Problem: You test a polished UI but would ship something different.

Result: Interest in the presentation, not the feature.

Fix: Make fake door match what you'd actually ship.


The Ethics of Fake Door Testing

The Concerns

1. "Isn't this lying to users?"

Not if you're transparent. Tell them it doesn't exist yet.

2. "Won't users be frustrated?"

Only if you do it badly. Good fake doors set clear expectations.

3. "Will this damage trust?"

Only if you:

  • Never build requested features
  • Leave fake doors up forever
  • Don't communicate

The Principles

1. Always reveal immediately

Users should know within one click that the feature doesn't exist.

2. Provide value in the reveal

Let users sign up, give feedback, or learn more.

3. Close the loop

Tell users what you decided and why.

4. Don't overuse

One or two fake doors are research. Ten are deceptive.

5. Test to learn, not to stall

Don't use fake doors to avoid building things you should build.

The Test

Ask: "Would I be okay if users knew this was a test?"

If yes: Probably ethical.

If no: Reconsider approach.


Alternatives to Fake Door Tests

If fake doors don't feel right, try:

1. Concierge Test

Manually provide the service before building it.

Example: Manually create reports users want before automating.

2. Wizard of Oz Test

Build just the UI. Do the work manually behind the scenes.

Example: AI feature that's actually humans in the background.

3. Prototype Test

Build clickable prototype. Test with users.

Example: Figma prototype of new workflow.

4. Landing Page + Ads

Drive traffic to landing page. Measure signups.

Example: Create page for new product. Run Google Ads. See conversion.

5. Pre-orders

Sell it before building it.

Example: "Pre-order at 50% off. Ships in 3 months."


The Decision Tree

Should I fake door test this feature?

Is it expensive to build? (3+ months)
├─ No → Just build it
└─ Yes
    └─ Is demand uncertain?
        ├─ No (clear demand) → Just build it
        └─ Yes
            └─ Can I ethically test it?
                ├─ No (core/safety feature) → Just build it
                └─ Yes
                    └─ Do I have traffic to test?
                        ├─ No → Use landing page test
                        └─ Yes → Use fake door test

Your Fake Door Testing Checklist

Before launch:

  • Clear hypothesis and success criteria
  • Ethical "not yet" experience designed
  • Tracking implemented
  • Segment analysis plan
  • Timeline set (test duration)
  • Team aligned on decision thresholds

During test:

  • Monitor clicks daily
  • Watch for user frustration
  • Check data quality
  • Segment results

After test:

  • Analyze results vs. criteria
  • Make go/no-go decision
  • Close loop with users who clicked
  • Remove fake door
  • Document learnings

Final Thought

The worst product decisions come from building things nobody wants.

The best way to avoid that? Test before building.

Fake door tests aren't about tricking users.

They're about respecting their time — and yours.

A 2-week test saves 3 months of wasted engineering.

But only if you do it right: transparently, ethically, and with clear criteria.

Start small. Test one feature. Learn from real behavior.

Then build what people actually want.

Not what they say they want.