The 5-Screen MVP: Which Interfaces Actually Matter for Your First User Test
Across 200+ early-stage products, we've seen a pattern: founders waste weeks designing screens nobody uses in early tests. Here are the only 5 MVP design screens that actually matter for validation — and why most MVPs are over-designed, not under-designed.
Here's what happens in 80% of MVP projects we see: a founder comes to us with a Figma file containing 30+ screens. Onboarding flows with 8 steps. Settings pages with tabs. Profile customization. Dark mode.
None of it has been tested with a single user.
The irony? Most MVPs fail not because they're too minimal, but because they're too complete. Founders design features users never asked for, screens that never get clicked, and flows that solve problems that don't exist yet.
After designing 200+ early-stage products — including MVPs for YC companies that went on to raise Series A+ — we've identified a pattern. There are exactly 5 screens that matter for your first user test. Everything else is either premature optimization or procrastination disguised as preparation.
The Core Principle: Test Value, Not Features
Your MVP exists to answer one question: does this solve a painful enough problem that someone will change their behavior? That requires showing value, capturing intent, and creating a feedback loop. That's it.
Most founders confuse "minimum viable product" with "feature-complete product that looks minimal." They strip out advanced features but keep the full interface architecture — login flows, dashboards, settings, help docs. The screen count stays high even as the features shrink.
Wrong approach. Your first user test doesn't need a complete product. It needs proof that your core value proposition resonates. Here's what that actually looks like:
Screen 1: The Value Prop Hero
This is your "what you get in 8 seconds" screen. When a test user lands here, they should immediately understand: what this does, who it's for, and why they should care.
Not "welcome to our platform." Not your company origin story. Your core promise, stated clearly enough that someone interrupting their day to test your MVP knows whether to keep going.
What to include:
- One headline that states the outcome ("Turn support tickets into product insights" not "AI-powered analytics platform")
- One sub-headline that adds specificity ("Automatically categorize and prioritize feature requests from Zendesk, Intercom, or plain text")
- One primary CTA that reflects your test goal ("Analyze your tickets" if testing activation, "Join waitlist" if testing demand)
- Social proof IF you have it (YC badge, early customer logos, or nothing — don't fake it)
What to skip: Navigation menus, footer links, pricing toggles, feature comparison tables. Your test user isn't evaluating your entire product suite. They're deciding whether to engage at all.
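If you're building the test as a lightweight coded prototype rather than a clickable Figma file, this screen can be a single component. A minimal sketch, assuming the ticket-analysis example above; the component name, copy, and onStart handler are all illustrative:

```tsx
import * as React from "react";

// Screen 1: value prop hero. One headline, one sub-headline, one CTA.
// No nav, no footer, no pricing toggles.
type HeroProps = {
  onStart: () => void; // wire to your test goal: activation or waitlist
};

export function ValuePropHero({ onStart }: HeroProps) {
  return (
    <main>
      <h1>Turn support tickets into product insights</h1>
      <p>
        Automatically categorize and prioritize feature requests from Zendesk,
        Intercom, or plain text.
      </p>
      <button onClick={onStart}>Analyze your tickets</button>
      {/* Social proof only if it's real; otherwise render nothing here. */}
    </main>
  );
}
```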
We've run tests where this screen alone — no signup, no product access — got 40%+ click-through to the next step. When the value prop is clear, users self-select. When it's vague, they bounce, and no amount of beautiful onboarding will save you.
Screen 2: The Activation Moment
This is where users experience your core value for the first time. Not a tutorial. Not a demo video. The actual thing.
For a productivity tool: the moment they check off their first task. For a developer tool: the moment they see data flowing through their first API call. For an AI product: the moment they get their first generated output.
In our experience with AI startups, this screen is where most MVPs fail. Founders build elaborate prompt interfaces, settings panels, and output formatters before validating that the core generation quality is even good enough. Users hit the activation screen, get mediocre output, and never come back.
Design this screen to minimize time-to-value:
- Pre-populate example data or a sample input so users can see results in under 30 seconds
- Remove every optional field and every dropdown that isn't critical to core functionality
- Show the output/result ABOVE the fold if possible
- Include one, maybe two, interaction points maximum
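For teams testing with a coded prototype, here's a minimal sketch of that shape: one pre-populated field, one button, output rendered above the fold. The generateInsights prop and sample text are placeholders, not a real API:

```tsx
import * as React from "react";
import { useState } from "react";

// Screen 2: activation. One pre-populated input, result shown above the fold.
const SAMPLE_TICKET =
  "The export button is hidden three menus deep. Please make it one click.";

type Props = {
  // Placeholder for whatever produces your core output (an API call, a model, etc.)
  generateInsights: (text: string) => Promise<string>;
};

export function ActivationScreen({ generateInsights }: Props) {
  const [input, setInput] = useState(SAMPLE_TICKET); // smart default: runnable untouched
  const [result, setResult] = useState<string | null>(null);

  return (
    <main>
      {/* Output first, so the result sits above the fold once it exists. */}
      {result && <section>{result}</section>}
      <textarea value={input} onChange={(e) => setInput(e.target.value)} />
      <button onClick={async () => setResult(await generateInsights(input))}>
        Analyze
      </button>
    </main>
  );
}
```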
One of our YC companies initially had a 6-field form before showing results. We cut it to one field with smart defaults. Time to first value went from 4 minutes to 22 seconds. Activation rate jumped from 31% to 67%.
The rule: if a user can't experience your core value in under 60 seconds on this screen, you're testing the wrong thing.
Screen 3: The Empty State
This is the screen users see after they've activated but before they've created anything meaningful. It's criminally under-designed in most MVPs.
Why it matters: this is where you either guide users to their second action (the real retention signal) or lose them to confusion. Most founders treat empty states as an afterthought — a blank canvas with maybe an "Add New" button.
In our portfolio, products with well-designed empty states see 2-3x higher D1 retention than those with lazy "no data yet" screens.
What makes an empty state work:
- Contextual guidance that explains what to do next ("Import your first dataset" not "No projects yet")
- One ultra-clear CTA that maps to your core loop ("Connect Stripe" for a fintech tool, "Add repository" for a dev tool)
- Optional: sample data or a template that lets users skip the work and see the end state
We designed an empty state for a product analytics startup that showed a sample dashboard with fake data and a "Make this yours" button. 40% of users who saw it clicked through to connect their actual data. The previous empty state ("Create your first dashboard") got 11% click-through.
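If you're prototyping in code, that pattern is small. A hedged sketch, with the sample rows and onConnect handler standing in for your own data and connect flow:

```tsx
import * as React from "react";

// Screen 3: empty state that previews the end state instead of a blank canvas.
const SAMPLE_ROWS = [
  { metric: "Weekly active users", value: "1,204" },
  { metric: "Feature requests tagged", value: "87" },
];

export function EmptyState({ onConnect }: { onConnect: () => void }) {
  return (
    <main>
      <h2>Here's what your dashboard will look like</h2>
      <table>
        <tbody>
          {SAMPLE_ROWS.map((row) => (
            <tr key={row.metric}>
              <td>{row.metric}</td>
              <td>{row.value}</td>
            </tr>
          ))}
        </tbody>
      </table>
      {/* One ultra-clear CTA that maps to the core loop. */}
      <button onClick={onConnect}>Make this yours</button>
    </main>
  );
}
```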
Empty states are activation accelerators. Treat them like conversion optimization, not placeholder screens.
Screen 4: The Core Loop Interface
This is where users spend 80% of their time in your product. For a project management tool, it's the task list. For a design tool, it's the canvas. For a CRM, it's the deal pipeline.
This is the ONLY screen where you're allowed some complexity. But even here, ruthlessly cut features that don't directly support your core value prop.
What to prioritize:
- The primary action users came here to do (create, edit, view, analyze)
- Minimal navigation to other critical screens (usually just back to Screen 3)
- Feedback mechanisms that make progress visible (saved indicators, live updates, completion states)
What to defer: Customization options, advanced filters, bulk actions, keyboard shortcuts, export functions, sharing controls. All of these are legitimately useful — in month 6. In user test 1, they're distractions.
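Here's what a deliberately stripped-down core loop can look like in a coded prototype, using a task list as the example; component and type names are illustrative, and everything beyond the primary action and a visible progress count is left out on purpose:

```tsx
import * as React from "react";
import { useState } from "react";

// Screen 4: core loop. Primary action (add and complete tasks), visible progress,
// nothing else. Filters, bulk actions, and keyboard shortcuts are deliberately absent.
type Task = { id: number; title: string; done: boolean };

export function CoreLoop() {
  const [tasks, setTasks] = useState<Task[]>([]);
  const [draft, setDraft] = useState("");

  const addTask = () => {
    if (!draft.trim()) return;
    setTasks([...tasks, { id: Date.now(), title: draft.trim(), done: false }]);
    setDraft("");
  };

  const toggle = (id: number) =>
    setTasks(tasks.map((t) => (t.id === id ? { ...t, done: !t.done } : t)));

  return (
    <main>
      <input value={draft} onChange={(e) => setDraft(e.target.value)} />
      <button onClick={addTask}>Add task</button>
      <ul>
        {tasks.map((t) => (
          <li key={t.id}>
            <label>
              <input type="checkbox" checked={t.done} onChange={() => toggle(t.id)} />
              {t.title}
            </label>
          </li>
        ))}
      </ul>
      {/* Feedback mechanism: progress is always visible. */}
      <p>
        {tasks.filter((t) => t.done).length} of {tasks.length} done
      </p>
    </main>
  );
}
```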
One founder came to us with a code editor MVP that had 18 toolbar buttons. We cut it to 4: Run, Save, Share, and Settings (which only controlled font size). Test feedback shifted from "too complicated" to "surprisingly simple." Two users explicitly said they'd pay for it — despite the limited feature set — because it didn't overwhelm them.
Your core loop screen should feel almost too simple. If test users don't ask for at least 2-3 missing features, you've probably over-built it.
Screen 5: The Feedback Capture Point
This is the screen most founders skip entirely, then wonder why their user tests yield vague, useless feedback.
You need a dedicated moment where you ASK users what they think. Not a passive feedback widget in the corner. A full-screen prompt that appears at a logical endpoint in your test flow.
When to show it:
- After users complete one full cycle of your core loop (created → viewed → acted)
- After 5-7 minutes of interaction (before fatigue sets in)
- At a natural stopping point (submitted a form, generated a result, completed a task)
What to ask: Two questions maximum. We've tested dozens of variations. This pair works best:
- "How disappointed would you be if you could no longer use this?" (Very / Somewhat / Not disappointed) — the Sean Ellis test, still the best early signal
- "What's the ONE thing we should build next?" (Open text field) — forces prioritization and reveals what users actually care about
Optional third question if you're testing pricing: "What would you expect to pay for this?" (Open text field) — anchors value perception early.
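If your test prototype is coded, this prompt is only a few lines. A minimal sketch; the trigger point and the submitFeedback handler are placeholders for however you record responses:

```tsx
import * as React from "react";
import { useState } from "react";

// Screen 5: full-screen feedback prompt, shown after one full cycle of the core loop.
type Answers = { disappointment: string; nextFeature: string };

export function FeedbackCapture({
  submitFeedback, // placeholder: post to a form backend, a spreadsheet, wherever
}: {
  submitFeedback: (answers: Answers) => void;
}) {
  const [disappointment, setDisappointment] = useState("");
  const [nextFeature, setNextFeature] = useState("");

  return (
    <main>
      <h2>How disappointed would you be if you could no longer use this?</h2>
      {["Very disappointed", "Somewhat disappointed", "Not disappointed"].map((option) => (
        <label key={option}>
          <input
            type="radio"
            name="disappointment"
            checked={disappointment === option}
            onChange={() => setDisappointment(option)}
          />
          {option}
        </label>
      ))}
      <h2>What's the ONE thing we should build next?</h2>
      <textarea value={nextFeature} onChange={(e) => setNextFeature(e.target.value)} />
      <button onClick={() => submitFeedback({ disappointment, nextFeature })}>Send</button>
    </main>
  );
}
```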
We built a feedback capture screen for an AI writing tool that triggered after users generated their third piece of content. 78% of test users filled it out (vs. ~12% who typically click feedback widgets). The founder got 47 pieces of actionable feedback from 60 users — including 3 feature requests that made it into the actual product roadmap.
This screen is your research infrastructure. Don't skip it.
What About Everything Else?
Founders always ask: what about login? What about settings? What about profiles, notifications, help docs, pricing pages?
Here's the truth: for your FIRST user test, you probably don't need any of that.
Login: Use magic links or "Continue with Google" single-click auth. Don't build a registration flow until you know people want what you're building. We've designed MVPs that didn't have passwords at all — just email-based access tokens. Worked fine for validation.
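As an illustration of how little auth an MVP test needs, here's a rough server-side sketch of the token approach using Node's built-in crypto; sendEmail and the in-memory token store are placeholders you'd swap for your own pieces:

```ts
import { randomBytes } from "node:crypto";

// Passwordless access for a first user test: email a one-time token, skip registration.
const tokens = new Map<string, string>(); // token -> email (in-memory is fine for a test)

// Placeholder: wire this to whatever email sender you already have.
declare function sendEmail(to: string, subject: string, body: string): Promise<void>;

export async function sendMagicLink(email: string): Promise<void> {
  const token = randomBytes(24).toString("hex");
  tokens.set(token, email);
  await sendEmail(email, "Your access link", `https://yourapp.example/login?token=${token}`);
}

export function redeemToken(token: string): string | null {
  const email = tokens.get(token) ?? null;
  tokens.delete(token); // single use
  return email;
}
```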
Settings: Hard-code the defaults. You can add customization after you validate that users return for a second session. Premature settings pages are where MVPs go to die.
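In code, "hard-code the defaults" usually means one constants file instead of a settings screen; a trivial sketch with example values:

```ts
// "Settings" for the first test: hard-coded defaults in one file, no settings UI.
export const DEFAULTS = {
  theme: "light",
  resultsPerPage: 20,
  emailNotifications: false,
} as const;
```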
Profiles: Unless you're building a social product, skip them. Use the email from auth as the identifier. Build profiles when you have enough users that anonymity becomes a problem.
Help/docs: If your MVP needs documentation to be usable, your MVP isn't ready to test. Fix the UX first. Add docs later.
Pricing page: Only if you're explicitly testing willingness to pay. Otherwise, defer it. Pricing pages add cognitive load ("how much does this cost?") before users understand the value. Test value first, price second.
The 5-Screen Test Framework in Practice
Here's what your first user test should look like using this framework:
- Send the test link (to 10-15 people in your target audience, ideally warm intros or community members)
- Users land on Screen 1 — 8 seconds to decide if they care
- They click through to Screen 2 — experience core value in under 60 seconds
- They see Screen 3 — empty state guides them to second action
- They use Screen 4 — complete one full cycle of your core loop
- Screen 5 appears — you capture structured feedback
Total test time: 5-10 minutes. Total screens designed: 5. Total insights gained: enough to know whether to keep building or pivot.
One of our YC companies used this exact framework to test a B2B workflow automation tool. They designed 5 screens in Figma, connected them with basic Webflow interactions (no backend), and ran 12 user tests in 3 days. 9 out of 12 users said they'd be "very disappointed" if they couldn't use it. That signal gave them confidence to build the real product. They're now Series A.
Another founder spent 6 weeks building 40+ screens with full backend logic before testing. First user session: 8 minutes. Feedback: "I don't really get what this is for." They scrapped 80% of it.
The 5-screen MVP isn't about being minimal for minimal's sake. It's about isolating the variable that matters — does your core value proposition resonate? — without drowning it in interface.
When to Add Screen #6
After you've run 10-15 user tests and you're seeing consistent patterns — high activation rates, positive feedback, users asking for specific features — THEN you expand.
The next screens to add, in rough priority order:
- Settings (if users are asking to customize behavior)
- History/Archive view (if users need to reference past actions)
- Collaboration/sharing (if users mention wanting to show it to teammates)
- Onboarding/tutorial (if activation rate is below 60%)
- Pricing page (when you're ready to validate willingness to pay)
But never add a screen just because it "feels like something a complete product should have." Every screen is a promise to maintain, a surface area for bugs, and a potential point of user confusion. Add screens only when user behavior demands them.
We've worked with startups that shipped with 7 screens and raised $2M. We've worked with startups that built 50 screens and pivoted before launching. Screen count is a vanity metric. User clarity is what matters.
The 5-screen MVP framework forces you to clarify your value prop, streamline your activation flow, and capture real feedback — before you've wasted months building features nobody asked for. It's not the complete product. It's the essential product. And in early-stage, essential beats complete every time.
We've designed the first version for 50+ YC and a16z companies
Our MVP design sprints focus on exactly this: identifying your 5 critical screens, designing them for maximum clarity, and setting you up to test fast. We can go from brief to testable prototype in 5-7 days — because we know which screens actually matter.
If you're struggling to decide what to design first, book a 15-minute MVP scoping call. We'll audit your current plan, tell you what to cut, and show you what the 5-screen version looks like for your specific product. No pitch, just a clear roadmap.
