User Interviews
October 24, 2025

5 common user interview mistakes that ruin your research (and how to avoid them)

Even experienced teams make these user interview mistakes. Learn the 5 most common errors that lead to bad insights, and the simple fixes that get you back on track.

You’ve scheduled 10 user interviews. You have your questions ready. You’re excited to finally talk to real users.

Then the interviews happen, and somehow you walk away with… nothing useful. Vague feedback. Feature requests. Opinions that contradict each other.

What went wrong?

Most likely, you fell into one of five common traps that trip up even experienced researchers. Many teams hesitate to invest in user research because of the perceived time and cost, but skipping it usually costs more later in rework and products that miss the mark. The good news? These mistakes are easy to fix once you know what to look for.

This guide breaks down the most common user interview mistakes and shows you exactly how to avoid them, so your next round of interviews actually uncovers insights worth acting on. A well-run study prevents costly missteps long before they reach your roadmap.

Mistake #1: asking leading questions

What it is

Leading questions telegraph the “right” answer or suggest what you want to hear. They bias responses and give you false validation instead of truth, and even small details of phrasing can tip a user toward the answer they think you want.

What it looks like

Bad examples:

  • “Don’t you think this feature would be useful?”
  • “Wouldn’t it be better if we added [X]?”
  • “You probably find [task] frustrating, right?”
  • “Most people want [Y]. Do you agree?”

These questions aren’t genuine inquiries; they’re confirmation-seeking. Users will politely agree, and you’ll think you’ve validated your idea when you’ve actually just led the witness. Overly broad questions cause a related problem: they invite bias and make it harder to get specific, actionable feedback.

Why it's harmful

You get socially desirable answers, not truth.

People want to be helpful. They want to please you. When you signal what answer you’re looking for, most people will give it to you—even if it doesn’t reflect their real experience.

Example:

You: “Don’t you find it frustrating when tools have too many features?”
User: “Oh yeah, totally.” (They agree because you led them.)

Reality: The user actually loves feature-rich tools and spends hours exploring advanced functionality. But your leading question got them to agree with your anti-feature-bloat hypothesis.

The result? You build the wrong thing based on false validation. And because users were agreeing just to please you, their real opinions stay hidden, so you never even notice you were misled.

How to fix it

Use open-ended, neutral questions that don’t suggest an answer.

Plan your questions before the interview and check that each one is open-ended and neutral. It is far easier to catch bias in a draft than to improvise unbiased questions live.

Better alternatives:

  • “Tell me about your experience with [feature]”
  • “How do you currently handle [task]?”
  • “Walk me through the last time you used [tool]”
  • “What’s your biggest challenge with [process]?”

Notice the difference? These questions don’t signal what you want to hear. They invite honest description of actual experience.

The psychology behind it

Leading questions trigger acquiescence bias—the tendency to agree with questioners, especially when there’s a power dynamic (interviewer/interviewee) or social pressure.

Even a careful interviewer can trigger it without noticing, and the resulting data looks clean while quietly pointing you in the wrong direction.

Remember: agreement is not belief. People agreeing with you often just means your question was leading.

Real-world example

Startup building a productivity app:

What they asked (leading): “Wouldn’t it be great if you could see all your tasks in one place?”
What users said: “Yes, that would be amazing!”
What they built: All-in-one task dashboard
What happened: Low adoption, users kept using separate tools

What they should have asked (neutral): “Walk me through how you currently manage your tasks.”
What they would have learned: Users intentionally separate work/personal tasks across different tools for mental boundaries. Underlying needs like this only surface when your questions are unbiased and open-ended.

Quick checklist: is your question leading?

❌ Does it start with “Don’t you think…” or “Wouldn’t you…”?
❌ Does it include “most people” or “everyone else”?
❌ Does it suggest the right answer?
❌ Can the user tell what you want to hear?
❌ Does it presuppose something about the user’s experience or needs that they haven’t actually told you?

If you answered yes to any of these, rephrase the question to be neutral.
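
If you keep your interview guide as a plain text file, you can even run a rough automated pass over it before a session. Below is a minimal sketch in Python; the `LEADING_PATTERNS` list and the sample `questions` are illustrative assumptions, not a vetted lexicon:

```python
import re

# Phrases that often signal a leading question (illustrative list, not exhaustive).
LEADING_PATTERNS = [
    r"^don'?t you\b",
    r"^wouldn'?t (you|it)\b",
    r"\bmost people\b",
    r"\beveryone else\b",
    r"\byou probably\b",
    r"\bright\?$",
]

def flag_leading(question: str) -> list[str]:
    """Return the patterns a question matches, i.e. reasons to review it."""
    q = question.strip().lower()
    return [p for p in LEADING_PATTERNS if re.search(p, q)]

# Hypothetical interview guide.
questions = [
    "Don't you think this feature would be useful?",
    "Walk me through the last time you used the tool.",
    "Most people want a single dashboard. Do you agree?",
]

for q in questions:
    hits = flag_leading(q)
    label = "REVIEW" if hits else "ok"
    print(f"[{label}] {q}")
```

A scan like this only catches surface patterns. The real check is still rereading each question and asking whether the user could guess what you want to hear.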

Mistake #2: talking too much

What it is

You’re the interviewer, but somehow you’re doing most of the talking. You explain your product, defend design decisions, or fill every silence with words.

Remember, the focus should be on understanding the person you’re interviewing, not on showcasing your own knowledge.

What it looks like

Signs you’re talking too much:

  • You spend 10 minutes explaining your product before asking questions
  • You jump in to clarify immediately after they speak
  • You feel uncomfortable with silence and rush to fill it
  • You defend or justify when they critique your product
  • You realize you’re explaining more than listening

Dominating the conversation also limits the participant’s ability to communicate their true thoughts and experiences.

The 80/20 rule: The participant should talk 80% of the time; you should talk 20%. If it’s closer to 50/50 or worse, you’re talking too much.

Why it's harmful

You learn nothing new.

The whole point of user interviews is to hear their perspective, not reinforce yours. When you dominate the conversation, you:

  • Miss the user's actual mental model
  • Bias their answers with your explanations
  • Run out of time for their stories
  • Come away with opinions instead of insights

You make users passive.

The more you talk, the more they wait for you to finish. They become listeners instead of storytellers. You've flipped the dynamic.

How to fix it

Embrace silence. Ask questions and then shut up.

It takes conscious effort to resist the urge to fill silences and instead let users think.

The power of silence:

After asking a question, resist the urge to fill the silence. Count to five before saying anything. Often, the best insights come after a pause—when the user has time to think deeply.

Technique: minimal encouragers


Instead of talking, use short prompts to keep them talking:

  • “Tell me more”
  • “And then?”
  • “Interesting…”
  • “Go on”
  • Nod silently

Record yourself and review.

After your next interview, watch the recording and track:

  • Who’s talking (you vs. them)
  • How long each speaks
  • How often you interrupt
  • How comfortable you are with silence

Goal: an 80/20 split. If you’re talking more than 30% of the time, you’re dominating.
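
Many transcription tools can export speaker-labeled segments with timestamps; if yours does, a few lines of scripting will compute the split for you. A minimal sketch, assuming a hypothetical `segments` list of (speaker, start, end) tuples in seconds:

```python
from collections import defaultdict

# Hypothetical speaker-labeled segments exported from a transcript:
# (speaker, start_seconds, end_seconds).
segments = [
    ("interviewer", 0.0, 42.5),
    ("participant", 42.5, 180.0),
    ("interviewer", 180.0, 195.0),
    ("participant", 195.0, 400.0),
]

# Sum up talk time per speaker.
talk_time = defaultdict(float)
for speaker, start, end in segments:
    talk_time[speaker] += end - start

total = sum(talk_time.values())
for speaker, seconds in sorted(talk_time.items()):
    share = 100 * seconds / total
    print(f"{speaker}: {seconds:.0f}s ({share:.0f}% of talk time)")

# Flag sessions where the interviewer exceeds the ~30% threshold mentioned above.
if talk_time["interviewer"] / total > 0.30:
    print("Warning: interviewer talked more than 30% of the time.")
```

Even without timestamps, counting words per speaker in the transcript gives a rough but usable approximation of the same ratio.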

Real-world example

Design team testing a prototype:

What happened (too much talking):
Designer: “So this is the new dashboard. We designed it to be super intuitive, with all the key metrics front and center. We thought about putting it here because that’s where your eye naturally goes, and we wanted to make sure you could see…”
[5 minutes later]
Designer: “…so what do you think?”
User: “Yeah, looks good.”

What they learned: Nothing. The user was overwhelmed and disengaged, and was never given the space to share their own thoughts.

How it should have gone (minimal talking):
Designer: “Here’s the dashboard. Take a look and tell me your first impression.”
[Silence for 10 seconds]
User: “Hmm, I’m not sure what these numbers mean.”
Designer: “Tell me more about that.”
User: “Well, I can see revenue, but I don’t know if that’s this month or this year, or if it includes refunds…”

What they learned: Metric definitions aren’t clear. Actionable insight.

The science

Cognitive load theory: The more you talk, the more mental bandwidth users spend processing your words instead of reflecting on their own experience.

Less talking from you = more thinking from them = better insights. Give people room to think, and the insights get deeper.

Mistake #3: asking about future behavior

What it is

Asking users to predict what they would do in hypothetical future situations. These questions feel reasonable to ask, but the answers they produce are unreliable.

What it looks like

Common future-focused questions:

  • “Would you use this feature?”
  • “How much would you pay for this?”
  • “Would you recommend this to a colleague?”
  • “If we built [X], would you switch from your current tool?”
  • “What would you do if…?”

These are hypothetical questions about imagined future behavior. Questions about past behavior, by contrast, yield far more reliable and actionable data.

Why it's harmful

People are terrible at predicting their future behavior.

Research is clear: what people say they’ll do and what they actually do are often completely different. Build on hypothetical answers and you’ll misread real user needs.

Famous examples:

The gym membership paradox: People sign up for annual memberships believing they’ll go 3x per week. Average actual usage: 1x per week.

The Spotify paradox: In surveys, users said they’d never pay for music streaming. Then Spotify launched, and millions subscribed.

Your product is no different. Users genuinely believe they’d pay $50/month for your tool. But when you launch, they don’t convert. They weren’t lying—they just couldn’t accurately predict their future selves.

The psychology

Social desirability bias: People want to appear helpful, rational, and committed. Saying "Yes, I'd pay for that" feels supportive. Actually paying is different.

Imagination gap: Future scenarios are abstract. Real decisions involve context, constraints, and competing priorities that users can't fully imagine during an interview.

How to fix it

Ask about past behavior, not future hypotheticals.

Past behavior is the best predictor of future behavior. Focus on what they’ve actually done, and your insights will be far more accurate and actionable.

Instead of future questions, ask:

❌ “Would you pay $50/month for this?”
✅ “What tools are you currently paying for? What made you decide they were worth the investment?”

❌ “Would you use this feature?”
✅ “Tell me about the last time you tried to [accomplish related task]. What did you do?”

❌ “Would you switch from your current tool?”
✅ “Have you ever switched tools in this category? What made you switch?”

❌ “If we added [X], would you use it more often?”
✅ “What’s the last feature you tried in [current tool]? Why did you start using it?”

Real-world example

SaaS startup validating pricing:

Wrong approach (hypothetical): "If we priced this at $99/month, would you buy it?"
Users: "Absolutely!"
Result: Launched at $99/month, conversion rate: 2%

Right approach (past behavior): "What software tools are you currently paying for?"
Users: "Slack ($12/user), HubSpot ($50), Mailchimp ($30)..."
"What's the most expensive tool you use?"
Users: "Probably HubSpot at $50/month, but it's essential for sales."
"What made you decide HubSpot was worth $50?"
Users: "Our sales team lives in it. Without it, we'd lose deals."

Insight: Tools worth $50+ must be mission-critical, used daily by revenue teams. Price accordingly and position for those use cases.

Exception: testing prototypes

Showing users a prototype and asking them to interact with it is different from hypotheticals.

✅ “Try creating a new project. Tell me what you’re thinking as you do it.” (This is behavioral observation, not future prediction.)

❌ “If this feature existed, would you use it?” (This is hypothetical prediction.)

Asking users to interact with familiar tasks or interfaces also tends to produce more reliable feedback, because participants are comfortable and can draw on existing experience.

Mistake #4: not following up on interesting answers

What it is

A user says something intriguing, but you move on to your next scripted question instead of exploring deeper. You stick rigidly to your interview guide and miss golden insights.

Showing genuine interest in the user's answers encourages them to open up and share more valuable information.

What it looks like

User: “Yeah, I’ve tried three different tools for this but abandoned them all.”

Bad interviewer: [Moves to next question] “Okay, so question 5, how often do you…”

Good interviewer: “Oh interesting—tell me more about that. Why did you abandon them?”

The difference? The bad interviewer followed the script. The good interviewer followed the insight. Asking follow-up questions helps uncover important details that would otherwise be missed.

Why it's harmful

The best insights are usually hidden beneath surface-level answers. The most valuable findings often emerge only after several layers of follow-up questions.

The first thing someone says is rarely the full truth. It’s the tip of the iceberg. Your job is to dive deeper.

The “5 Whys” technique exists for a reason: Each follow-up question peels back another layer, getting closer to the root cause.

Example of following up:

User: “I find project management tools frustrating.”

Level 1: “What specifically frustrates you?” User: “They’re too complicated.”

Level 2: “Tell me more: what makes them complicated?” User: “Too many features I don’t need.”

Level 3: “What features do you actually use?” User: “Honestly, just task lists and due dates.”

Level 4: “Why do you think you only use those features?” User: “Everything else just gets in the way. I want to get in, see what’s due, and get out.”

Now you’ve uncovered the real insight: Users want simplicity and speed, not feature bloat. You would have missed this if you’d moved on after “they’re too complicated.”

How to fix it

Have a flexible interview guide, not a rigid script.

Your guide is a framework, not a straitjacket. When you hear something interesting:

✅ Pursue it with follow-up questions
✅ Spend more time on what matters
✅ Skip or rush through less relevant questions

Power follow-up questions:

  • “Tell me more about that”
  • “Why is that important to you?”
  • “Can you give me a specific example?”
  • “What happened next?”
  • “How did that make you feel?”
  • “Walk me through exactly what you did”

When to follow up:

  • Strong emotion (frustration, excitement)
  • Unexpected answers
  • Contradictions
  • Stories and anecdotes
  • Workarounds and hacks
  • “I’ve tried X but it didn’t work”

The detail you surface with follow-ups also pays off later: it gives the research and product teams something concrete to align on when deciding next steps.

Real-world example

Product team validating integration needs:

Surface-level stopping:
User: “Integration with our other tools would be nice.”
Interviewer: “Got it.” [Moves on]
Result: Team adds “integrations” to backlog with no priority or clarity.

Deep diving:
User: “Integration with our other tools would be nice.”
Interviewer: “Tell me more. What tools specifically?”
User: “Mainly Salesforce.”
Interviewer: “How would that integration work in your workflow?”
User: “Right now I export a CSV from here, then upload to Salesforce. Takes 30 minutes, I do it daily.”
Interviewer: “What happens if you don’t do that?”
User: “Our sales team can’t follow up with leads. We’ve lost deals because of delays.”

Result: The team now understands this isn’t a “nice to have”; it’s mission-critical, and Salesforce integration jumps to top priority. Digging into the user’s actual workflow is what revealed an integration need that would otherwise have been overlooked.

Mistake #5: confirmation bias in analysis

What it is

You selectively pay attention to data that confirms your hypothesis and ignore everything that contradicts it. This happens during interviews and especially during analysis.

In a business context, confirmation bias leads to poor decisions and missed opportunities, because it makes you overlook exactly the information that contradicts your assumptions.

What it looks like

During interviews:

  • Perking up when users say what you expected
  • Glossing over contradictory feedback, the very feedback that could save you from a failed product and wasted resources
  • Leading questions to get the answer you want (see Mistake #1)

During analysis:

  • Cherry-picking quotes that support your idea
  • Ignoring patterns that contradict your hypothesis
  • Rationalizing away negative feedback (“They just don’t understand”)
  • Highlighting 2 positive mentions, ignoring 8 negative ones

Why it's harmful

You build what you want to build, not what users need.

Confirmation bias is the enemy of learning. If you already “know” the answer, why interview users at all?

Real scenario:

Founder believes users want an AI chatbot for customer support.

In interviews:

  • User mentions chatbots are frustrating → Founder thinks, “They just haven’t seen a good one”
  • User says they prefer email support → Founder thinks, “They’re just old-school”
  • User lights up talking about phone support → Founder doesn’t pursue this thread

Result: Founder builds AI chatbot, users don’t use it. Turns out they wanted better phone support, which was mentioned but ignored.

Ultimately, the goal of user research is to build products that truly meet user needs, not just validate assumptions.

How to fix it

Actively look for disconfirming evidence.

Before analyzing, ask:

  • What would prove my hypothesis wrong?
  • Am I seeing what I want to see?
  • What patterns contradict my assumptions?

During analysis:

✅ Track both positive and negative mentions
✅ Count frequency objectively (don’t weight toward your preference)
✅ Have someone else analyze the data independently
✅ Present conflicting evidence to stakeholders, not just supporting evidence

Use structured analysis:

Instead of free-form note-taking, use frameworks:

  • Affinity mapping: Cluster ALL insights, not just favorites
  • Frequency counting: 8/10 said X, 2/10 said Y (show both)
  • Pain point matrices: Plot problems by frequency AND intensity
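
If your notes are coded, the frequency counts are easy to produce objectively rather than from memory. A minimal sketch, assuming a hypothetical list of (participant, theme, sentiment) observations pulled from coded interview notes:

```python
from collections import defaultdict

# Hypothetical coded observations: (participant_id, theme, sentiment).
observations = [
    (1, "gesture navigation", "negative"),
    (2, "gesture navigation", "positive"),
    (3, "visible buttons", "positive"),
    (3, "hidden features", "negative"),
    (4, "visible buttons", "positive"),
    (5, "visible buttons", "positive"),
]

# For each (theme, sentiment) pair, collect the distinct participants who mentioned it.
mentions = defaultdict(set)
for participant, theme, sentiment in observations:
    mentions[(theme, sentiment)].add(participant)

total = len({p for p, _, _ in observations})
for (theme, sentiment), people in sorted(mentions.items(), key=lambda kv: -len(kv[1])):
    print(f"{len(people)}/{total} participants: {theme} ({sentiment})")
```

Running every theme through the same count, positive and negative alike, is what keeps your preferences from quietly weighting the analysis.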

Involving stakeholders in the analysis also keeps your recommendations practical and grounded in real user needs.

The scientific method for research

Treat your product hypothesis like a scientific hypothesis:

  1. State the hypothesis clearly: “Users need feature X”
  2. Define what would disprove it: “If 7/10 users don’t mention this problem, hypothesis is invalid”
  3. Collect data without bias: going in with clear, objective goals keeps the results reliable and meaningful
  4. Analyze objectively
  5. Accept if you’re wrong

Being wrong is a win. Better to learn now than after you’ve built it.

Real-world example

Mobile app redesign:

Team’s belief: “Users want a minimalist, gesture-based interface”

Interviews revealed:

  • 2/10 users liked gesture navigation
  • 8/10 users preferred visible buttons (“I don’t want to guess where to tap”)
  • 6/10 users mentioned being confused by “hidden” features

Confirmation bias path: Team highlights the 2 positive mentions, builds gesture-heavy UI, users complain it’s hard to use. This approach often fails to deliver a product users actually want.

Objective analysis path: Team sees 8/10 prefer buttons, kills gesture idea, builds clear UI, users love it.

The difference? Willingness to kill your darlings when data says to.

How to avoid all 5 mistakes: a checklist

Before your next interview, print this and keep it visible:

Before the interview:

  • [ ] Questions are open-ended and neutral (not leading)
  • [ ] Guide is flexible, not a rigid script
  • [ ] Questions focus on past behavior, not hypotheticals

During the interview:

  • [ ] I'm talking less than 30% of the time
  • [ ] I'm comfortable with silence
  • [ ] When something's interesting, I'm following up (not moving on)
  • [ ] I'm noticing when users contradict my hypothesis

After the interview:

  • [ ] I'm tracking both confirming and disconfirming evidence
  • [ ] I'm using structured analysis methods
  • [ ] I'm involving others in analysis to reduce bias
  • [ ] I'm willing to change my mind based on evidence

Conclusion: better interviews = better products

These five mistakes are common because they're human nature. We want to be validated. We want to be right. We want users to love our ideas.

But great product teams prioritize truth over validation.

Fix these mistakes, and you'll:

  • Get honest feedback instead of polite agreement
  • Uncover real insights, not feature requests
  • Build products users actually need
  • Make fewer costly wrong bets

The hardest part isn't conducting interviews. It's being open to what they reveal—even when it contradicts what you hoped to hear.

Ready to act on your research goals?

If you’re a researcher, run your next study with CleverX

Access identity-verified professionals for surveys, interviews, and usability tests. No waiting. No guesswork. Just real B2B insights, fast.

Book a demo
If you’re a professional, get paid for your expertise

Join paid research studies across product, UX, tech, and marketing. Flexible, remote, and designed for working professionals.

Sign up as an expert