Even experienced teams make these user interview mistakes. Learn the 5 most common errors that lead to bad insights, and the simple fixes that get you back on track.
You’ve scheduled 10 user interviews. You have your questions ready. You’re excited to finally talk to real users.
Then the interviews happen, and somehow you walk away with… nothing useful. Vague feedback. Feature requests. Opinions that contradict each other.
What went wrong?
Most likely, you fell into one of five common traps that plague even experienced researchers. Many organizations hesitate to invest in user research due to perceived time and cost, but skipping this step can result in higher costs and less effective products in the long run. The good news? These mistakes are easy to fix once you know what to look for.
This guide reveals the most common user interview mistakes and shows you exactly how to avoid them—so your next round of interviews actually uncovers insights worth acting on. A well-run research study prevents costly mistakes and helps you optimize the user experience.
Leading questions telegraph the “right” answer or suggest what you want to hear. They bias responses and give you false validation instead of truth. Paying close attention to how each question is phrased is the first step to avoiding them.
Bad examples:
❌ “Don’t you think this design is easier to use?”
❌ “Wouldn’t it be great if you could see all your tasks in one place?”
❌ “Most people love this feature. Do you agree?”
These questions aren’t genuine inquiries—they’re confirmation seeking. Users will politely agree, and you’ll think you’ve validated your idea when you’ve actually just led a witness. Overly broad questions can also introduce bias and make it harder to get specific, actionable feedback.
You get socially desirable answers, not truth.
People want to be helpful. They want to please you. When you signal what answer you’re looking for, most people will give it to you—even if it doesn’t reflect their real experience.
Example:
You: “Don’t you find it frustrating when tools have too many features?” User: “Oh yeah, totally.” (They agree because you led them)
Reality: The user actually loves feature-rich tools and spends hours exploring advanced functionality. But your leading question got them to agree with your anti-feature-bloat hypothesis.
The result? You build the wrong thing based on false validation. Users will agree just to please you while hiding their true opinions, so you never hear the objection that matters.
Use open-ended, neutral questions that don’t suggest an answer.
Plan your questions before the interview so they are open-ended and neutral. Writing them down in advance makes it far easier to spot and remove bias before you’re in the room.
✅ Better alternatives:
✅ “Walk me through how you currently manage your tasks.”
✅ “Tell me about the last time you tried to [accomplish this task]. What did you do?”
✅ “How do you currently handle [X]? What works and what doesn’t?”
Notice the difference? These questions don’t signal what you want to hear. They invite honest description of actual experience.
Leading questions trigger acquiescence bias—the tendency to agree with questioners, especially when there’s a power dynamic (interviewer/interviewee) or social pressure. The data you collect ends up inaccurate or misleading without anyone intending it.
Research shows: People agreeing with you doesn’t mean they actually believe what they’re saying. It often just means your question was leading.
Startup building a productivity app:
What they asked (leading): “Wouldn’t it be great if you could see all your tasks in one place?”
What users said: “Yes, that would be amazing!”
What they built: All-in-one task dashboard
What happened: Low adoption, users kept using separate tools
What they should have asked (neutral): “Walk me through how you currently manage your tasks.”
What they would have learned: Users intentionally separate work/personal tasks across different tools for mental boundaries. Finding these underlying user needs is only possible with unbiased, open-ended questions.
❌ Does it start with “Don’t you think…” or “Wouldn’t you…”?
❌ Does it include “most people” or “everyone else”?
❌ Does it suggest the right answer?
❌ Can the user tell what you want to hear?
❌ Is the question irrelevant to the user's actual experience or needs?
If you answered yes to any of these, rephrase the question to be neutral.
You’re the interviewer, but somehow you’re doing most of the talking. You explain your product, defend design decisions, or fill every silence with words.
Remember, the focus should be on understanding the person you’re interviewing, not on showcasing your own knowledge.
Signs you’re talking too much:
You explain your product or defend design decisions instead of asking questions.
You fill every silence with words.
The split is closer to 50/50 than 80/20.
Dominating the conversation limits the user’s ability to share their true thoughts and experiences.
The 80/20 rule: The participant should talk 80% of the time, you should talk 20%. If it’s closer to 50/50 or worse, you’re talking too much.
You learn nothing new.
The whole point of user interviews is to hear their perspective, not reinforce yours. When you dominate the conversation, you spend the limited time confirming what you already believe instead of testing it.
You make users passive.
The more you talk, the more they wait for you to finish. They become listeners instead of storytellers. You've flipped the dynamic.
Embrace silence. Ask questions and then shut up.
It takes conscious effort to resist the urge to fill silences and instead let users think.
The power of silence:
After asking a question, resist the urge to fill the silence. Count to five before saying anything. Often, the best insights come after a pause—when the user has time to think deeply.
Technique: minimal encouragers
Instead of talking, use short prompts to keep them talking:
“Mm-hmm.”
“Go on.”
“Tell me more about that.”
“What happened next?”
Record yourself and review.
After your next interview, watch the recording and track how much of the time you spend talking versus the participant.
Goal: 80/20 split. If you’re talking more than 30% of the time, you’re dominating.
Design team testing a prototype:
What happened (too much talking):
Designer: “So this is the new dashboard. We designed it to be super intuitive, with all the key metrics front and center. We thought about putting it here because that’s where your eye naturally goes, and we wanted to make sure you could see…”
[5 minutes later]
Designer: “…so what do you think?”
User: “Yeah, looks good.”
What they learned: Nothing. The user was overwhelmed and disengaged. No valuable insights were found because the user wasn't given space to share their thoughts.
How it should have gone (minimal talking):
Designer: “Here’s the dashboard. Take a look and tell me your first impression.”
[Silence for 10 seconds]
User: “Hmm, I’m not sure what these numbers mean.”
Designer: “Tell me more about that.”
User: “Well, I can see revenue, but I don’t know if that’s this month or this year, or if it includes refunds…”
What they learned: Metric definitions aren’t clear. Actionable insight.
Cognitive load theory: The more you talk, the more mental bandwidth users spend processing your words instead of reflecting on their own experience.
Less talking from you = more thinking from them = better insights. Giving users time to think allows you to gather deeper, more meaningful insights.
Asking users to predict what they would do in hypothetical future situations may seem reasonable, but it almost always produces unreliable answers.
Common future-focused questions:
“Would you pay $50/month for this?”
“Would you use this feature?”
“Would you switch from your current tool?”
“If we added [X], would you use it more often?”
These are hypothetical questions about imagined future behavior. Asking about past behavior instead yields far more reliable and actionable data.
People are terrible at predicting their future behavior.
Research is clear: What people say they’ll do and what they actually do are often completely different. Relying on hypothetical answers can cause you to fail in understanding real user needs.
Famous examples:
The gym membership paradox: People sign up for annual memberships believing they’ll go 3x per week. Average actual usage: 1x per week.
The Spotify paradox: In surveys, users said they’d never pay for music streaming. Then Spotify launched, and millions subscribed.
Your product is no different. Users genuinely believe they’d pay $50/month for your tool. But when you launch, they don’t convert. They weren’t lying—they just couldn’t accurately predict their future selves.
Social desirability bias: People want to appear helpful, rational, and committed. Saying "Yes, I'd pay for that" feels supportive. Actually paying is different.
Imagination gap: Future scenarios are abstract. Real decisions involve context, constraints, and competing priorities that users can't fully imagine during an interview.
Ask about past behavior, not future hypotheticals.
Past behavior is the best predictor of future behavior. Focus on what they’ve actually done. This approach helps you achieve more accurate and actionable insights.
Instead of future questions, ask:
❌ “Would you pay $50/month for this?”
✅ “What tools are you currently paying for? What made you decide they were worth the investment?”
❌ “Would you use this feature?”
✅ “Tell me about the last time you tried to [accomplish related task]. What did you do?”
❌ “Would you switch from your current tool?”
✅ “Have you ever switched tools in this category? What made you switch?”
❌ “If we added [X], would you use it more often?”
✅ “What’s the last feature you tried in [current tool]? Why did you start using it?”
SaaS startup validating pricing:
Wrong approach (hypothetical): "If we priced this at $99/month, would you buy it?"
Users: "Absolutely!"
Result: Launched at $99/month, conversion rate: 2%
Right approach (past behavior): "What software tools are you currently paying for?"
Users: "Slack ($12/user), HubSpot ($50), Mailchimp ($30)..."
"What's the most expensive tool you use?"
Users: "Probably HubSpot at $50/month, but it's essential for sales."
"What made you decide HubSpot was worth $50?"
Users: "Our sales team lives in it. Without it, we'd lose deals."
Insight: Tools worth $50+ must be mission-critical, used daily by revenue teams. Price accordingly and position for those use cases.
Showing users a prototype and asking them to interact with it is different from hypotheticals.
✅ “Try creating a new project. Tell me what you’re thinking as you do it.” (This is behavioral observation, not future prediction) Asking users to interact with familiar tasks or interfaces often leads to more reliable feedback, as participants are more comfortable and can provide insights based on their existing experience.
❌ “If this feature existed, would you use it?” (This is hypothetical prediction)
A user says something intriguing, but you move on to your next scripted question instead of exploring deeper. You stick rigidly to your interview guide and miss golden insights.
Showing genuine interest in the user's answers encourages them to open up and share more valuable information.
User: “Yeah, I’ve tried three different tools for this but abandoned them all.”
Bad interviewer: [Moves to next question] “Okay, so question 5, how often do you…”
Good interviewer: “Oh interesting—tell me more about that. Why did you abandon them?”
The difference? The bad interviewer followed the script. The good interviewer followed the insight. Asking follow-up questions helps uncover important details that would otherwise be missed.
The best insights are usually hidden beneath surface-level answers. The most valuable findings often emerge only after several layers of follow-up questions.
The first thing someone says is rarely the full truth. It’s the tip of the iceberg. Your job is to dive deeper.
The “5 Whys” technique exists for a reason: Each follow-up question peels back another layer, getting closer to the root cause.
Example of following up:
User: “I find project management tools frustrating.”
Level 1: “What specifically frustrates you?” User: “They’re too complicated.”
Level 2: “Tell me more: what makes them complicated?” User: “Too many features I don’t need.”
Level 3: “What features do you actually use?” User: “Honestly, just task lists and due dates.”
Level 4: “Why do you think you only use those features?” User: “Everything else just gets in the way. I want to get in, see what’s due, and get out.”
Now you’ve uncovered the real insight: Users want simplicity and speed, not feature bloat. You would have missed this if you’d moved on after “they’re too complicated.”
Have a flexible interview guide, not a rigid script.
Your guide is a framework, not a straitjacket. When you hear something interesting:
✅ Pursue it with follow-up questions
✅ Spend more time on what matters
✅ Skip or rush through less relevant questions
Power follow-up questions:
“Tell me more about that.”
“Why do you think that is?”
“Walk me through the last time that happened.”
“What happens if you don’t do that?”
When to follow up:
The user mentions abandoning a tool, a workaround, or a strong emotion.
An answer is vague (“it’s too complicated”) or surprising.
Something contradicts what you expected to hear.
Effective follow-up questions help clarify next steps for both the research and product teams, ensuring everyone is aligned on how to move forward.
Product team validating integration needs:
Surface-level stopping:
User: “Integration with our other tools would be nice.”
Interviewer: “Got it.” [Moves on]
Result: Team adds “integrations” to backlog with no priority or clarity.
Deep diving:
User: “Integration with our other tools would be nice.”
Interviewer: “Tell me more. What tools specifically?”
User: “Mainly Salesforce.”
Interviewer: “How would that integration work in your workflow?”
User: “Right now I export a CSV from here, then upload to Salesforce. Takes 30 minutes, I do it daily.”
Interviewer: “What happens if you don’t do that?”
User: “Our sales team can’t follow up with leads. We’ve lost deals because of delays.”
Result: Team now understands this isn’t a “nice to have”—it’s mission-critical. Salesforce integration jumps to top priority. Understanding the user's workflow and related projects can reveal critical integration needs that might otherwise be overlooked.
You selectively pay attention to data that confirms your hypothesis and ignore everything that contradicts it. This happens during interviews and especially during analysis.
In a business context, confirmation bias can lead to poor business decisions and missed opportunities by causing professionals to overlook critical information that contradicts their assumptions.
During interviews:
You probe harder on answers that support your idea and gloss over the ones that don’t.
You hear the “yes” and stop listening for the “but.”
During analysis:
You highlight quotes that confirm your hypothesis and quietly discard the ones that contradict it.
You remember the enthusiastic users and forget the indifferent ones.
You build what you want to build, not what users need.
Confirmation bias is the enemy of learning. If you already “know” the answer, why interview users at all?
Real scenario:
Founder believes users want an AI chatbot for customer support.
In interviews: users gave polite, lukewarm reactions to the chatbot idea and kept bringing up how hard it was to reach support by phone.
Result: Founder builds AI chatbot, users don’t use it. Turns out they wanted better phone support, which was mentioned but ignored.
Ultimately, the goal of user research is to build products that truly meet user needs, not just validate assumptions.
Actively look for disconfirming evidence.
Before analyzing, ask:
What would prove my hypothesis wrong?
What did users say that contradicts what I expected to hear?
Am I weighting enthusiastic quotes more heavily than lukewarm ones?
During analysis:
✅ Track both positive and negative mentions
✅ Count frequency objectively (don’t weight toward your preference)
✅ Have someone else analyze the data independently
✅ Present conflicting evidence to stakeholders, not just supporting evidence
Use structured analysis:
Instead of free-form note-taking, use a framework that forces you to code every response the same way, whether or not it supports your hypothesis.
Involving stakeholders in the analysis process helps provide recommendations that are practical and informed by real user needs.
Treat your product hypothesis like a scientific hypothesis: your job is to try to disprove it, not to collect evidence for it.
Being wrong is a win. Better to learn now than after you’ve built it.
Mobile app redesign:
Team’s belief: “Users want a minimalist, gesture-based interface”
Interviews revealed:
8 out of 10 users said they prefer clearly labeled buttons.
2 users said they liked the idea of gesture-based navigation.
Confirmation bias path: Team highlights the 2 positive mentions, builds gesture-heavy UI, users complain it’s hard to use. This approach often fails to deliver a product users actually want.
Objective analysis path: Team sees 8/10 prefer buttons, kills gesture idea, builds clear UI, users love it.
The difference? Willingness to kill your darlings when data says to.
Before your next interview, print this and keep it visible:
Before the interview:
Write open-ended, neutral questions (no “Don’t you think…” or “Wouldn’t you…”).
Plan follow-up prompts like “Tell me more about that.”
Write down your hypothesis and what evidence would disprove it.
During the interview:
Aim for the 80/20 split: the participant talks 80% of the time.
Embrace silence; count to five before filling a pause.
Ask about past behavior, not hypothetical futures.
Follow up on anything intriguing instead of rushing to the next scripted question.
After the interview:
Review the recording and check how much of the time you talked.
Track positive and negative mentions objectively.
Actively look for evidence that contradicts your hypothesis.
These five mistakes are common because they're human nature. We want to be validated. We want to be right. We want users to love our ideas.
But great product teams prioritize truth over validation.
Fix these mistakes, and you'll:
Get honest answers instead of polite agreement.
Uncover real needs instead of surface-level feature requests.
Make product decisions based on what users actually do, not what they say they'd do.
The hardest part isn't conducting interviews. It's being open to what they reveal—even when it contradicts what you hoped to hear.