
How to reduce bias in user research and surveys?

Published on January 9, 2025

User research is the linchpin of successful product design. It’s the foundation for understanding your audience, identifying their needs, and building solutions they’ll love. But there’s a pervasive threat that can sabotage even the most well-planned research efforts—bias.

Bias is like a smudge on your lens, distorting how you see your users and what they actually need. Worse, it’s often invisible until it’s too late, showing up as flawed conclusions, misguided product decisions, or alienated audiences. According to the Nielsen Norman Group, even subtle biases in research design can significantly impact product outcomes, emphasizing the need for deliberate and unbiased approaches.

Let’s be honest: no research is completely free of bias. The goal isn’t perfection—it’s improvement. By understanding bias and taking proactive steps to minimize its impact, you can ensure that your research delivers genuine, actionable insights.

In this article, we’ll dig deep into the different types of bias that can creep into your user research and surveys, explore their real-world implications, and share actionable strategies for reducing bias at every stage of the process. Get ready to rethink the way you approach research.

The importance of tackling bias in research

Before we tackle how to reduce bias, let’s establish why it’s such a critical issue. Research bias doesn’t just lead to incorrect data; it fundamentally undermines the trustworthiness and utility of your insights. Here’s how bias can damage your research:

1. You’ll solve the wrong problems: Biased research often focuses on what researchers think users want rather than what they actually need. This misdirection wastes resources and frustrates users.

2. Your product decisions will be skewed: If the data you base your decisions on is flawed, your product roadmap is at risk of veering off course, for example toward adding unnecessary features or prioritizing low-impact changes.

3. You’ll alienate key users: Bias often excludes certain voices, especially those of marginalized or underserved groups. This leads to products that fail to resonate with or even exclude parts of your audience.

4. You’ll lose stakeholder trust: Biased insights can erode confidence in your research process. Decision-makers will be less likely to trust your findings, making it harder to advocate for user-centered design.

Understanding bias: the many faces of distortion

Bias in user research isn’t always glaringly obvious. It sneaks in through the cracks—your choice of participants, the questions you ask, and even how you interpret feedback. To tackle it head-on, we need to identify these biases and their impact. Bias also comes in many forms, each with its own subtle mechanisms. Here’s a breakdown of the most common types, how they manifest, and why they’re so damaging:

1. Selection bias: when your research speaks to the wrong crowd

Selection bias creeps in when the participants in your research don’t accurately represent your target audience. Whether it’s due to convenience sampling or unintentional exclusion, the result is an unbalanced dataset that distorts your insights.

Example: Imagine designing a budgeting app aimed at millennials, but your participant pool consists mostly of finance professionals. Sure, they have valuable opinions, but their needs won’t reflect the everyday challenges faced by the majority of your audience.

Why is it harmful?

When your research doesn’t reflect the diversity of your users, your product might unintentionally alienate key segments of your audience. Worse, you’ll end up prioritizing features or design choices that cater to niche groups instead of addressing broader needs.

How to stay on the right track?

Ensure your user research truly represents the diversity of your target audience:

  1. Diversify beyond the obvious: Don’t limit your recruitment to a narrow or easily accessible group. Expand your reach to include underrepresented users by exploring community groups, online forums, and local networks that align with your target audience.
  2. Spot the gaps before they matter: After recruiting participants, take a step back to analyze your sample. Are all key demographics represented? If your product caters to multiple age groups, income levels, or regions, ensure your participant pool reflects that diversity.
  3. Focus on edge cases, not just the majority: Edge-case users—those with unique challenges or needs—are a goldmine of insights. If you’re designing for beginner fitness enthusiasts, include people who’ve never stepped into a gym. Their frustrations often highlight opportunities for innovation that seasoned users might miss.

Selection bias often results in a skewed understanding of user needs, but it can be mitigated with proactive strategies. By recruiting from diverse sources, conducting demographic audits, and including voices that are often overlooked, you’ll ensure your research reflects the realities of your audience.
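The demographic-audit step lends itself to a quick script. Below is a minimal sketch, not a recruitment tool: the participant records, segment names, and target mix are all hypothetical, and in practice they would come from your own screener data and research plan.

```python
from collections import Counter

# Hypothetical participant records; real ones would come from your screener.
participants = [
    {"age_group": "18-24", "experience": "novice"},
    {"age_group": "25-34", "experience": "novice"},
    {"age_group": "25-34", "experience": "expert"},
    {"age_group": "25-34", "experience": "expert"},
    {"age_group": "25-34", "experience": "expert"},
]

# Target mix the sample should reflect (assumed for illustration).
targets = {
    "age_group": {"18-24", "25-34", "35-44"},
    "experience": {"novice", "expert"},
}

def audit(participants, targets):
    """Report segments that are entirely missing from the sample."""
    gaps = {}
    for field, expected in targets.items():
        counts = Counter(p[field] for p in participants)
        missing = expected - counts.keys()
        if missing:
            gaps[field] = sorted(missing)
    return gaps

print(audit(participants, targets))  # {'age_group': ['35-44']}
```

Running a check like this before fieldwork starts makes the gap visible while there is still time to adjust recruitment, rather than discovering it during analysis.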

2. Confirmation bias: when you only see what you want to see

Confirmation bias occurs when researchers unconsciously prioritize data that aligns with their existing beliefs, assumptions, or hypotheses, while disregarding or downplaying evidence that challenges those views. It’s a cognitive shortcut: our brains naturally seek patterns and affirmations. But in user research, it can lead to one-sided insights and skewed conclusions.

This bias isn’t always obvious. It might show up in the questions you design, the way you interpret participant feedback, or even in the kinds of users you choose to recruit. Left unchecked, confirmation bias can distort your findings, leaving you with a falsely optimistic view of your product or design.

How confirmation bias manifests in research

  1. Framing the wrong questions
    Imagine you’re testing a new feature. You might ask participants, “How much easier does this feature make your workflow?” The question itself assumes the feature has improved their experience and nudges participants toward confirming your expectation.
    What you’re doing here is seeking validation rather than uncovering areas for improvement. Instead of focusing on how the feature falls short, you might walk away with overinflated confidence in its success.
  2. Ignoring negative feedback
    During usability tests or interviews, it’s easy to rationalize critical feedback as isolated incidents or “edge cases.” If a few participants struggle with a new interface but the majority seem to grasp it, you might convince yourself that the negative feedback isn’t representative.
    However, dismissing outliers often means overlooking potential deal-breakers. Those “edge cases” might point to fundamental flaws that could snowball into larger issues once the product is live.
  3. Over-analyzing positive results
    Sometimes, confirmation bias isn’t about ignoring data but about overinterpreting it. For instance, a participant might say, “I guess this works okay,” and you might interpret their lukewarm response as glowing praise because you’re invested in the success of your design.

Why is it harmful to research?

When confirmation bias takes hold, your research stops being exploratory and becomes an exercise in self-validation. The risks include:

  • Missed opportunities: You focus on what’s working and fail to uncover areas for improvement.
  • Misguided product decisions: Decisions are made based on incomplete or overly optimistic data, leading to features that don’t solve real user problems.
  • Erosion of trust: Stakeholders lose confidence in research findings when they see that critical perspectives are ignored.

How to overcome confirmation bias?

  1. Actively seek contradictions
    Start by designing research questions that challenge your assumptions rather than affirm them. Instead of asking, “Do you find this feature helpful?” try:
    • What do you dislike about this feature?
    • Was there anything that didn’t work as you expected?
    • What would you change to make this better for you?

These open-ended questions encourage participants to provide honest, constructive feedback rather than defaulting to polite affirmations.

  2. Balance positives and negatives in analysis
    When reviewing feedback, make a deliberate effort to balance positive and negative comments. Create two categories: what works and what doesn’t. Then, pay equal attention to both. If the negatives outweigh the positives or point to recurring issues, take them seriously, even if they contradict your initial assumptions.
  3. Involve neutral observers
    Confirmation bias is often hard to spot because it’s subconscious. One way to counteract this is to involve a neutral party (someone who wasn’t involved in the project) to review the data and findings. Their objectivity can help identify patterns or insights you might have missed or dismissed.
  4. Encourage diverse perspectives
    Assemble a diverse research team with varying viewpoints. A team with diverse cultural, professional, and personal backgrounds is more likely to challenge assumptions and provide alternative interpretations of the data.
  5. Pilot test your questions
    Before rolling out your research, test your questions on a small group to identify any leading or loaded wording. This simple step ensures your study is designed to explore, not confirm.
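The “balance positives and negatives” step can be made concrete with a small tally. This sketch assumes you’ve already coded each piece of feedback with a sentiment and a theme; the codes, themes, and the two-mention threshold below are purely illustrative.

```python
from collections import Counter

# Hypothetical coded feedback: (sentiment, theme) pairs a researcher tagged
# after each session. Real codes would come from your own analysis.
coded_feedback = [
    ("works", "search"),
    ("doesn't", "onboarding copy"),
    ("works", "search"),
    ("doesn't", "onboarding copy"),
    ("doesn't", "export"),
]

def recurring_negatives(feedback, threshold=2):
    """Surface negative themes raised by at least `threshold` participants,
    so they get the same weight as the positives during analysis."""
    negatives = Counter(
        theme for sentiment, theme in feedback if sentiment == "doesn't"
    )
    return [theme for theme, n in negatives.items() if n >= threshold]

print(recurring_negatives(coded_feedback))  # ['onboarding copy']
```

Counting negatives explicitly, rather than relying on memory of the sessions, makes it harder to dismiss recurring criticism as a handful of edge cases.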

Practical example of overcoming confirmation bias

Let’s walk through an onboarding flow example. Initially, you’re convinced the new flow is a success because most participants complete the process with minimal issues. However, one participant mentions that the instructions felt unclear.

Instead of dismissing this as a one-off comment, you dig deeper. You ask follow-up questions like, “Can you pinpoint where the instructions became confusing?” and “What would have made this step clearer for you?” Upon further investigation, you realize that while many users completed the onboarding, several struggled with the same step but didn’t mention it outright.

By addressing this friction point, you refine the onboarding flow and significantly improve the user experience for future users—something that wouldn’t have happened if you’d ignored the initial critique.

The takeaway

Confirmation bias is a silent disruptor in user research, but it’s not inevitable. By challenging assumptions, inviting diverse perspectives, and focusing equally on negative and positive feedback, you can ensure your insights are well-rounded and actionable. The goal isn’t to prove you’re right—it’s to uncover the truth, even when it’s uncomfortable.

3. Social desirability bias: the politeness effect

Social desirability bias occurs when participants give answers they think are more socially acceptable or align with what they believe the researcher wants to hear, rather than expressing their true feelings or behaviors. This bias stems from a natural human desire to be liked or appear "good."

It often appears in scenarios where participants feel their responses may be judged. For instance, in research about environmentally friendly habits, a participant might exaggerate their recycling efforts to seem more responsible.

Example in practice: During a survey about online behavior, a participant claims they always read the privacy policies before signing up for services. In reality, they might skim or skip them entirely but feel compelled to give an idealized response to avoid judgment.

Why is it harmful?

Social desirability bias can result in overly optimistic data that doesn’t reflect reality. This means your research might overestimate how often users follow a particular process, understand instructions, or engage with specific features. The consequence? Products and decisions based on unrealistic user behaviors.

How to fix it?

Here are practical strategies to reduce this bias and gather authentic insights from your research participants:

  1. Use anonymous surveys
    When participants know their responses can’t be traced back to them, they’re more likely to provide honest answers. Anonymous surveys remove the pressure to conform to perceived expectations, allowing participants to share their real experiences without fear of judgment.
    For example: If you’re surveying users about their online security habits, assure them that their responses are entirely anonymous and there are no “right” or “wrong” answers.
  2. Frame questions neutrally
    Avoid questions that imply a judgment or assume a certain behavior. Instead of asking, “Why don’t you use this feature?” ask, “Can you walk me through your process?” Neutral language encourages honest responses without making participants feel defensive or self-conscious.
    For example:
    • Biased: “How often do you recycle plastic bottles?”
    • Neutral: “What do you typically do with plastic bottles after use?”
  3. Observe behaviors directly
    When possible, rely on observational methods like usability testing rather than self-reported data. Watching how participants interact with your product provides insights that may contradict what they say.
    For example: A participant might claim they always use a product’s search bar to navigate, but direct observation might reveal they actually rely on navigation menus more frequently.
  4. Normalize honest feedback
    Set the tone early in your research by letting participants know you’re seeking real insights, not ideal answers. Phrases like “It’s completely fine if you haven’t done this before” or “Your honest opinion helps us improve” can reduce the pressure to give socially desirable responses.
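The gap between what people say and what they do can also be measured directly when you have both a self-report and an observation for the same participants. The sketch below is a rough illustration: the participant IDs, the claim being checked, and the True/False encoding are all hypothetical.

```python
# Hypothetical data: what participants claimed in a survey ("I always read
# the instructions") vs. what a recorded usability session actually showed.
self_reported = {"p1": True, "p2": True, "p3": True}
observed = {"p1": False, "p2": True, "p3": False}

def desirability_gap(claimed, actual):
    """Share of participants whose claim wasn't backed up by observation,
    a rough signal of social desirability bias in the self-reports."""
    mismatches = sum(
        1 for p in claimed if claimed[p] and not actual.get(p, False)
    )
    return mismatches / len(claimed)

print(round(desirability_gap(self_reported, observed), 2))  # 0.67
```

A large gap doesn’t tell you which individual answers were inflated, but it does tell you how much to trust the self-reported numbers when making product decisions.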

The takeaway

Social desirability bias can paint an overly rosy picture of user behavior. By prioritizing anonymity, asking neutral questions, and observing actual actions, you can uncover the reality behind your users’ choices and design better solutions that align with their genuine needs.

4. Observer bias: the researcher’s influence

Observer bias occurs when the presence or behavior of the researcher influences participants’ responses or actions. This bias is often unintentional but can significantly distort research findings. For instance, a researcher’s tone, body language, or subtle cues might lead participants to adjust their behavior or responses to align with what they think the researcher wants.

Example in practice: During a usability test, a participant hesitates while using a new feature. The researcher, eager to help, smiles and nods encouragingly. This subtle action unintentionally signals approval, prompting the participant to continue without expressing their true confusion.

Why is it harmful?

Observer bias undermines the authenticity of participant feedback. Instead of capturing genuine user experiences, the data becomes skewed by the researcher’s presence, leading to inaccurate conclusions and misguided product decisions.

How to fix it?

Minimize bias in your research process with these actionable steps to ensure accurate and reliable insights:

  1. Use blind studies
    In a blind study, the researcher is unaware of certain details, such as the participant’s background or the specific hypothesis being tested. This reduces the chance of influencing participant behavior, as the researcher cannot tailor their responses or cues based on preconceived notions.
    For example: If you’re testing a prototype, the researcher should avoid knowing which features are experimental to ensure their reactions remain neutral across all scenarios.
  2. Standardize protocols
    Develop a script for interviews or usability tests and stick to it. Standardized scripts minimize the risk of ad-libbing or unintentionally leading participants. Include neutral phrases like, “Can you explain your thought process here?” instead of evaluative comments such as, “That’s great—keep going!”
  3. Record sessions for later review
    Instead of taking notes in real time, record sessions for later analysis. This allows researchers to observe without interacting and ensures they can revisit the session to capture overlooked details.
  4. Train researchers on neutrality
    All researchers conducting interviews or tests should undergo training on how to maintain a neutral tone, body language, and facial expressions. Emphasize that their role is to facilitate, not guide or influence, the session.

The takeaway

Observer bias often slips in unnoticed, but its effects can be significant. By employing blind studies, using standardized protocols, and training researchers to remain neutral, you can ensure participants’ feedback truly reflects their experiences, not the researcher’s influence.

5. Accessibility bias: excluding marginalized groups

Accessibility bias occurs when user research excludes participants with disabilities or other unique needs. This oversight, whether intentional or unintentional, results in products that fail to serve significant portions of the audience.

Example in practice: A new e-commerce website is tested exclusively on users without visual impairments. While it performs well for the general audience, users relying on screen readers struggle to navigate the interface, leading to usability complaints and accessibility compliance issues post-launch.

Why is it harmful?

Ignoring accessibility leads to products that are not inclusive, alienating users with disabilities and potentially violating legal accessibility standards (e.g., ADA compliance in the US). It’s also a missed opportunity to create products that work seamlessly for everyone, regardless of ability.

How to fix it?

Ensure accessibility by integrating inclusivity into every stage of your usability testing process. Let's look at the following steps:

  1. Proactively recruit participants with disabilities
    Don’t wait until accessibility issues arise to address them. Make inclusivity a core part of your research process by actively recruiting participants with diverse abilities. Reach out to advocacy groups, disability organizations, and online communities to ensure these voices are represented.
  2. Test with assistive technologies
    Incorporate tools like screen readers, voice commands, and alternative input devices into your usability testing. This ensures your product works seamlessly for users relying on assistive technologies. For example:
    • Test your app’s navigation with a screen reader like NVDA or VoiceOver.
    • Evaluate voice control compatibility for users who can’t use traditional input methods.
  3. Adopt inclusive design principles
    Build accessibility into your product from the start rather than treating it as an afterthought. This means using clear visual hierarchies, offering alternative text for images, and ensuring all interactive elements are keyboard-navigable.
  4. Follow accessibility guidelines
    Use established frameworks like the Web Content Accessibility Guidelines (WCAG) to audit and improve your product. These standards provide actionable recommendations for making digital products inclusive to users with disabilities.
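Some WCAG checks can be automated alongside sessions with real users. As one narrow example, the sketch below uses Python’s standard-library HTML parser to flag images with no alt attribute; the sample markup is hypothetical, and a script like this complements, but never replaces, testing with assistive-technology users or a full audit tool.

```python
from html.parser import HTMLParser

class AltTextCheck(HTMLParser):
    """Collect <img> tags that lack an alt attribute — one small,
    automatable slice of a WCAG 1.1.1 (non-text content) audit."""

    def __init__(self):
        super().__init__()
        self.missing_alt = []

    def handle_starttag(self, tag, attrs):
        attributes = dict(attrs)
        if tag == "img" and "alt" not in attributes:
            self.missing_alt.append(attributes.get("src", "<no src>"))

# Hypothetical page fragment for illustration.
html = """
<img src="hero.png" alt="Monthly spending chart">
<img src="logo.png">
"""

checker = AltTextCheck()
checker.feed(html)
print(checker.missing_alt)  # ['logo.png']
```

Running a check like this in a build pipeline catches regressions early, so accessibility testing time with participants can focus on the issues automation can’t see.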

The takeaway

Accessibility bias isn’t just an ethical oversight—it’s a fundamental flaw in user research. By prioritizing inclusivity, testing with assistive technologies, and adhering to accessibility standards, you can create products that serve everyone effectively and equitably.

Building bias-free research processes

Reducing bias in user research is not just about identifying it after the fact—it’s about embedding safeguards at every stage of your process to prevent bias from creeping in. This requires deliberate planning, careful execution, and consistent evaluation. Let’s explore a detailed, step-by-step strategy to ensure your research stays as unbiased as possible.

1. Start with a solid plan

Every successful research effort begins with a well-thought-out plan. Before you jump into conducting interviews or surveys, take time to clarify what you’re trying to achieve. Setting clear objectives helps you stay focused and minimizes the risk of bias seeping in through vague or ill-defined goals.

Begin by asking yourself:

  • What exactly am I trying to learn?
  • Are there assumptions I’m bringing into this study?
  • How might those assumptions impact the way I frame questions or interpret responses?

Document these assumptions and challenge them throughout the research process. Acknowledge where your expectations might create blind spots, and design your questions to explore areas where you could be wrong.

Additionally, pilot testing your questions is crucial. Before launching your research, test the study design with colleagues or a small group. This helps you catch any leading, unclear, or loaded questions that could influence responses. A pilot run also reveals logistical issues, ensuring your research flows smoothly when it’s time to work with actual participants.

2. Recruit participants who represent your audience

Selecting the right participants is the backbone of reliable research. If your participant pool doesn’t reflect the diversity of your actual user base, the data you collect will be skewed and less actionable.

Start by defining detailed screening criteria that align with the characteristics of your audience. If your product targets multiple user segments, ensure your recruitment strategy captures each of these groups. For example, when researching a budgeting app, include users with varying levels of financial literacy, from novices to advanced budgeters.

Recruiting participants isn’t a one-size-fits-all process. Avoid relying on a single recruitment channel, as this often results in a narrow participant pool. Explore multiple avenues, such as local communities, online forums, and professional networks, to engage a diverse range of users. For in-person studies, consider outreach efforts in public spaces or partnering with community organizations to reach underserved groups.

Once you’ve gathered participants, audit your sample. Look closely at the demographic data: Are you including individuals of different ages, income levels, cultural backgrounds, and abilities? If you notice gaps, adjust your recruitment strategy to fill them before moving forward.

3. Conduct research with intentionality

The way you interact with participants can make or break the authenticity of your findings. To minimize bias during the research phase, standardize your approach and maintain neutrality in every interaction.

Using a script for interviews and usability tests ensures consistency across sessions. A standardized script prevents improvisation, which could inadvertently introduce bias. However, leave room for participants to elaborate—structured questions combined with open-ended follow-ups create a balance between consistency and depth.

Your tone and body language matter just as much as your questions. Be careful not to nod, smile excessively, or make evaluative comments that might guide participants toward specific responses. For example, rather than saying, “That’s an interesting point,” respond neutrally with, “Can you elaborate on that?” Neutral phrasing encourages participants to express their true thoughts without worrying about how their answers are received.

Direct observation is another powerful tool. Instead of relying solely on what participants say, focus on what they do. For instance, in usability tests, pay attention to areas where users hesitate, backtrack, or show visible frustration. These behavioral cues often reveal pain points more clearly than verbal feedback.

4. Analyze findings with precision

Bias isn’t limited to recruitment or data collection—it often sneaks into the analysis phase. Researchers may unconsciously overemphasize findings that confirm their expectations or downplay contradictory evidence.

To counteract this, use triangulation. This involves combining data from multiple sources, such as interviews, surveys, usability tests, and analytics. By comparing insights across these methods, you get a fuller picture that reduces the impact of individual biases.
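Triangulation can be sketched as a simple corroboration filter: keep only the issues that surface in more than one method. The method names, issue labels, and two-source threshold below are hypothetical; real inputs would be the coded findings of each study.

```python
from collections import Counter

# Hypothetical issue sets from three independent methods.
sources = {
    "interviews": {"confusing onboarding", "slow export"},
    "survey": {"confusing onboarding"},
    "usability": {"confusing onboarding", "slow export", "small tap targets"},
}

def triangulate(sources, min_sources=2):
    """Keep issues corroborated by at least `min_sources` methods,
    reducing the weight any single method's bias carries."""
    counts = Counter(
        issue for issues in sources.values() for issue in issues
    )
    return sorted(i for i, n in counts.items() if n >= min_sources)

print(triangulate(sources))  # ['confusing onboarding', 'slow export']
```

Issues that fall below the threshold aren’t discarded; they are simply flagged as uncorroborated, which is useful context when presenting findings to stakeholders.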

Collaborate with a diverse team during analysis to spot blind spots. Different perspectives bring fresh interpretations, helping you identify patterns or inconsistencies that you might overlook alone. For example, one team member might notice a recurring frustration across user segments that others dismissed as isolated incidents.

Finally, when presenting your findings, don’t shy away from acknowledging potential limitations. Highlight areas where biases may have influenced the data or gaps in your participant pool that could affect the results. Transparency builds credibility and ensures stakeholders understand the context of your recommendations.

5. Commit to continuous improvement

Reducing bias isn’t a one-time task—it’s an ongoing commitment to refining your research process. After each study, reflect on what worked well and what didn’t. Were there moments where bias may have crept in despite your efforts? How can you adjust your methods to improve future research?

Consider creating a post-research checklist to evaluate the study’s effectiveness. Questions to ask include:

  • Did we recruit a sufficiently diverse sample?
  • Were there any leading questions in the study design?
  • Did participants feel comfortable sharing honest feedback?
  • Were our findings challenged by a neutral observer?

By consistently iterating on your process, you’ll build a robust framework for reducing bias and delivering high-quality insights over time.

How a budgeting app could address bias in user research

Imagine a team tasked with redesigning a budgeting app. They approach their research with inclusivity and rigor, recruiting a diverse participant pool that includes users with limited financial literacy and those unfamiliar with budgeting tools.

During usability testing, two key issues emerge:

  1. The app’s jargon is too complex, confusing less experienced users.
  2. Navigation features aren’t optimized for users with visual impairments.

Rather than dismissing these challenges as niche problems, the team prioritizes them in the redesign. They simplify the app’s language to make it accessible to beginners and ensure compatibility with screen readers and other assistive technologies.

The result is a more inclusive product that gains widespread adoption, meeting the needs of both advanced and novice users. This approach not only broadens the app’s appeal but also reflects the team’s commitment to designing with empathy and reducing bias.

Conclusion: bias reduction as a continuous process

Reducing bias in user research is not a single task—it’s a mindset. By embedding safeguards into every stage of the process, from planning to analysis, you create a framework for research that captures authentic, actionable insights.

Take the first step today: Audit your current practices, identify where bias might be lurking, and implement one new strategy to address it. Over time, these incremental changes will transform your research process into one that prioritizes inclusivity, diversity, and honesty.

Great research doesn’t strive for perfection—it strives for truth. By committing to reducing bias, you’ll build stronger connections with your users and create products that truly meet their needs.