
Learn which usability testing method works best for your product goals. This article covers moderated and unmoderated testing, remote and in-person sessions, with examples from successful product teams.
Your product team decides to run usability testing. You recruit participants, write tasks, schedule sessions. Two weeks and $5,000 later, you have results that don’t actually answer your most important questions. The problem isn’t the quality of your research. It’s that you chose the wrong testing method for what you needed to learn.
Airbnb discovered this the hard way in 2014. They ran moderated usability tests with 20 users to understand why conversion rates were dropping on their mobile app. After weeks of research, they found several usability issues but still couldn’t explain the 15% conversion drop.
So they switched methods. They deployed unmoderated testing with 500 users across different devices and geographic locations. Within three days, they identified the real issue: a localization bug that only appeared for users in specific countries running certain Android versions. Moderated testing with 20 users could never have caught this edge case.
The method you choose determines what you can discover. Moderated testing excels at understanding the “why” behind behavior; it’s the classic qualitative approach, focused on observing user motivations directly. Unmoderated testing leans quantitative, measuring task completion rates, performance, and efficiency at scale. Remote testing captures natural environments. In-person testing captures body language and emotional responses.
There’s no universal “best” method. There’s only the right method for your specific research questions, product stage, timeline, and budget.
Before diving into which method to choose, you need to understand what actually differs between approaches. Usability testing varies along two primary dimensions: moderation and location. Which combination fits depends on your project’s objectives, available resources, and the specific testing scenario.
Moderation refers to whether a researcher actively facilitates the session. In moderated testing, you’re present (physically or virtually) guiding the participant, asking follow-up questions, and probing deeper when you see something interesting. In unmoderated testing, participants complete tasks independently while their session is recorded for later review.
Location refers to where testing happens. In-person testing means you and the participant are in the same physical space. Remote testing means you’re in different locations, connected through screen sharing and video conferencing tools.
These two dimensions create four distinct testing approaches, each with different strengths, costs, and ideal use cases. Understanding these differences is critical because choosing the wrong approach wastes time and money while missing crucial insights.
The testing landscape has shifted dramatically since 2020. Before the pandemic, most enterprise companies defaulted to in-person moderated testing. Post-2020, remote moderated testing became standard, and unmoderated testing exploded in popularity as teams needed faster insights with distributed teams.
Current industry data from UserTesting’s 2024 research report shows that 68% of product teams now use remote moderated testing as their primary method, 45% regularly use unmoderated testing for validation, and only 15% still conduct a majority of sessions in person. The shift reflects both practical constraints and the realization that remote methods often capture better, more natural context.
Moderated usability testing involves a researcher actively guiding the session, observing participants, asking questions, and adapting in real-time. This method provides deep qualitative insights by uncovering the reasons behind user behavior. It’s ideal for complex products, early design stages, or when understanding user mental models is crucial.
Slack used moderated testing to reveal users’ conceptual confusion about “workspaces” during onboarding—insights that analytics or unmoderated tests missed. Sessions typically last 45-60 minutes with 5-8 flexible tasks. Recording is essential for later review.
The main advantage is adaptability, allowing exploration of emerging issues. The downside is limited scale and high resource demands, making it impractical for testing large user groups or minor iterations.
Unmoderated usability testing means participants complete tasks independently without a researcher present. You create a predetermined set of tasks and questions, participants receive instructions through a testing platform, and their screen and audio are recorded as they work through the scenario. Because no facilitator is needed, this approach scales to large studies where you want to observe behavior across many participants efficiently.
This method trades conversational depth for scale and speed. You can test with 100 participants in the time it takes to do 5 moderated sessions. You get results within hours instead of weeks. Without scheduling coordination, you can launch tests late in the day and have data by morning.
Dropbox uses unmoderated testing constantly for feature validation. When updating their file sharing permissions interface, they ran tests with 150 users over two days and found that 23% of users couldn’t set expiration dates on shared links, leading to a redesign before launch.
With moderated testing, they might have caught this with 8-10 participants, but it would have taken two weeks of scheduling and facilitation. Unmoderated testing gave statistically meaningful data within 48 hours.
Unmoderated testing excels when you have specific tasks to validate, need quantitative data about success rates, or are testing many user segments or devices. It’s perfect for questions like “Can users complete this checkout flow?” or “Does this new navigation improve task completion?”
Typical unmoderated tests run 10-20 minutes. You create a scenario with 3-5 tasks, write clear instructions, and include post-task questions for qualitative feedback. Participants complete everything at their own pace, in their environment, on their own devices.
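To make that structure concrete, a test plan with a scenario, 3-5 tasks, and post-task questions can be sketched as plain data. The scenario wording, field names, and email address below are hypothetical illustrations, not any testing platform’s actual schema.

```python
# A minimal sketch of an unmoderated test plan. Field names and task
# wording are illustrative assumptions, not a real platform's schema.
test_plan = {
    "scenario": "You need to share a large design file with a client "
                "who should only have access for one week.",
    "estimated_minutes": 15,  # keep unmoderated tests to 10-20 minutes
    "tasks": [
        "Upload the file provided in the instructions.",
        "Share it with client@example.com.",
        "Set the link to expire in 7 days.",
    ],
    "post_task_questions": [
        "How easy or difficult was setting the expiration date? (1-7)",
        "What, if anything, was confusing about sharing the file?",
    ],
}

def validate_plan(plan):
    """Sanity checks mirroring the guidance above: 3-5 tasks,
    a 10-20 minute session, and qualitative follow-up questions."""
    assert 3 <= len(plan["tasks"]) <= 5, "aim for 3-5 tasks"
    assert 10 <= plan["estimated_minutes"] <= 20, "keep sessions 10-20 minutes"
    assert plan["post_task_questions"], "include qualitative follow-ups"
    return True
```

Writing the plan down in this form before launch makes it easy to pilot-test the wording and catch ambiguous tasks, a mistake discussed later in this article.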
Platform choice matters. UserTesting offers participant panels and screening but costs $50,000-$100,000+ annually. Maze focuses on prototype testing with built-in analytics for $99-$500/month. Lookback supports both moderated and unmoderated testing with strong video quality for $200-$500/month.
The biggest advantage of unmoderated testing is quick scale. You can test edge cases, compare user segments, and gather enough data to identify patterns that small samples miss. A 5% failure rate matters with 100,000 users but might never surface with 8 moderated participants.
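The sample-size point is simple binomial arithmetic (my calculation, not figures from the article): the chance of seeing a given failure at least once grows quickly with participant count, assuming independent sessions.

```python
def chance_of_observing(failure_rate, participants):
    """Probability that at least one of `participants` independent
    sessions hits a failure affecting `failure_rate` of users."""
    return 1 - (1 - failure_rate) ** participants

# A 5% failure rate is easy to miss with 8 moderated participants...
small = chance_of_observing(0.05, 8)    # roughly a 1-in-3 chance
# ...but almost certain to surface at unmoderated scale.
large = chance_of_observing(0.05, 100)  # well above 99%
```

This is why rare-but-costly edge cases, like the Airbnb localization bug earlier in this article, tend to show up only in large unmoderated samples.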
The biggest disadvantage is losing the ability to ask “why.” You see what happened but must infer why. This works for straightforward issues but struggles with complex problems where understanding user mental models matters.
Location, whether testing happens remotely or in person, fundamentally changes what you can observe and how easily you can conduct research. This decision often matters more than the moderation choice.
Remote testing means you and the participant are in different physical locations, connected through screen sharing and video conferencing. The participant uses their own device in their own environment. You see their screen and hear their voice, but you are not physically present.
In-person testing means you are in the same room, whether at your office, the participant’s workplace, or a neutral location like a usability lab. You can see their full body language, observe their environment, and catch facial expressions, posture, and subtle nonverbal cues that may be missed remotely.
Remote testing grew before 2020 and accelerated during the pandemic. By 2024, it became the default for most product teams. The UserTesting State of User Research report found that 71% of research studies now happen remotely, up from 34% in 2019.
This shift happened because teams found remote testing often produces better insights. Participants use their actual devices in their actual environments with their actual internet connections. A designer testing their app on a new MacBook Pro with fiber internet misses how most users experience it on a 4-year-old phone with spotty WiFi.
Remote testing also expands your participant pool. Instead of recruiting within driving distance of your office, you can recruit globally. If you are building a tool for enterprise sales managers, you can test with actual managers at Microsoft, Oracle, and Salesforce rather than settling for whoever lives in your city.
The practical advantages add up: no need to book facilities, no travel time for participants, easier recording and sharing of sessions, and significantly lower cost per participant. Remote sessions typically cost $50-$150 per participant including recruiting, while in-person sessions run $200-$500 or more when you factor in facility rental, travel reimbursement, and time costs.
However, in-person testing still has valid use cases. When testing physical products that participants cannot have at home, in-person is required. When you need to observe subtle body language such as microexpressions, posture changes, or hand movements, video compression and camera angles make remote observation insufficient.
Some product categories benefit specifically from in-person testing. Medical device companies testing surgical equipment need to see how doctors manipulate tools. Automotive companies testing dashboard interfaces need to observe driver attention and reach distances. Gaming companies testing VR experiences need to observe physical space usage and movement. These specialized studies often run in a dedicated testing lab to ensure standardized conditions and minimize distractions.
The decision framework is simple: default to remote testing unless you have a specific reason to be in-person. Those reasons include testing physical products, when detailed body language observation matters critically, testing in specialized environments you cannot access remotely, or recruiting a population that cannot or will not participate remotely.
Understanding the four combinations of moderation and location gives you a decision framework for choosing the right approach. Each combination has distinct strengths, ideal use cases, and cost profiles, but in every case participants complete tasks that reflect real-world use of your product while you observe behavior and identify usability issues.
Remote moderated testing involves live facilitation via video call, allowing researchers to observe participants using their own devices in their natural environment and ask real-time questions. This method balances deep qualitative insights with practical efficiency.
Notion used this approach to understand international user behaviors without traveling. It’s ideal for exploring new features, complex workflows, distributed users, and real-world contexts.
Typical timeline: 2-3 weeks with 8-10 participants. Cost: $2,000-$5,000, lower if recruiting existing users.
In remote unmoderated testing, participants complete tasks independently on their own devices without a facilitator, with sessions recorded automatically. This method offers fast, large-scale quantitative data.
Spotify leveraged remote unmoderated tests with 200 users globally to identify regional usability issues quickly.
Best for validating task flows, comparing user segments, and rapid iteration.
Timeline: 2-5 days with 50-100 participants. Cost: $500-$2,000 plus platform fees.
In-person moderated testing, with a facilitator in the room, enables observation of full body language and interaction with physical products, providing rich data.
Tesla uses this for vehicle interface design to capture nuanced user behaviors not possible remotely.
Best for physical products, specialized environments, and early exploratory research.
Timeline: 3-4 weeks with 8-10 participants. Cost: $5,000-$15,000.
In in-person unmoderated testing, participants complete tasks independently in a controlled setting without a facilitator. This combination is mostly limited to large-scale academic or conference studies.
Generally, if testing in person, moderation is preferred; if unmoderated, remote is more practical.
Selecting the right usability testing method depends on your research goals, product stage, timeline, and budget. Align your choice with these factors to ensure effective user feedback throughout your design and development process.
Choose moderated testing for questions about why users struggle or what they expect, as it allows follow-up and conversation. Use unmoderated testing for questions about task completion rates or design performance, focusing on observation and measurement. For understanding real workflows or environmental impacts, opt for remote testing.
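One way to make this decision logic explicit is a small helper function. The question categories below are a deliberate simplification of the guidance above, and real studies often combine methods; treat this as a starting-point heuristic, not a rule.

```python
def recommend_method(question_type, physical_product=False):
    """Rough method picker following the decision guidance above.
    `question_type` is one of 'why', 'completion', or 'workflow'
    (an assumed simplification; real research questions overlap)."""
    # Default to remote unless the product itself forces in-person.
    location = "in-person" if physical_product else "remote"
    if question_type == "why":
        return (location, "moderated")    # probe motivations with follow-ups
    if question_type == "completion":
        return (location, "unmoderated")  # measure completion rates at scale
    if question_type == "workflow":
        return ("remote", "moderated")    # observe real environments live
    raise ValueError(f"unknown question type: {question_type}")
```

For example, `recommend_method("why")` returns `("remote", "moderated")`, matching the article’s default of remote moderated testing for exploratory “why” questions.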
Use moderated testing in early stages to explore concepts and user mental models. For later stages, use unmoderated testing and multivariate tests to validate refinements and compare design variations.
Unmoderated testing suits tight timelines (under a week) due to faster setup and results. Moderated testing requires more time (3-4 weeks) for recruiting and analysis. Longer timelines allow combining both methods for depth and validation.
Expect costs roughly as follows: remote unmoderated testing with 50 participants ($500-$2,000), remote moderated testing with 8 participants ($2,000-$5,000), and in-person moderated testing with 8 participants ($5,000-$15,000). Include additional team time beyond direct research costs.
New teams should start with moderated testing to build skills. Experienced teams can leverage unmoderated testing effectively by crafting clear tasks and interpreting results without real-time interaction.
Avoid running unmoderated tests prematurely, as ambiguous tasks can lead to confusing results. Moderated testing helps clarify and correct during sessions.
Even with the right method, execution errors can undermine your findings. These mistakes occur across all testing types and other qualitative research like focus groups, where moderator bias or poor questions affect insights.
In moderated testing, avoid leading questions like "Is this button confusing?" Instead, observe behavior and ask neutral questions such as "What are you trying to do?" In unmoderated testing, task phrasing matters. Use scenarios that prompt natural behavior rather than direct instructions.
Testing with the wrong users, such as college students for enterprise software, yields irrelevant results. Use screening to recruit participants matching your target audience’s behavior and context.
Tasks should reflect real user goals, not teach features. For example, use scenarios like "Find an affordable phone charger" instead of specific instructions. Pilot tests help catch ambiguous tasks before full launch.
Remote testing captures real environments including distractions, which provide valuable usage insights. However, extreme technical issues should disqualify sessions.
Don’t rely solely on task completion rates. Watch for frustration, delays, errors, hesitation, and workarounds to fully understand user behavior and uncover usability issues.
How many participants do you need for usability testing?
For moderated testing, 5-8 participants per user segment usually identify 80-90% of usability issues. For unmoderated testing with quantitative goals, aim for 30-50 participants to have statistical confidence. More participants don’t always mean better insights. Depth matters more than sample size.
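The 5-8 guideline traces back to the classic problem-discovery model, 1 - (1 - p)^n, where p is the probability that a single participant encounters a given problem. Assuming the commonly cited average of p = 0.31 (an assumption; detection rates vary by product and task):

```python
def problems_found(n, p=0.31):
    """Expected share of usability problems uncovered by n participants,
    using the classic 1 - (1 - p)^n discovery model. p = 0.31 is the
    commonly cited average detection probability, not a universal constant."""
    return 1 - (1 - p) ** n

# Five participants already uncover most problems...
five = problems_found(5)   # roughly 84%
# ...and eight push past 90%.
eight = problems_found(8)  # roughly 95%
```

The curve flattens quickly, which is why adding more moderated participants past 8-10 per segment rarely pays off; the marginal session mostly re-observes problems you have already seen.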
What’s the difference between moderated and unmoderated usability testing?
Moderated testing involves a researcher facilitating the session, asking questions, and probing behavior in real-time. Unmoderated testing means participants complete tasks independently without a researcher. Use moderated testing to understand why users struggle. Use unmoderated testing to validate what percentage can complete tasks successfully.
Should you use remote or in-person usability testing?
Choose remote testing unless you have specific reasons for in-person. Remote testing captures natural device and environment context, expands your participant pool geographically, costs less, and avoids scheduling complexity. Use in-person only for physical products, when body language observation is critical, or when testing in specialized environments.
How long should usability testing sessions be?
Moderated sessions usually last 45-60 minutes. Longer sessions cause fatigue and loss of attention. Unmoderated sessions should be 10-20 minutes maximum. Longer unmoderated tests lead to participant drop-off and biased samples.
How much does usability testing cost?
Remote unmoderated testing costs $500-$2,000 for 50 participants. Remote moderated testing costs $2,000-$5,000 for 8 participants. In-person moderated testing costs $5,000-$15,000 for 8 participants. Costs include recruiting, incentives, platform fees, and researcher time. Recruiting from existing users can reduce costs. Usability testing platforms offer scalable solutions for various budgets.
When should you do usability testing?
Test early and often. Guerrilla testing provides quick informal early-stage feedback. Test prototypes before building to validate concepts. Test beta versions before launch to catch critical issues. Test existing features quarterly to identify emerging problems. Catching problems early yields 10-100x ROI.
Can you do usability testing with competitors’ products?
Yes. Testing competitor products reveals strengths and weaknesses, helps understand industry standards users expect, and identifies differentiation opportunities. Recruit users of competitor products to learn their mental models and expectations.
What’s the difference between usability testing and user interviews?
Usability testing observes behavior, what users do when completing tasks. User interviews explore attitudes and opinions, what users think and say about needs and preferences. Behavior shows what happens. Interviews reveal motivations. Strong research uses both methods for full understanding.