
How to calculate research sample size: A practical guide for user and market research

The right sample size depends on your method, your precision needs, and the stakes of the decision. This guide provides formulas, rules of thumb, and method-specific tables for every common research context.

CleverX Team

“How many participants do I need?” is the most common question in research planning, and the answer is never a single number.

The right sample size depends on whether you are running qualitative or quantitative research, what decisions the findings will inform, how much precision you need, and how many distinct user segments you are studying. Five participants can be enough for a usability test. The same question asked as a survey might require 400.

Getting sample size wrong in either direction costs you. Too few participants produce findings that do not generalize and that stakeholders rightfully question. Too many waste budget and time on precision you do not need for the decision at hand.

This guide provides practical formulas, rules of thumb, and method-specific guidance for calculating sample sizes across the most common user and market research contexts.

Key takeaways

  • Qualitative and quantitative research follow completely different sample size logic. Qualitative targets thematic saturation. Quantitative targets statistical precision.
  • 5 participants for usability testing is a valid guideline, but only for formative testing with a single homogeneous user segment.
  • The standard formula for survey sample size (385 respondents for 95% confidence, plus/minus 5% margin) applies to general population surveys. Smaller populations and less precision need fewer respondents.
  • For A/B testing, sample size depends on baseline conversion rate, minimum detectable effect, and statistical power. Use an online calculator rather than guessing.
  • Always recruit 20-30% more than your target to account for no-shows, dropouts, and disqualifications.
  • Match your sample size to the stakes of the decision. Quick directional checks need far fewer participants than studies informing multi-million dollar product investments.

How do you calculate sample size for qualitative research?

Qualitative research does not follow statistical sample size formulas. The goal is not statistical representativeness but thematic saturation: the point where additional participants stop producing new insights.

Usability testing: the 5-participant guideline

Jakob Nielsen’s research showed that 5 participants identify approximately 80% of usability problems in a design. This guideline is widely adopted for good reason, but it applies specifically to:

  • Formative usability testing (finding problems, not measuring rates)
  • Single segment of homogeneous users
  • Iterative testing where you test, fix, and test again

When to use fewer than 5:

  • Pilot sessions to validate your research protocol
  • Quick checks on a specific design question (“Can users find the settings page?”)
  • Expert reviews where the question is “does any user fail?” not “what percentage fail?”

When to use more than 5:

  • Multiple user segments (5-8 per segment)
  • Studies where stakeholders need larger samples to trust the findings
  • Tasks with high variability where different users take fundamentally different approaches

For a deeper dive into usability testing approaches, see our complete guide.

User interviews: 15-25 for saturation

For interview-based research, 15-25 participants is the typical range for reaching thematic saturation with a single segment.

| Interview scope | Recommended sample | Notes |
| --- | --- | --- |
| Single segment, focused topic | 12-15 | Saturation usually reached by interview 12 |
| Single segment, broad exploration | 15-20 | More variability requires more participants |
| Two distinct segments | 10-15 per segment | 20-30 total |
| Three or more segments | 8-12 per segment | Prioritize depth per segment |
| Niche professional audience | 8-12 | Hard-to-recruit experts still produce rich data |

Signs you have reached saturation:

  • The last 3-4 interviews produce no themes absent from earlier sessions
  • You can predict what themes will appear before analyzing a new session
  • New participants provide additional examples of known themes rather than entirely new ones

Focus groups: 2-4 groups minimum

Most research programs run 2-4 focus groups per research question. A single group carries the risk that group dynamics were atypical. Two groups provide cross-validation. Additional groups are needed for:

  • Distinct audience segments (at least one group per segment)
  • Geographic diversity
  • High-stakes decisions where stakeholder confidence requires more data

Each group typically includes 6-10 participants.

How do you calculate sample size for quantitative research?

Quantitative sample sizes are calculated based on statistical principles: confidence level, margin of error, and expected variability.

The standard survey sample size formula

For proportional data (percentages), the formula is:

n = (Z² × p × (1 − p)) / e²

Where:

  • n = required sample size
  • Z = Z-score for desired confidence level (1.96 for 95%, 2.58 for 99%)
  • p = estimated proportion (use 0.5 if unknown, which maximizes required sample)
  • e = desired margin of error (0.05 for plus/minus 5%)

Example calculation:

For 95% confidence and plus/minus 5% margin of error:

n = (1.96² × 0.5 × 0.5) / 0.05²
n = (3.8416 × 0.25) / 0.0025
n = 384.16, rounded up to 385 respondents

This is where the familiar “385 responses” minimum comes from. At this sample size, if 62% of respondents select an option, you can say the true population value is between 57% and 67% with 95% confidence.
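As a sanity check, the formula is a few lines of Python using the standard library's NormalDist for the Z-score (the function name here is ours, for illustration):

```python
from math import ceil
from statistics import NormalDist

def survey_sample_size(confidence: float = 0.95, margin: float = 0.05,
                       p: float = 0.5) -> int:
    """Required respondents for estimating a proportion in a large population.

    confidence: e.g. 0.95 for 95% confidence
    margin:     desired margin of error, e.g. 0.05 for plus/minus 5%
    p:          expected proportion; 0.5 is the worst case (largest sample)
    """
    z = NormalDist().inv_cdf((1 + confidence) / 2)  # 1.96 for 95% confidence
    return ceil(z**2 * p * (1 - p) / margin**2)

print(survey_sample_size(0.95, 0.05))  # 385, the familiar minimum
print(survey_sample_size(0.90, 0.10))  # 68, for a quick directional read
```

Rounding up at the end (rather than rounding 1.96² to 3.84 mid-calculation) is what lands this on 385 rather than 384.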

Adjusting for smaller populations

The formula above assumes a very large population. For smaller populations (under 10,000), apply the finite population correction:

n_adjusted = n / (1 + (n-1) / N)

Where N is the total population.

| Population size | Unadjusted sample | Adjusted sample needed |
| --- | --- | --- |
| 100,000+ | 385 | 384 (negligible difference) |
| 10,000 | 385 | 370 |
| 5,000 | 385 | 357 |
| 1,000 | 385 | 278 |
| 500 | 385 | 218 |
| 200 | 385 | 132 |

Smaller populations require proportionally fewer respondents to achieve the same precision.
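The correction is easy to script. A sketch in Python (names are ours; the unrounded large-population n is carried through so rounding happens only once, at the end):

```python
from math import ceil
from statistics import NormalDist

def fpc_sample_size(population: int, confidence: float = 0.95,
                    margin: float = 0.05) -> int:
    """Sample size with the finite population correction applied."""
    z = NormalDist().inv_cdf((1 + confidence) / 2)
    n0 = z**2 * 0.25 / margin**2               # unrounded large-population n (~384.2)
    return ceil(n0 / (1 + (n0 - 1) / population))

for pop in (10_000, 1_000, 500):
    print(pop, fpc_sample_size(pop))  # 370, 278, 218
```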

Quick reference: confidence level and margin of error combinations

| Confidence level | Margin of error | Required sample (large population) |
| --- | --- | --- |
| 90% | plus/minus 10% | 68 |
| 90% | plus/minus 5% | 271 |
| 95% | plus/minus 10% | 97 |
| 95% | plus/minus 5% | 385 |
| 95% | plus/minus 3% | 1,068 |
| 99% | plus/minus 5% | 666 |

For most product research decisions, 95% confidence with plus/minus 5-10% margin is sufficient. Academic research and regulatory studies may require tighter precision.

For guidance on designing the survey itself, see our survey design guide.

What sample sizes do specific research methods need?

Different methods have different sample requirements based on what they measure and how they analyze data.

| Research method | Typical sample | Type | Notes |
| --- | --- | --- | --- |
| Formative usability testing | 5-8 per segment | Qualitative | Problem identification, not rates |
| Summative usability testing | 30-50 | Quantitative | Task completion rate estimation |
| Tree testing | 50-100 | Quantitative | Reliable completion rate data |
| Card sorting (unmoderated) | 20-50 | Mixed | Co-occurrence matrix reliability |
| First-click testing | 50-100 | Quantitative | Reliable heatmap density |
| Preference testing | 30-50 | Quantitative | Directional confidence on design preference |
| General population survey | 385+ | Quantitative | plus/minus 5%, 95% confidence |
| Segmentation research | 200-400 per segment | Quantitative | Subgroup analysis requires segment-level samples |
| NPS tracking | 100-200 | Quantitative | Quarterly benchmark reliability |
| Diary studies | 12-20 | Qualitative | Account for 20-30% dropout |
| A/B testing | 1,000-10,000+ per variant | Quantitative | Depends on baseline rate and MDE |
| Concept testing | 100-200 | Quantitative | Directional validation |

A/B testing sample sizes

A/B testing sample sizes depend on four variables:

  • Baseline conversion rate of the control (lower baselines need larger samples)
  • Minimum detectable effect (MDE) or the smallest improvement worth detecting
  • Statistical power (typically 80%, meaning 80% chance of detecting a real effect)
  • Significance level (typically 0.05, meaning 5% false positive rate)

Practical guideline: at 80% power and a 5% significance level, detecting a 10% relative improvement on a 3% baseline conversion rate requires roughly 50,000-55,000 sessions per variant. Detecting a 20% relative improvement on the same baseline needs roughly 14,000 per variant. Halving the detectable effect roughly quadruples the required sample.

Use an online calculator (Evan Miller, Statsig, or Optimizely) rather than calculating by hand. These tools account for the interaction between variables.
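Those calculators are typically built on the standard two-proportion sample size formula. A sketch of it in Python (function name is ours; real tools may differ slightly in formula variant, so treat the outputs as approximate):

```python
from math import ceil, sqrt
from statistics import NormalDist

def ab_sample_per_variant(baseline: float, relative_mde: float,
                          power: float = 0.80, alpha: float = 0.05) -> int:
    """Per-variant sample for a two-sided test comparing two proportions."""
    p1 = baseline
    p2 = baseline * (1 + relative_mde)       # treatment rate at the MDE
    p_bar = (p1 + p2) / 2                    # pooled proportion
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)   # ~1.96
    z_beta = NormalDist().inv_cdf(power)            # ~0.84
    numerator = (z_alpha * sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(numerator / (p2 - p1) ** 2)

print(ab_sample_per_variant(0.03, 0.10))  # roughly 53,000
print(ab_sample_per_variant(0.03, 0.20))  # roughly 14,000
```

Note how the per-variant requirement falls by almost 4x when the detectable effect doubles, which is why the MDE input dominates A/B test planning.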

Task completion rate studies

If the goal is estimating what percentage of users can complete a task (not just finding problems):

| Precision goal | Sample needed | Use case |
| --- | --- | --- |
| Above/below 80% threshold (90% confidence) | ~30 | Quick benchmark |
| Completion rate within plus/minus 10% | ~65 | Moderate precision |
| Completion rate within plus/minus 5% | ~250 | High precision |
| Comparing two groups | Double the above | Between-group comparison |
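These figures come from the same proportion formula, with p set to the expected completion rate instead of the worst-case 0.5. A sketch (assuming an expected completion rate around 80%; the function name is ours, and results land near the table's approximations):

```python
from math import ceil
from statistics import NormalDist

def completion_rate_sample(expected_rate: float = 0.8, margin: float = 0.10,
                           confidence: float = 0.95) -> int:
    """Users needed to estimate a task completion rate within +/- margin."""
    z = NormalDist().inv_cdf((1 + confidence) / 2)
    return ceil(z**2 * expected_rate * (1 - expected_rate) / margin**2)

print(completion_rate_sample(0.8, 0.10))  # ~62 users for +/-10%
print(completion_rate_sample(0.8, 0.05))  # ~246 users for +/-5%
```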

What are the most common sample size mistakes?

Applying quantitative logic to qualitative research

“We interviewed 8 users and 5 said X, so 63% of users feel this way.” This is not a valid statistical claim. Qualitative research identifies themes and patterns. It does not produce reliable percentages. Report qualitative findings as themes with supporting quotes, not as percentages.

Ignoring attrition

For diary studies, longitudinal research, and multi-session studies, participants drop out. Recruit 20-30% more than your target completion sample. If you need 15 completed diary studies, recruit 20. If you need 200 survey completions, send to at least 267 (assuming a ~75% completion rate). For guidance on recruiting participants, see our methods guide.
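The buffer arithmetic is simple enough to inline in a planning script: divide the target by the expected completion rate rather than multiplying by a guessed markup (a sketch; the function name is ours):

```python
from math import ceil

def recruits_needed(target_completes: int, completion_rate: float) -> int:
    """Participants to recruit so that, after no-shows, dropouts, and
    disqualifications, you still expect `target_completes` usable sessions."""
    return ceil(target_completes / completion_rate)

print(recruits_needed(15, 0.75))   # 20, matching the diary-study example
print(recruits_needed(200, 0.75))  # 267 survey sends
```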

Confusing sample size with sample quality

400 responses from a biased panel are worse than 100 responses from a well-screened, representative sample. Response bias from self-selection, leading questions, or non-representative sourcing undermines findings regardless of sample size.

Over-powering low-stakes research

Not every research question needs 385 respondents. For an early-stage design decision between two concepts, 20-30 responses may be sufficient to identify a clear preference. Match your sample size to the stakes and reversibility of the decision.

Under-powering segment analysis

If you plan to compare results across 3 customer segments, you need sufficient sample within each segment, not just in total. 300 total responses split unevenly across segments (200 from one, 60 from another, 40 from the third) does not support reliable comparison.

Forgetting the segment multiplier

Every additional segment you want to analyze separately multiplies your total sample requirement. Two segments need roughly double. Three segments need triple. Plan recruitment and budget accordingly.

How do you decide the right sample size for your study?

Use this decision framework:

Step 1: Identify whether your research is qualitative or quantitative.

  • If qualitative (interviews, usability testing, observation): follow saturation guidelines (5-25 per segment depending on method)
  • If quantitative (surveys, A/B tests, benchmarks): use the statistical formula or an online calculator

Step 2: Count your segments.

Multiply the per-segment requirement by the number of distinct segments you need to analyze separately.

Step 3: Match precision to stakes.

| Decision stakes | Precision needed | Sample approach |
| --- | --- | --- |
| Exploratory, reversible | Directional | Lean samples (5-8 qual, 50-100 quant) |
| Moderate investment | Reasonable confidence | Standard samples (15-20 qual, 200-400 quant) |
| High investment, hard to reverse | High confidence | Robust samples (20-30 qual, 400+ quant) |
| Regulatory or public reporting | Maximum precision | Large samples with documented methodology |

Step 4: Add an attrition buffer.

Add 20-30% to your target for any method where participants can drop out, fail screening, or produce unusable data.

Step 5: Validate feasibility.

Can you actually recruit this many qualified participants within your timeline and budget? If not, reduce precision requirements, narrow your segments, or adjust the research method. A mixed methods approach that combines a smaller quantitative sample with qualitative depth often delivers more actionable insights than a large but shallow survey.
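Steps 2 and 4 reduce to a one-liner worth keeping in your planning spreadsheet: multiply the per-segment requirement by the segment count, then buffer for attrition (a sketch; the function name and example numbers are illustrative, drawn from the qualitative guidelines above):

```python
from math import ceil

def total_recruits(per_segment: int, segments: int,
                   completion_rate: float = 0.8) -> int:
    """Segment multiplier (step 2) plus attrition buffer (step 4)."""
    return ceil(per_segment * segments / completion_rate)

# 15 interviews per segment, 3 segments, ~20% expected attrition
print(total_recruits(15, 3, completion_rate=0.8))  # 57 recruits for 45 completes
```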

Sample size calculation checklist

Before calculating

  • Define whether the study is qualitative, quantitative, or mixed methods
  • Identify the number of distinct segments to analyze
  • Determine what precision level the decision requires
  • Choose the confidence level and margin of error for quantitative studies

During calculation

  • Use the correct formula or guideline for your research method
  • Apply finite population correction if your population is under 10,000
  • Multiply per-segment requirements by the number of segments
  • Add 20-30% attrition buffer to the calculated sample

Before recruiting

  • Verify that the calculated sample is feasible within budget and timeline
  • Confirm that recruitment sources can deliver qualified participants at the needed volume
  • Document your sample size rationale for stakeholder communication
  • Plan for how you will handle under-enrollment if recruitment falls short

Frequently asked questions

Why does everyone say you need 5 participants for usability testing?

Jakob Nielsen’s research found that 5 users identify roughly 80% of usability issues in formative testing with a homogeneous group. It applies to problem discovery, not rate measurement. For task completion rates, multiple segments, or summative benchmarking, you need more.

Is 100 survey responses enough?

It depends on what you need. 100 responses gives you plus/minus 10% margin at 95% confidence for the total sample. That is fine for directional insights with a single audience. It is not enough for segment-level comparison or precise percentage estimates. If you need to compare three user segments, you need 100+ per segment.

How do you calculate sample size for A/B tests?

Use an online calculator (Evan Miller is the most popular). Input your baseline conversion rate, the minimum effect you want to detect, 80% power, and 95% significance. The calculator outputs the required sample per variant. Do not guess. Small changes in these inputs produce dramatically different sample requirements.

What if I cannot recruit enough participants?

Adjust your approach. Reduce the number of segments, accept wider margins of error, switch from quantitative to qualitative research methods, or use a mixed methods design that combines a smaller survey with in-depth interviews. 12 well-screened qualitative interviews often produce more actionable insights than an underpowered 80-response survey.

Does sample size differ for B2B vs. B2C research?

The statistical principles are identical. The practical difference is recruitment difficulty. B2B audiences (enterprise buyers, niche professionals) are harder to recruit, which often limits feasible sample sizes. Compensate by accepting wider margins of error for quantitative B2B research or leaning more heavily on qualitative methods where 8-15 expert participants produce rich, actionable findings.