Seven criteria make for strong research questions: open-ended, neutral, behavioral, singular, specific, grounded, and exploratory. Together they elicit actionable user insights.

Design multiple-choice questions that are clear, exhaustive, and unbiased so your survey yields dependable, actionable data and not misleading noise.
Multiple choice questions let you collect structured data you can actually analyze. Unlike open-ended questions that require manual coding, multiple choice produces numbers, percentages, and patterns you can act on, which is why it remains a primary tool for collecting quantitative data in market research.
But bad multiple choice questions produce data that’s worse than useless. They create false confidence in wrong conclusions. You think you learned something, make decisions based on it, then discover users didn’t understand your question or your options didn’t capture their reality.
Spotify once asked users “How do you discover new music?” with options including Discover Weekly, Radio, Browse, and Search. They forgot to include “Friend recommendations” and “Social media.” 47% selected “Other” and wrote in social sources. The survey design missed one of the most important discovery methods.
Good multiple choice questions:
Cover all reasonable answers users might have
Use clear, unambiguous language
Avoid bias and leading options
Produce data that informs decisions
Don’t frustrate users with missing options
Form the backbone of a survey that yields reliable, actionable results.
Well-designed multiple choice questions also make the process easier and more straightforward for survey takers, improving completion rates and the quality of the data collected.
The first design decision is whether users can select one answer or many. Single-answer (single-select) questions allow respondents to pick only one option; use them when only one answer makes sense, such as rating or demographic questions. Binary questions, which ask for a simple yes/no or thumbs up/down, are the simplest single-select case. Multiple-answer (multi-select) questions let respondents choose every option that applies, which captures more nuanced responses when several answers can be true at once.
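One way to see the distinction: a single-select question stores exactly one value per respondent, while a multi-select question stores a set. A minimal sketch of the two formats as data structures (the class names and validation logic are illustrative, not taken from any particular survey tool):

```python
from dataclasses import dataclass

@dataclass
class SingleSelectQuestion:
    # Respondent may pick exactly one of the listed options.
    prompt: str
    options: list[str]

    def validate(self, answer: str) -> bool:
        return answer in self.options

@dataclass
class MultiSelectQuestion:
    # Respondent may pick any subset of the listed options.
    prompt: str
    options: list[str]

    def validate(self, answers: set[str]) -> bool:
        return answers.issubset(set(self.options))

role_q = SingleSelectQuestion(
    "What's your primary role?",
    ["Product Manager", "Designer", "Engineer", "Marketing", "Other"],
)
features_q = MultiSelectQuestion(
    "Which features do you use regularly?",
    ["Playlists", "Podcasts", "Radio", "Discover Weekly", "Friend Activity"],
)
```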
Users pick only one option. Selecting another automatically deselects the first. This is the classic single-answer (radio button) format.
Use when:
Options are mutually exclusive
You need to identify the primary or most important thing
You want forced prioritization
Example: “What’s your primary role?”
Product Manager
Designer
Engineer
Marketing
Other
You can only have one primary role.
Users can select as many options as apply. This is the multiple-answer (multi-select, checkbox) format.
Use when:
Users legitimately use/want/do multiple things
You need comprehensive answers
Options aren’t mutually exclusive
Example: “Which features do you use regularly?” (select all that apply)
Playlists
Podcasts
Radio
Discover Weekly
Friend Activity
Most Spotify users use several features.
Using single response when multiple applies frustrates users. “I use three of these features but can only pick one?”
Using multiple response when you need priorities produces data you can’t prioritize. “Everyone selected everything. What actually matters?”
Notion learned this testing a feature priority question. First version used checkboxes (multiple response). Result: users checked 6-8 features on average. Data was useless for prioritization. Second version: “Select all you use” (checkboxes) followed by “Which ONE is most essential?” (radio button). This revealed both breadth and depth. When one question is trying to do two jobs, splitting it into two separate questions avoids confusion and produces data you can actually act on.
Your answer options need to cover what users will actually think. Listing all plausible answers as clear, distinct choices means respondents are never forced to skip a question or pick between overlapping options, which keeps your data reliable and unambiguous.
Include all common answers users might have. Missing obvious options forces users into wrong answers.
Bad: “How often do you use the mobile app?”
Daily
Weekly
Monthly
Problem: No “Never” or “Rarely” option. Non-users can’t answer honestly, so the data misrepresents actual usage.
Good: “How often do you use the mobile app?”
Multiple times per day
Daily
Several times per week
Weekly
Monthly
Rarely
Never
Even comprehensive lists miss something. "Other" lets users indicate their answer isn't listed.
Best practice: "Other (please specify): ___________"
The text field lets users explain what you missed. High "Other" percentages signal you missed important options.
Calendly surveyed meeting types with options like Sales, Support, Recruiting, Internal. They got 34% "Other" with write-ins mostly saying "Consulting" and "Coaching." Next survey added those options.
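A quick way to surface recurring “Other” write-ins is to normalize the text and count repeats. A minimal sketch (the write-in strings are made up for illustration):

```python
from collections import Counter

# Free-text answers collected from the "Other (please specify)" field.
other_responses = [
    "Consulting", "consulting calls", "Coaching", "coaching session",
    "Consulting", "Vendor check-in", "coaching",
]

# Normalize lightly so trivial variations group together.
normalized = [text.strip().lower() for text in other_responses]
counts = Counter(normalized)

for answer, count in counts.most_common(5):
    print(f"{answer}: {count}")
# Write-ins that repeat often are strong candidates for new answer options.
```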
Single-response options shouldn’t overlap. When categories overlap, users can’t tell which one to pick. Make every option mutually exclusive so each answer has exactly one home.
Bad: “How much would you pay?”
$0-10
$10-20
$20-50
Problem: Where do you select if you’d pay exactly $10?
Good: “How much would you pay?”
$0-9
$10-19
$20-49
$50+
Or use clear boundaries: “Less than $10 / $10-19 / $20-49 / $50 or more”
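If you collect an exact number, or want to sanity-check your buckets, the same non-overlapping boundaries can be expressed directly in code. A minimal sketch, assuming whole-dollar amounts:

```python
def price_bucket(amount: int) -> str:
    """Map a dollar amount to exactly one non-overlapping bucket."""
    if amount < 10:
        return "Less than $10"
    elif amount < 20:
        return "$10-19"
    elif amount < 50:
        return "$20-49"
    return "$50 or more"

assert price_bucket(10) == "$10-19"       # $10 has exactly one home
assert price_bucket(50) == "$50 or more"
```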
Sometimes users don't do/use/want any listed options. "None of the above" gives them an accurate answer.
"Which premium features interest you?"
Feature A
Feature B
Feature C
None of these interest me
Without "None," users who aren't interested either skip the question or randomly pick something.
Too few options (2-3) oversimplifies and forces users into categories that don't fit.
Too many options (15+) overwhelms users. They miss their actual answer while scrolling or give up.
Sweet spot: 4-8 options for most questions. Use up to 10-12 when necessary, but consider grouping into categories at that point.
Dropbox tested satisfaction questions with 3, 5, 7, and 9-point scales. Five and seven points produced equivalent data quality, but five-point scales had 12% faster completion. They standardized on five points for most questions.
How you phrase options affects whether users understand them. Use clear, familiar language so respondents interpret every question the way you intended.
All options should follow the same grammatical pattern.
Bad: "What's your biggest challenge?"
Finding information
It's hard to collaborate
The mobile app
Not enough training resources
Good: "What's your biggest challenge?"
Finding information
Collaborating with team
Using the mobile app
Accessing training resources
Parallel structure makes options easier to scan and compare.
Long options are hard to read and compare.
Bad: "Which of the following describes your experience with our customer support team when you contacted them about technical issues?"
I had a very positive experience and my issue was resolved quickly and professionally
My experience was somewhat positive though it took longer than expected
[continues...]
Good: "How would you rate your customer support experience?"
Excellent
Good
Fair
Poor
Very poor
Then ask follow-up questions for details if needed.
Write options in language users actually speak.
Bad (for consumer research): "Which authentication method do you prefer?"
OAuth
SAML
Two-factor authentication
Biometric
Good: "How do you prefer to log in?"
Email and password
Google/Facebook login
Text message code
Fingerprint/Face ID
Exception: Technical jargon is fine when surveying technical audiences who use those terms.
Linear surveys developers and uses technical language freely ("How often do you use GraphQL API?"). That's appropriate for their audience.
Vague options mean different things to different people.
Bad: "How satisfied are you with the product?"
Very satisfied
Satisfied
Somewhat satisfied
Not very satisfied
Not at all satisfied
Problems: "Somewhat satisfied" and "Not very satisfied" are vague. What's the difference?
Good:
Very satisfied
Satisfied
Neither satisfied nor dissatisfied
Dissatisfied
Very dissatisfied
These categories have clearer boundaries.
The order you present options affects responses. The same applies to alternative formats like dropdown and ranking questions, where the sequence of answer choices can nudge how respondents answer.
Always present scales in consistent order, never random.
For agreement scales: Strongly disagree → Strongly agree
For satisfaction scales: Very dissatisfied → Very satisfied
For frequency scales: Never → Always (or reverse: Always → Never)
Pick an order and stay consistent throughout your survey. Switching confuses users.
Chronological order: Time periods go from past to present or present to past consistently.
Natural order: Company sizes go small to large. Experience goes beginner to expert.
Alphabetical order: When options are equal and unordered (countries, product categories).
Most common first: Reduces scrolling for common answers. Good for dropdown menus.
For unordered options where you want to avoid order bias, randomize the sequence.
Example: When asking users to rate multiple features, randomize the feature order so each feature gets equal exposure to "first in list" advantage.
Figma randomizes feature lists in satisfaction surveys. They found the first three features always rated slightly higher due to response fatigue. Randomization equalizes this effect.
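A common implementation detail: shuffle the substantive options per respondent but keep anchor options such as “Other” or “None of the above” pinned to the end. A minimal sketch:

```python
import random

def randomized_options(options, pinned=("Other", "None of the above")):
    """Shuffle answer options per respondent, keeping anchor options last."""
    shuffle_these = [o for o in options if o not in pinned]
    keep_last = [o for o in options if o in pinned]
    random.shuffle(shuffle_these)  # in-place shuffle, different per respondent
    return shuffle_these + keep_last

features = ["Playlists", "Podcasts", "Radio", "Discover Weekly", "Other"]
print(randomized_options(features))
# e.g. ['Radio', 'Discover Weekly', 'Playlists', 'Podcasts', 'Other']
```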
How you phrase options can push users toward specific answers. Using certain words in answer choices can introduce response bias and compromise data quality.
Don't provide more positive than negative options, or vice versa.
Biased: "How would you rate the new feature?"
Excellent
Very good
Good
Fair
Problem: Three positive options, one neutral. No negative options. This pushes responses positive.
Balanced:
Excellent
Good
Fair
Poor
Very poor
Equal positive and negative options with a neutral middle.
Don't use emotionally charged words in options.
Biased: "How do you feel about the new pricing?"
I love the fair and reasonable pricing
It's acceptable
It's too expensive
Neutral: "How do you feel about the new pricing?"
Below what I expected
About what I expected
Above what I expected
When comparing options, describe them neutrally.
Biased: "Which checkout process do you prefer?"
Simple one-page checkout (recommended)
Traditional multi-step checkout
Neutral: "Which checkout process do you prefer?"
One-page checkout
Multi-step checkout
Effective survey design begins with clearly defined research questions, and each survey question should be crafted to directly address these research questions.
Beyond the options themselves, question design matters. Surveys designed with the respondent’s experience in mind produce more accurate and relevant data.
The question itself should be specific and unambiguous; clear wording is the foundation of accurate responses.
Vague: “How do you feel about it?”
Clear: “How satisfied are you with the mobile app’s performance?”
Don’t ask multiple things in one question.
Bad (double-barreled): “How satisfied are you with the speed and reliability of the product?”
Problem: Users might be satisfied with speed but not reliability, so they can’t give one honest answer for both. Avoid double-barreled questions that bundle two topics into one.
Good: Split into two questions:
“How satisfied are you with the product’s speed?”
“How satisfied are you with the product’s reliability?”
Sometimes users need context to answer accurately.
Without context: "How often do you use Projects?"
With context: "In the past month, how often have you used the Projects feature?"
The timeframe helps users answer accurately.
Indicate if questions are required. Don't surprise users when they try to submit.
Use "(optional)" or "(required)" labels or visual indicators like asterisks.
When asking the same question about multiple items, use a matrix format. Matrix questions are still closed-ended, so they collect structured data efficiently.
Instead of:
How satisfied are you with Feature A?
How satisfied are you with Feature B?
How satisfied are you with Feature C?
Use matrix: “How satisfied are you with each feature?”
Benefits: Faster completion, easier comparison, less repetitive
Drawbacks: Hard on mobile, can encourage satisficing (picking same answer for all)
Best practice: Limit to 5-7 rows maximum. Randomize row order to prevent order bias.
Notion uses matrix questions for feature satisfaction but limits them to 5 features per matrix. More than that and completion rates drop.
Over 50% of survey responses come from mobile devices. Design for small screens. Making surveys mobile-friendly takes extra work, but it is essential for high response rates.
Stack options vertically, not horizontally. Horizontal options get cut off or require scrolling.
Good for mobile: ○ Option A
○ Option B
○ Option C
Bad for mobile: ○ Option A ○ Option B ○ Option C
Make radio buttons and checkboxes easy to tap. Minimum 44x44 pixels for touch targets.
Dropdown menus are clunky on mobile. Show options directly when possible; reserve dropdowns for long lists such as dates or locations, and minimize them on mobile to keep surveys quick to complete.
Mobile-friendly: Radio buttons (all visible)
Mobile-unfriendly: Dropdown menu requiring multiple taps
Matrix questions are nearly impossible to use on small screens. Use individual questions on mobile or skip matrix entirely.
Calendly serves different question formats based on device. Desktop users see matrix questions. Mobile users see individual questions for the same content.
Before sending surveys to users, test them. Testing also shows where sensitive or demographic questions need an opt-out option so respondents aren’t forced to answer something they’d rather skip.
It's also important to consider which research methods are best suited for your objectives. For some types of data, alternative research methods like usability testing or A/B testing may be more effective than surveys.
Have 3-5 people complete the survey while thinking aloud. Listen for:
Confusion about what questions ask
Difficulty choosing between options
Missing options forcing "Other"
Questions that feel biased
Review all "Other" responses from test users. If many write the same thing, add it as an option.
During live surveys, watch for:
Questions with high skip rates (confusing or sensitive)
High "Other" percentages (missing options)
Unusual answer distributions (might indicate question problems)
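These health checks are easy to automate once responses sit in a table. A minimal sketch with pandas, assuming one row per respondent and missing values for skipped questions (the column names and data are illustrative):

```python
import pandas as pd

responses = pd.DataFrame({
    "primary_use_case": ["Sales", "Other", None, "Support", "Other"],
    "satisfaction":     ["Good", "Fair", "Good", None, "Poor"],
})

for question in responses.columns:
    answered = responses[question].notna()
    skip_rate = 1 - answered.mean()
    # Share of "Other" among people who actually answered.
    other_rate = (responses.loc[answered, question] == "Other").mean()
    print(f"{question}: skip {skip_rate:.0%}, 'Other' {other_rate:.0%}")
```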
Stripe tests every survey with 10 internal users before external launch. They found this catches 80% of unclear questions and missing options.
Once you have responses, analyze systematically. Multiple choice questions primarily yield quantitative data, which allows for quick identification of trends and easy analysis. To gain deeper insights, complement these with qualitative data from open-ended questions. Common metrics like Net Promoter Score (NPS), which uses a 0-10 rating scale to evaluate customer loyalty and willingness to recommend, are often analyzed alongside multiple choice survey data.
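For reference, NPS itself reduces to a simple calculation over the 0-10 ratings: the share of promoters (9-10) minus the share of detractors (0-6). A minimal sketch:

```python
def net_promoter_score(ratings: list[int]) -> float:
    """NPS = % promoters (9-10) minus % detractors (0-6), on a -100..100 scale."""
    total = len(ratings)
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return 100 * (promoters - detractors) / total

print(net_promoter_score([10, 9, 8, 7, 6, 10, 3, 9]))  # 25.0
```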
The basic output: what percent selected each option.
"Primary use case?"
Sales meetings: 42%
Customer support: 28%
Recruiting: 18%
Internal meetings: 12%
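With responses loaded into a pandas Series, a frequency table like the one above is a one-liner. A minimal sketch (the data is illustrative):

```python
import pandas as pd

use_case = pd.Series(["Sales meetings", "Customer support", "Sales meetings",
                      "Recruiting", "Internal meetings", "Sales meetings"])

# Percent of respondents selecting each option, sorted most common first.
print(use_case.value_counts(normalize=True).mul(100).round(1))
```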
Compare answers across segments.
"Primary challenge?" by company size:
Small (1-10): Finding information (45%)
Large (50+): Collaborating across teams (52%)
Different segments have different priorities.
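A segment comparison like this is a cross-tabulation of two answer columns. A minimal sketch with pandas (column names and values are illustrative):

```python
import pandas as pd

df = pd.DataFrame({
    "company_size": ["1-10", "1-10", "50+", "50+", "50+", "1-10"],
    "primary_challenge": [
        "Finding information", "Finding information",
        "Collaborating across teams", "Collaborating across teams",
        "Finding information", "Collaborating across teams",
    ],
})

# Row-normalized crosstab: each cell is the % of that segment choosing that answer.
table = pd.crosstab(df["company_size"], df["primary_challenge"], normalize="index")
print(table.mul(100).round(0))
```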
For checkbox questions, calculate:
Percentage selecting each option
Average number of selections per user
Common combinations
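For multi-select data, store each respondent’s selections as a list or set and the three metrics above fall out directly. A minimal sketch:

```python
from collections import Counter

# Each inner list is one respondent's answers to a "select all that apply" question.
selections = [
    ["Playlists", "Podcasts"],
    ["Playlists", "Radio", "Discover Weekly"],
    ["Playlists"],
    ["Podcasts", "Discover Weekly"],
]
n = len(selections)

# Percentage selecting each option.
option_counts = Counter(option for answer in selections for option in answer)
for option, count in option_counts.most_common():
    print(f"{option}: {100 * count / n:.0f}%")

# Average number of selections per respondent.
print(sum(len(answer) for answer in selections) / n)

# Most common combinations of selections.
combo_counts = Counter(frozenset(answer) for answer in selections)
print(combo_counts.most_common(3))
```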
With sufficient sample sizes (100+ per segment), test whether differences between groups are statistically significant.
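One standard check is a chi-square test of independence on the segment-by-answer counts. A minimal sketch with SciPy (the counts are illustrative):

```python
from scipy.stats import chi2_contingency

# Rows: segments (small, large). Columns: respondent counts per answer option.
observed = [
    [45, 30, 25],   # Small (1-10)
    [20, 52, 28],   # Large (50+)
]

chi2, p_value, dof, expected = chi2_contingency(observed)
print(f"chi2 = {chi2:.2f}, p = {p_value:.4f}")
# A small p-value (e.g. < 0.05) suggests the segments genuinely answer differently.
```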
Skipping testing means missing unclear questions and missing options. Always test with at least 3-5 people.
Survey fatigue kills response quality. Keep surveys under 10-15 questions when possible.
Notion targets 8-10 questions for most surveys. Completion rate: 76%. When they tested 20-question surveys, completion dropped to 43%.
Group similar questions together. Don't jump between demographics, satisfaction, and feature usage randomly. Logical flow improves completion.
Rating scales should include neutral options. Forcing users to lean positive or negative when they're truly neutral produces bad data.
High "Other" percentages signal problems. Read what people write. It reveals missing options and question confusion.
Multiple choice questions work best as one part of a well-designed survey that mixes formats, each giving respondents clear, answerable options:
Multiple choice for structured data
Rating scales for satisfaction and agreement
Open-ended questions for qualitative depth
Matrix questions for efficient comparison
Begin with easy, engaging questions. Don't lead with demographics.
Keep all questions about a topic together before moving to the next topic.
Put demographic questions last. They’re boring but important, so collect them at the end, where abandonment hurts least.
Show users how far through the survey they are. This reduces abandonment.
End with genuine thanks and explain how feedback will be used.
Different tools handle multiple choice differently.
Google Forms: Free, simple, good for basic surveys.
Typeform: Beautiful, conversational interface, good completion rates.
SurveyMonkey: Robust features, professional reporting.
Qualtrics: Enterprise features, advanced logic and analysis.
Sprig: In-product surveys, good for user research.
Survey tools in analytics platforms: Amplitude, Mixpanel, PostHog have built-in surveys.
The best tool depends on your needs, budget, and whether you want standalone surveys or in-product prompts.
Good multiple choice questions disappear. Users answer them quickly without confusion or frustration. A good survey is one where questions are clear, concise, and easy for respondents to answer. Bad questions make users pause, reread, wonder what you’re really asking, or pick answers that don’t quite fit.
Test this yourself: if you have to explain what your question means or defend your answer options, they’re not clear enough.
The goal isn’t impressive surveys. It’s collecting accurate data that helps you build better products. Simple, clear, comprehensive questions do that. Complex, clever, or leading questions don’t.
Ready to design better survey questions? Download our free Survey Design Checklist with question review frameworks, option templates, and testing protocols.
Need help with your survey design? Book a free 30-minute consultation to review your questions and improve response quality.