Master card sorting to build intuitive navigation and information architecture. This practical tutorial covers open and closed card sorting methods, analysis techniques, and examples.
Card sorting reveals the gap between how you organize information and how users expect to find it. It shows how users categorize content and perceive your navigation structure, uncovering differences in mental models and expectations.
Card sorting is a research method where participants organize topics into categories that make sense to them. You write each piece of content, feature, or function on a separate card. Participants sort these cards into groups and name those groups. In open card sorting, users define their own categories and labels, which helps reveal how they perceive and organize different concepts within your product.
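To make the output concrete, here is a minimal sketch of how one participant's open-sort result might be recorded as data. The card labels and category names are hypothetical, invented purely for illustration:

```python
# One participant's open card sort result: they grouped the cards
# and named each group themselves. All labels here are hypothetical.
participant_result = {
    "participant_id": "P01",
    "groups": {
        "Account": ["Change Password", "User Permissions", "Billing Info"],
        "Messaging": ["Send Email Campaign", "Email Templates"],
    },
}
```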
The method works because it externalizes mental models. Instead of asking users “How would you organize these features?” (which produces vague answers), you watch them physically organize cards. Their grouping choices reveal how they naturally conceptualize relationships between items.
Card sorting excels at three specific scenarios. First, building new navigation for products with substantial content or features, such as when creating a new website or app. Second, restructuring existing navigation that user testing reveals is confusing. Third, understanding how different user segments conceptualize your product, since different mental models among user groups lead to different categorizations.
Spotify used card sorting when expanding beyond music streaming into podcasts and audiobooks. They needed to understand whether users thought of these as separate products requiring different navigation or integrated content types that should flow together. Card sorting with 80 users revealed that users grouped music and podcasts together by mood and activity, but treated audiobooks as distinctly separate long-form content requiring different navigation patterns.
Don’t use card sorting for small-scale decisions. If you’re deciding between two labels for one button, run preference tests or A/B tests instead. Card sorting shines when organizing 20+ items into coherent structure.
Card sorting delivers a range of benefits for anyone looking to improve their website or app’s information architecture. By running a card sorting session, you gain valuable insight into users’ mental models: how they naturally organize and categorize information. This understanding is crucial for designing a menu structure that feels intuitive and helps users find what they need quickly.
Card sorting can be adapted to fit a variety of research contexts, thanks to its flexible formats. Choosing the right format depends on your research goals, available resources, and the needs of your target audience.
In-person card sorting: This traditional approach uses physical index cards and is ideal when you want to observe participants directly, ask follow-up questions, or work with groups for whom digital tools are a barrier, such as children or participants less comfortable with technology.
Remote card sorting: Leveraging online card sorting tools, remote sessions allow you to reach a geographically diverse audience and gather data efficiently. This format is especially useful for B2B research or when your target users are spread across different locations.
Paper card sorting: The classic method involves participants sorting printed cards on a table. While simple and tactile, it’s best suited for small groups or in-person workshops.
Digital card sorting: Using web-based tools, digital card sorting simulates the drag-and-drop experience online, making it easy to collect and analyze results at scale. Online card sorting tools streamline the process and are now the standard for most research teams.
Hybrid card sorting: This format combines elements of open and closed card sorting, allowing participants to use predefined categories and also create new categories as needed. Hybrid card sorting is particularly effective when you want to validate existing categories while remaining open to new ideas.
Selecting the right card sorting format ensures you gather the most relevant data from your target audience, whether you’re working with physical cards in person or running large-scale studies with digital tools.
Card sorting comes in two varieties that serve different purposes: open and closed. Understanding when to use each is critical because they produce fundamentally different insights.
Open card sorting means participants create their own categories. You give them cards to sort but don’t provide category names. They group related cards together and name each group themselves. This reveals how users naturally conceptualize your content without your assumptions influencing them. Open card sorts are especially useful for generating new categories and exploring user-defined groupings that may not be present in your current structure.
Use open card sorting when building new information architecture from scratch or when you suspect your current categories don’t match user mental models. Dropbox used open card sorting when evolving from file storage into collaborative workspace. They needed fresh understanding of how users conceptualized files, folders, sharing, and collaboration without being constrained by existing “Storage” and “Sharing” categories.
Closed card sorting means you provide predefined categories. Participants sort cards into categories you’ve specified. This tests whether your proposed structure makes sense to users and reveals which items feel ambiguous or misplaced. Closed card sorts are ideal for validating an existing structure, ensuring that your current organization aligns with user expectations.
Use closed card sorting to validate a structure you’ve already designed or to compare alternative organizational schemes. After Dropbox ran open card sorting and designed new categories, they used closed card sorting with different participants to validate that their new structure actually improved findability.
Hybrid card sorting combines both: participants sort into predefined categories but can create new ones if needed. This balances structure with flexibility. Use this when you’re confident about some categories but uncertain about others. Hybrid card sorting is effective for building on existing ideas and refining sub-categories within your information architecture.
Most teams run open card sorting first to understand mental models, then closed card sorting to validate proposed structures. This two-phase approach grounds your final information architecture in user research at both discovery and validation stages. Careful analysis of results is essential to avoid misleading categories that could confuse users and hinder navigation.
Hybrid card sorting blends the strengths of open and closed sorting. In a hybrid session, participants receive a set of predefined categories but are free to create new ones if the existing categories don’t fit their mental model. This method is especially valuable when you want to validate your current information architecture while remaining open to fresh perspectives from users.
Start by listing everything that needs organizing. For navigation design, this means every page, feature, and major function. For content organization, this means every content type or topic. Be comprehensive—missing items produces incomplete results.
Aim for 30-60 cards. Fewer than 30 doesn’t provide enough complexity to reveal interesting patterns. More than 60 overwhelms participants and reduces result quality. If you have more items, group similar ones together or run separate studies for different sections.
Write clear, jargon-free labels. “Customer Database” is clearer than “CRM.” “Send Email Campaign” is clearer than “Campaign Deployment.” Ambiguous or duplicate labels can cause participants to automatically group items based on terminology rather than meaning, which may obscure deeper insights. Ambiguous labels produce ambiguous results.
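As a quick sanity check before launching, you can catch trivially duplicated labels with a few lines of code. A minimal sketch with a hypothetical card list; it only flags duplicates that differ by case or spacing, so human review of ambiguous wording is still needed:

```python
from collections import Counter

# Hypothetical card list; normalization is just lowercasing and stripping,
# so this only catches near-identical duplicates, not ambiguous wording.
cards = ["Customer Database", "customer database ", "Send Email Campaign"]

counts = Counter(label.strip().lower() for label in cards)
duplicates = [label for label, n in counts.items() if n > 1]
print("Possible duplicate labels:", duplicates)
```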
Decide between open, closed, or hybrid based on your goals. Then choose between physical and digital.
Physical card sorting uses actual index cards on a table. Best for in-person sessions where you want to observe participant thinking and ask follow-up questions. Cheap and tactile but limited to in-person participants.
Digital card sorting uses online tools like Optimal Workshop, UserZoom, or Maze. These platforms allow remote participation and broader reach, and they streamline data collection and automate analysis, producing faster and more accurate results than manual methods. Costs $100-$300/month but saves time on synthesis.
Most teams now use digital tools for the scale and analysis benefits unless you specifically need in-person observation.
When recruiting participants for your card sorting study, base your sample size on your research goals. Open card sorting typically needs 15-20 participants to identify clear patterns, while closed card sorting requires 20-30 participants for statistical confidence. If you want more robust, quantitative data, recruit 30 or more users. Always recruit from your actual user base or target audience, not colleagues or family.
Selecting study participants who accurately represent your target audience ensures your findings reflect real user expectations and behaviors. Segment by user type if different audiences use your product differently. Enterprise users might organize features differently than small business users. Run separate card sorts for distinct segments.
For digital card sorting, send participants a link. They complete it independently on their own time. Most take 15-25 minutes. This approach is known as unmoderated card sorting, where participants work without a researcher present, making it efficient and cost-effective for validating information architectures. Alternatively, moderated card sorting involves a researcher guiding or observing the session, allowing for deeper qualitative insights into user reasoning and decision-making, especially useful for complex designs.
For in-person sessions, explain the task: “Organize these cards into groups that make sense to you. Group related items together.” For open sorting, add: “Once grouped, name each category.” For closed sorting: “Sort cards into these existing categories.”
Encourage thinking aloud if in-person. Their reasoning often matters as much as their final groupings. Stay silent and observe rather than guiding or commenting.
Digital tools provide automatic analysis: dendrograms showing clustering patterns, similarity matrices showing which items participants grouped together most often, and category frequency data. These outputs help you interpret results and uncover patterns in how users categorize information.
Look for strong agreement (80%+ of participants grouped certain items together) and disagreement (items placed inconsistently across participants). Strong agreement validates groupings. Strong disagreement reveals ambiguous items needing clearer labeling or alternative placement. Analyzing qualitative data from participants’ choices and comments helps you understand the reasoning behind their decisions.
For open card sorting, analyze category names participants created. Reviewing how participants label their groups is important, as it provides insight into their terminology and mental models. Similar naming across participants validates category concepts. Varied naming suggests the category grouping works but you need better labeling.
Use card sorting results to inform, not dictate, your final structure. Participants reveal how they think but can't design optimal navigation—that's your job as designer.
Create a structure that respects majority patterns from card sorting while incorporating usability best practices: limited top-level categories (5-7 maximum), clear category labels, logical hierarchy depth (2-3 levels maximum).
Test your designed structure with tree testing or usability testing before implementing. Card sorting reveals mental models; validation methods confirm your designed structure actually works.
Similarity matrices show what percentage of participants grouped each pair of items together. Items grouped together by 70%+ of participants clearly belong together. Items never grouped together clearly belong apart. Items grouped together by 30-50% of participants are ambiguous and need careful placement. When interpreting the data, keep the focus on users’ mental models: the percentages matter because they reveal how users naturally organize information.
Dendrograms visualize clustering. Items that branch together low in the tree have strong relationships. Items that only connect high in the tree have weak relationships. This helps identify natural category boundaries.
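To make these two outputs concrete, here is a minimal Python sketch that computes a similarity matrix from raw sorts, then feeds it to hierarchical clustering to draw a dendrogram. The sample data, card labels, and thresholds are all illustrative, and it assumes numpy, scipy, and matplotlib are installed; tools like Optimal Workshop produce these views automatically:

```python
from itertools import combinations

import matplotlib.pyplot as plt
import numpy as np
from scipy.cluster.hierarchy import dendrogram, linkage
from scipy.spatial.distance import squareform

# Hypothetical raw results: each participant's sort is a list of groups.
sorts = {
    "P01": [{"Change Password", "User Permissions"}, {"Send Email Campaign", "Email Templates"}],
    "P02": [{"Change Password", "Send Email Campaign"}, {"User Permissions", "Email Templates"}],
    "P03": [{"Change Password", "User Permissions", "Email Templates"}, {"Send Email Campaign"}],
}
cards = sorted({card for groups in sorts.values() for group in groups for card in group})

def similarity(a: str, b: str) -> float:
    """Share of participants who placed cards a and b in the same group."""
    together = sum(any(a in g and b in g for g in groups) for groups in sorts.values())
    return together / len(sorts)

# Similarity matrix: flag clear pairs and ambiguous ones (illustrative cutoffs).
for a, b in combinations(cards, 2):
    s = similarity(a, b)
    verdict = "belongs together" if s >= 0.7 else "ambiguous" if s >= 0.3 else "belongs apart"
    print(f"{a} / {b}: {s:.0%} ({verdict})")

# Dendrogram: distance = 1 - similarity, so strongly related items branch low.
n = len(cards)
dist = np.zeros((n, n))
for i in range(n):
    for j in range(i + 1, n):
        dist[i, j] = dist[j, i] = 1.0 - similarity(cards[i], cards[j])

dendrogram(linkage(squareform(dist), method="average"), labels=cards)
plt.show()
```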
Category analysis for open sorting shows which category names participants created most frequently. If 60% of participants created a “Settings” category, that’s a strong signal. If category names vary widely for the same grouping, the concept needs clearer articulation. Previous research can guide your approach to analysis by highlighting which patterns or groupings have been validated in similar studies, helping you interpret ambiguous results more effectively.
Netflix analyzed card sorting results when restructuring their content browsing. They found 85% of participants grouped “Continue Watching” and “My List” together, suggesting these belonged in one section. But participants created 12 different names for this section: “My Stuff,” “My Content,” “Keep Watching,” “Personal,” etc. High agreement on grouping but low agreement on naming meant the concept was clear but needed better labeling. They tested multiple label options and landed on “My Netflix.”
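Tallying the category names participants invent is straightforward. A small sketch in the spirit of the example above, with hypothetical labels; light normalization (lowercasing and stripping) means trivially different spellings count together:

```python
from collections import Counter

# Hypothetical category names from an open sort, one per participant.
raw_labels = ["My Stuff", "my stuff", "Keep Watching", "My Content", "Personal", "My Stuff"]

counts = Counter(label.strip().lower() for label in raw_labels)
for label, count in counts.most_common():
    print(f"{label}: {count} of {len(raw_labels)} labels")
```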
Gathering high-quality data throughout this process is essential for making reliable information architecture decisions.
To get the most out of your card sorting study, it’s important to follow a set of best practices that ensure reliable, actionable results:
Recruit the right participants: Aim for at least 15 participants for qualitative insights and 30 or more for quantitative studies. Make sure your participants reflect your actual user base.
Use clear, concise card labels: Avoid jargon, leading questions, or ambiguous terms. Each card should be easy to understand at a glance.
Limit the number of cards: Keep your card set between 30 and 60 items to prevent participant fatigue and maintain data quality.
Choose the right card sorting type: Decide whether open, closed, or hybrid card sorting best fits your research objectives and available resources.
Analyze the data effectively: Use card sorting tools and analysis methods to identify patterns, common groupings, and outliers in how users organize information.
Document your findings: Create a structured report that summarizes key insights and provides actionable recommendations for your information architecture.
By following these best practices, you’ll ensure your card sorting study delivers meaningful results that can directly inform your website or app’s structure.
Using vague or technical labels. If participants don’t understand what a card represents, they can’t sort it meaningfully. “IAM” means nothing to users; “User Permissions” does. Misleading labels confuse users and undermine your information architecture. Test card labels with a few participants before the full study.
Including too many cards. More than 60 cards exhausts participants. They start making random decisions rather than thoughtful choices. Break large card sets into multiple focused studies.
Taking results too literally. Participants reveal mental models but aren’t UX designers. Don’t blindly implement whatever structure card sorting produces. Use results to inform design while applying usability principles.
Not segmenting by user type. Enterprise and SMB users often conceptualize products differently. Sorting them together produces averaged results that satisfy neither segment. Run separate studies or analyze results by segment.
Skipping validation. Card sorting reveals how users think, but doesn’t prove your designed structure works. Always validate with tree testing or usability testing.
Optimal Workshop is the industry standard with sophisticated analysis, dendrograms, similarity matrices, and clean participant experience. Costs $166-$366/month depending on study volume.
UserZoom offers card sorting plus tree testing and other IA research methods in one platform. Better for teams running comprehensive research programs. Pricing starts around $300-$500/month.
Maze includes card sorting alongside usability testing and prototype testing. Good for teams wanting multiple research methods without separate tools. Costs $99-$500/month based on features.
How many participants do you need for card sorting?
15-20 for open card sorting, 20-30 for closed card sorting. More participants provide statistical confidence, but returns diminish beyond these thresholds.
What's the difference between card sorting and tree testing?
Card sorting reveals how users group content. Tree testing validates whether users can find content in your proposed structure. Use card sorting first to design structure, then tree testing to validate it works.
Should you use open or closed card sorting?
Open when exploring how users naturally conceptualize content. Closed when validating a proposed structure. Often run open first, design based on results, then closed to validate.
How long does card sorting take?
15-25 minutes per participant for 30-60 cards. Plan 2-3 weeks total for recruiting, data collection, and analysis.
Can you do card sorting remotely?
Yes, digital card sorting tools enable remote participation. Remote often works better than in-person for most studies because recruiting enough participants is far easier.
Once your card sorting study is complete, it’s time to turn insights into action. The next steps are crucial for translating user feedback into a more effective information architecture for your website or app.
Analyze the data: Review the results to identify clear patterns, common groupings, and any areas of disagreement among participants.
Reflect users’ mental models: Use the findings to create a new information architecture that aligns with how your users think and organize information.
Validate and refine existing categories: Adjust your current structure based on what you’ve learned, ensuring that existing categories make sense to your users.
Develop a user-centered menu structure: Design navigation that guides users intuitively, reducing friction and improving the overall experience.
Conduct further research: Consider follow-up studies, such as tree testing, to validate your new information architecture and ensure users can find what they need.
Implement and test: Roll out the updated structure and test it with real users to confirm it meets their needs and expectations.
By following these steps after your card sorting study, you’ll ensure your website or app’s menu structure is not only informed by users’ mental models but also validated through ongoing research and iteration.
Card sorting externalizes user mental models, revealing how people naturally organize information. This grounds navigation design in user thinking rather than internal assumptions about structure.
Open card sorting explores mental models without constraints. Closed card sorting validates proposed structures. Most projects benefit from running both: open to discover, closed to validate.
Recruit 15-20 participants for meaningful patterns. More participants increase confidence but analysis becomes more time-consuming. Focus on recruiting representative users rather than maximizing quantity.
Card sorting results inform design but don't dictate it. Use insights about groupings and mental models while applying UX principles to create final navigation structure. Always validate designed structure with tree testing or usability testing.
Digital tools make card sorting accessible and analysis manageable. Optimal Workshop, Maze, and UserZoom provide participant recruitment, study management, and automatic analysis. The $100-$300/month cost pays for itself in time saved versus manual analysis.
Need help planning your first card sort? Download our free Card Sorting Planning Template with card creation worksheets, participant recruitment scripts, and analysis frameworks.
Want expert guidance on information architecture research? Book a free 30-minute consultation with our UX research team to discuss your navigation challenges.