
Best card sorting tools for UX research in 2026: 10 platforms ranked

Compare the 10 best card sorting tools for UX researchers in 2026: Optimal Workshop, Maze, Lyssna, UXtweak, UserZoom, and more, ranked by analysis depth, recruitment, and price.

CleverX Team

The best card sorting tools for UX research in 2026: Optimal Workshop’s OptimalSort is the category-leading specialist, with the deepest analysis (similarity matrices, agreement scores, dendrograms, and standardization scores); Maze combines card sorting, tree testing, and unmoderated usability testing in one platform; Lyssna offers accessible self-serve card sorting at startup-friendly pricing; and UXtweak provides a full-stack UX research suite that includes card sorting alongside other methods. UserTesting, UserZoom, Userlytics, Proven By Users, CardZoo, and Miro/Conceptboard cover specialist niches from enterprise integration to DIY collaborative card sorts. For most UXR teams, the right shortlist is one specialist (Optimal Workshop or Lyssna) plus one suite (Maze or UXtweak) for combined IA research workflows.

This guide ranks 10 card sorting tools on the criteria that actually matter for UX researchers running open, closed, and hybrid card sorts: analysis depth (similarity matrices, dendrograms, standardization scores), participant recruitment options, multi-method integration (tree testing pairing), pricing tiers for solo to enterprise teams, and real-world UX of running a study end-to-end.

TL;DR: best card sorting tools for UX research in 2026

  • Optimal Workshop (OptimalSort): deepest analysis, category leader, integrated with Treejack for tree testing.
  • Maze: best combined card sorting + tree testing + unmoderated usability in one platform.
  • Lyssna: accessible self-serve at startup-friendly pricing, formerly UsabilityHub.
  • UXtweak: full-stack UX research suite with strong card sorting + tree testing.
  • UserTesting: enterprise UX research platform with card sorting included.
  • UserZoom (UserTesting): enterprise integrations + card sorting at scale.
  • Userlytics: moderated + unmoderated platform with card sorting capability.
  • Proven By Users: focused card sorting + tree testing tool.
  • CardZoo: lightweight, focused card sorting tool.
  • Miro / Conceptboard: DIY collaborative card sorting for in-person workshops.

How to evaluate card sorting tools

Six criteria matter for UX research card sorting:

  1. Analysis depth. Similarity matrices, agreement scores, dendrograms, standardization scores. Depth varies 5-10× across tools.
  2. Sort types supported. Open (participants name categories), closed (researchers define categories), hybrid. Most tools support all three; some don’t.
  3. Tree testing integration. Card sorting and tree testing are usually paired in IA research. Tools that do both well save substantial workflow time.
  4. Participant recruitment. Built-in panel access vs BYOA. Solo researchers benefit from built-in panels; established UXR teams already have recruitment.
  5. Pricing tiers. $50-$200/mo for solo, $300-$1,000/mo for teams, custom for enterprise. Match tier to study volume.
  6. End-to-end UX of running a study. Setup, distribution, analysis, and reporting vary substantially across tools.

Most UXR teams over-evaluate analysis depth and under-evaluate participant recruitment. Even the best analysis is useless without participants.
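To make "analysis depth" concrete: the core output every serious card sorting tool computes is a similarity matrix, i.e. the percentage of participants who placed each pair of cards in the same category. Here is a minimal sketch in Python, assuming (hypothetically) that raw results export as one card-to-category mapping per participant; the card names and data below are invented for illustration:

```python
from itertools import combinations

def similarity_matrix(results, cards):
    """Pairwise % of participants who placed two cards in the same category.

    results: list of dicts mapping card -> category label, one per participant.
    Works for open sorts too: category names need only match *within* a
    participant, not across participants.
    """
    n = len(results)
    sim = {c: {d: 0 for d in cards} for c in cards}
    for a, b in combinations(cards, 2):
        together = sum(1 for r in results if r.get(a) == r.get(b))
        sim[a][b] = sim[b][a] = round(100 * together / n)
    return sim

# Three participants sorting four cards (hypothetical data)
results = [
    {"Pricing": "Buy", "Checkout": "Buy", "Returns": "Help", "FAQ": "Help"},
    {"Pricing": "Shop", "Checkout": "Shop", "Returns": "Support", "FAQ": "Support"},
    {"Pricing": "Buy", "Checkout": "Help", "Returns": "Help", "FAQ": "Help"},
]
sim = similarity_matrix(results, ["Pricing", "Checkout", "Returns", "FAQ"])
print(sim["Pricing"]["Checkout"])  # 67: 2 of 3 participants grouped them
print(sim["Returns"]["FAQ"])       # 100
```

Dendrograms and agreement scores are derived from this same matrix (e.g. by hierarchical clustering), which is why tools that expose the matrix directly are easier to audit than tools that only show a clustered visualization.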

Quick comparison: 10 best card sorting tools in 2026

| Tool | Analysis depth | Tree testing | Recruitment | Pricing tier |
| --- | --- | --- | --- | --- |
| Optimal Workshop | Deepest | Yes (Treejack) | Limited built-in | $200-$1,500/mo |
| Maze | Strong | Yes | Light panel | $99-$500/mo |
| Lyssna | Mid | Yes | Built-in panel | $89-$300/mo |
| UXtweak | Strong | Yes | Built-in | $90-$500/mo |
| UserTesting | Mid | Limited | Strong (Contributor Network) | Enterprise |
| UserZoom | Mid-strong | Yes | Built-in | Enterprise |
| Userlytics | Mid | Limited | Built-in | $300-$1,000/mo |
| Proven By Users | Strong | Yes | Limited | $50-$200/mo |
| CardZoo | Light | No | Limited | $30-$100/mo |
| Miro / Conceptboard | DIY | DIY | None | $0-$30/user/mo |

The biggest variation is between specialist tools (Optimal Workshop, Proven By Users) with deeper analysis and suites (Maze, UXtweak) with broader feature coverage at the cost of analytical depth.

1. Optimal Workshop (OptimalSort)

The category leader. OptimalSort has been the standard for serious card sorting since the early 2010s. Paired with Treejack (tree testing) and Reframer (qualitative research) for full IA research workflows.

Best for. Established UXR teams, IA research at scale, complex card sorts requiring deep analysis.

Strengths. Deepest analysis (similarity matrices, agreement scores, dendrograms, standardization scores). Mature tooling. Strong reporting.

Limits. Higher price tier. Limited built-in panel (BYOA-leaning).

Pricing. Plans typically $200-$1,500/mo depending on study volume.

For Optimal Workshop alternatives, see the comparison guide.

2. Maze

Maze combines card sorting with tree testing, unmoderated usability, prototypes, and surveys. Single-platform appeal for design-led teams running multiple research methods.

Best for. Design-led teams already on Maze, mid-budget UXR teams, combined IA + usability workflows.

Strengths. Single platform. Good UX for setup. Light panel access.

Limits. Analysis depth is mid-tier compared to Optimal Workshop.

Pricing. Starts ~$99/mo for solo; team plans scale up.

3. Lyssna (formerly UsabilityHub)

Lyssna offers accessible self-serve card sorting alongside first-click testing, design surveys, and tree testing. Built-in panel access makes for fast recruitment.

Best for. Solo UXR, startup teams, fast turnaround studies on common audiences.

Strengths. Built-in panel. Self-serve UX. Startup-friendly pricing.

Limits. Analysis depth is mid-tier. Less suitable for complex enterprise IA research.

Pricing. Starts ~$89/mo.

For Lyssna alternatives, see the comparison.

4. UXtweak

UXtweak is a full-stack UX research suite with card sorting, tree testing, usability, surveys, and analytics. Strong specifically for IA research workflows.

Best for. Mid-market UXR teams, multi-method research programs, IA-focused research.

Strengths. Full-stack suite. Strong card sorting + tree testing combo. Built-in panel.

Limits. Newer brand than Optimal Workshop; less ecosystem maturity.

Pricing. Starts ~$90/mo for solo.

5. UserTesting

UserTesting includes card sorting as part of its broader unmoderated and moderated UX research platform. Best for enterprise teams already on UserTesting.

Best for. Enterprise UX research programs, teams already on UserTesting Contributor Network.

Strengths. Massive consumer panel (Contributor Network). Integrated with broader research. Enterprise integrations.

Limits. Card sorting analysis is not best-in-class. Enterprise pricing.

Pricing. Enterprise plans, typically annual.

6. UserZoom (now part of UserTesting)

UserZoom offers enterprise UX research with card sorting and tree testing. It is now under the UserTesting umbrella, and integrations are being unified.

Best for. Enterprise teams, complex multi-stakeholder IA research, integration-heavy environments.

Strengths. Enterprise integrations. Strong tree testing alongside card sorting. Mature panel.

Limits. Enterprise pricing. Brand consolidation under UserTesting is still in flux.

Pricing. Enterprise plans.

7. Userlytics

Userlytics is a moderated + unmoderated UX research platform. Card sorting is a layered capability rather than a core focus.

Best for. Teams already on Userlytics for usability testing wanting card sorting in the same platform.

Strengths. Single platform for usability + card sorting. Built-in panel.

Limits. Less analytical depth than specialists. Card sorting is a secondary feature.

Pricing. $300-$1,000/mo team plans.

8. Proven By Users

Proven By Users is a focused card sorting + tree testing tool. Strong analytical depth at lower price tier than Optimal Workshop.

Best for. UXR teams wanting strong analysis at mid-budget. IA-specific research without need for full UX suite.

Strengths. Strong analysis. Focused tool with no feature bloat. Mid-budget.

Limits. Limited built-in recruitment. Smaller user community.

Pricing. $50-$200/mo.

9. CardZoo

CardZoo is a lightweight, focused card sorting tool. Best for occasional, simple studies.

Best for. Solo UXR with occasional card sort needs, simple studies, lowest budget.

Strengths. Cheapest paid option. Focused. Easy setup.

Limits. Light analysis. No tree testing. Limited recruitment.

Pricing. $30-$100/mo.

10. Miro / Conceptboard

For in-person workshops or distributed team collaboration, Miro and Conceptboard support DIY card sorting via collaborative whiteboards. Neither is a research tool per se, but both are used for collaborative IA work.

Best for. In-person workshops, design team collaboration, exploratory pre-research IA work.

Strengths. Cheapest. Collaborative. Visual.

Limits. No analysis. No formal study workflow. Manual everything.

Pricing. $0-$30/user/mo.

When to use which: the picker

| Use case | First choice | Second choice |
| --- | --- | --- |
| Enterprise IA research with deep analysis | Optimal Workshop | UserZoom |
| Mid-market multi-method UXR | Maze | UXtweak |
| Solo / startup UXR | Lyssna | Maze |
| Combined card sort + tree test workflow | Optimal Workshop or UXtweak | Maze |
| Already on UserTesting | UserTesting native | Optimal Workshop |
| Tight budget, basic needs | CardZoo | Lyssna |
| In-person workshop / collaborative | Miro / Conceptboard | DIY pen-and-paper |
| Card sorting analysis purist | Optimal Workshop | Proven By Users |

For most UXR teams, the realistic stack is one specialist (Optimal Workshop or Proven By Users) plus one suite (Maze or UXtweak) for combined IA + adjacent methods.

Open vs closed vs hybrid card sorts

A reminder on sort types since tool capabilities vary:

  • Open card sort. Participants name categories. Good for early IA exploration. Requires open-ended analysis (similarity matrices).
  • Closed card sort. Researchers define categories. Participants assign cards. Good for validating proposed IA. Faster analysis.
  • Hybrid card sort. Participants can use given categories or create their own. Best of both, more analysis complexity.

All major tools (Optimal Workshop, Maze, Lyssna, UXtweak, etc.) support all three. Some lighter tools (CardZoo) support only closed.
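Closed-sort analysis is simpler than open-sort analysis: since the categories are fixed, the key output is a placement matrix showing how often each card landed in each predefined category. A minimal sketch, with hypothetical card and category names:

```python
from collections import Counter

def placement_matrix(results, categories):
    """For a closed sort: per card, the % of participants placing it in each
    predefined category. High agreement on one category validates the
    proposed IA; an even spread flags an ambiguous card.

    results: list of dicts mapping card -> chosen category, one per participant.
    """
    n = len(results)
    counts = {}
    for r in results:
        for card, cat in r.items():
            counts.setdefault(card, Counter())[cat] += 1
    return {
        card: {cat: round(100 * c[cat] / n) for cat in categories}
        for card, c in counts.items()
    }

# Four participants, two predefined categories (hypothetical data)
results = [
    {"Returns": "Support", "Pricing": "Shop"},
    {"Returns": "Support", "Pricing": "Shop"},
    {"Returns": "Support", "Pricing": "Support"},
    {"Returns": "Shop", "Pricing": "Shop"},
]
pm = placement_matrix(results, ["Shop", "Support"])
print(pm["Returns"])  # {'Shop': 25, 'Support': 75}
```

In practice a 75% placement like "Returns" above reads as a validated category, while a 50/50 split would prompt relabeling the card or the category before tree testing.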

For card sorting methodology, see the methodology guide.

Frequently asked questions

What’s the difference between open, closed, and hybrid card sorting?

Open: participants name categories (early IA exploration). Closed: researchers define categories, participants assign (validation). Hybrid: participants use given categories or create their own (best of both).

How many participants do I need for card sorting?

15-30 participants is typical for open card sorts to surface stable patterns; 5-10 suffices for closed card sorts (validation). Statistical significance for IA decisions usually needs 30+, but practitioners often work with 15-20.

Should I pair card sorting with tree testing?

Almost always. Card sorting reveals how users group items; tree testing validates that the resulting IA actually helps users find items. Pair them in the same study or sequence (card sort → IA design → tree test).

Which card sorting tool has the best analysis?

Optimal Workshop’s OptimalSort has the deepest analysis (similarity matrices, agreement scores, dendrograms, standardization scores). Proven By Users is close at lower cost.

Can I use Miro or Conceptboard for card sorting?

For in-person workshops or collaborative IA work, yes. For research at scale (online participants, statistical analysis), no. Use a dedicated card sorting tool for research.

What’s the cheapest card sorting tool that works for real research?

CardZoo at $30-$100/mo for occasional simple studies. Lyssna at $89/mo for solo UXR with built-in panel. Below this, you’re typically in DIY territory (Miro / pen-and-paper).

Do I need a built-in panel for card sorting research?

Solo UXR and startup teams benefit from built-in panels (Lyssna, UXtweak). Established UXR teams typically have recruitment relationships and can BYOA (Optimal Workshop, Proven By Users).

How do I evaluate card sorting tools for my team?

Pilot 2-3 tools on a real card sort with your audience. Evaluate analysis depth, end-to-end UX, recruitment fit, and pricing. Don’t trust feature lists alone; running a real study reveals what matters.

The takeaway

Card sorting tools split into specialists (Optimal Workshop, Proven By Users) with the deepest analysis, suites (Maze, UXtweak) with broader feature coverage, and lightweight tools (Lyssna, CardZoo) for solo or simple studies. Most UXR teams need one specialist + one suite; single-tool stacks leave gaps.

The realistic stack: Optimal Workshop or Proven By Users for serious IA research; Maze or UXtweak for combined IA + usability + surveys; Lyssna for solo/startup teams on a budget; Miro/Conceptboard for in-person workshops. Pair card sorting with tree testing for end-to-end IA research.

The single biggest card sorting mistake is choosing tools by feature checklists. Pilot with your real audience and evaluate end-to-end study workflow. Tools that demo well in marketing material can be painful in actual fielding; tools with quieter marketing can be far more practical.