Civic tech user research methods: how to test public-interest technology
Civic tech research prioritizes public inclusivity over profit. Learn the methods, recruitment strategies, and accessibility practices that work.
Civic technology is built to serve the public. Voting platforms, transit apps, benefits portals, emergency alert systems, open data dashboards. These tools exist because people need them, not because a market opportunity appeared on a slide deck.
That changes everything about how you do user research. Commercial products optimize for conversion and retention. Civic tech must optimize for access, trust, and equity across populations that include people who have been historically excluded from the design process entirely.
The teams that build civic tech well follow a principle that separates them from everyone else: build with communities, not for them. This guide covers the research methods, recruitment strategies, and accessibility practices that make that principle actionable.
Key takeaways
- Civic tech user research prioritizes public inclusivity and equity over market fit. Your participants must represent the full diversity of the population your product serves
- The “build with, not for” approach means community members participate in research design, not just as test subjects
- Standard usability testing, interviews, and surveys all apply, but recruitment and accessibility requirements are significantly higher than commercial research
- Accessibility audits using WCAG guidelines and tools like WAVE are not optional. They are a core research method in civic tech
- Compensate participants fairly. Many civic tech users are from low-income communities. Asking for free labor undermines the equity your product claims to support
- Integrate user research findings into public procurement processes (RFPs, vendor evaluations) to ensure usability requirements survive the contracting process
What is civic tech user research?
Civic tech user research evaluates the usability, accessibility, and effectiveness of digital tools designed for public use by governments, nonprofits, or civic organizations. It covers everything from a city’s 311 reporting app to a state benefits enrollment portal to a national voter registration platform.
What makes it distinct from standard user research is the audience. Commercial products serve customers who chose to use them. Civic tech serves everyone, including people who did not choose to interact with the system but must.
That means civic tech research must answer questions that commercial research rarely asks:
- Can a person with no smartphone complete this process?
- Can a non-English speaker navigate this form without help?
- Can someone with cognitive disabilities understand these instructions?
- Does this tool work on a slow connection in a rural area?
- Do people from historically marginalized communities trust this system enough to use it?
If your research does not answer these questions, it is not civic tech research. It is commercial UX research applied to a government product, and it will miss the users who need the service most.
How is civic tech research different from commercial UX research?
The methods overlap. The priorities do not.
| Dimension | Commercial UX research | Civic tech user research |
|---|---|---|
| Goal | Increase adoption, retention, revenue | Increase access, equity, public trust |
| Target users | Paying customers, defined market segments | Entire public, including marginalized populations |
| Recruitment | Panels, customer lists, social media ads | Community organizations, libraries, social services, in-person outreach |
| Accessibility | Nice to have, compliance-driven | Core requirement from day one |
| Compensation | Gift cards, account credits | Cash or equivalent. Many participants have low income |
| Success metric | Task completion, NPS, conversion rate | Equitable access, comprehension across literacy levels, trust |
| Design philosophy | Build for users | Build with communities |
| Language | Usually one primary language | Multilingual by default |
The biggest difference is who gets included. Commercial research can define a target audience and exclude everyone else. Civic tech cannot. A benefits portal that works perfectly for college-educated English speakers but fails for elderly non-English speakers with low digital literacy has not succeeded. It has failed the people who need it most.
What methods work best for civic tech user research?
The core user research methods apply to civic tech. The difference is how you adapt them for public-interest contexts.
Usability testing (the foundation)
Usability testing is the single most valuable method for civic tech. Run scenario-based tasks where participants attempt realistic goals: “You need to apply for rental assistance. Start from this homepage and complete the application.”
Civic tech adaptations:
- Test with participants across the full literacy spectrum, not just tech-savvy users
- Test on the devices your users actually have (older Android phones, not the latest iPhone)
- Test on slow connections that match rural or low-income broadband speeds
- Run sessions in multiple languages if your service has a multilingual audience
- Include participants who use assistive technology
Community interviews and contextual inquiry
User interviews in civic tech go beyond asking about feature preferences. You are trying to understand how people interact with government systems in their daily lives, what barriers they face, and what trust issues prevent them from using digital tools.
Contextual inquiry takes this further by observing people in their real environment. Visit a public library where people access government services. Sit in a benefits office waiting room. Watch how people actually navigate these systems with whatever devices and connectivity they have.
Surveys (with accessibility built in)
Survey research works for measuring satisfaction and identifying patterns at scale. For civic tech:
- Offer surveys in every language your service supports
- Provide phone-based and paper alternatives for people without internet access
- Use plain language (6th-8th grade reading level)
- Keep surveys under 5 minutes. Longer surveys exclude people with limited time or cognitive fatigue
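Reading level is measurable, not a judgment call. As a rough sketch of the idea, here is the Flesch-Kincaid grade formula with a crude syllable heuristic; dedicated readability tools do this more accurately, and the sample sentence is hypothetical:

```python
import re

def syllables(word: str) -> int:
    """Rough syllable count: runs of vowels, minus a common silent 'e'."""
    word = word.lower()
    count = len(re.findall(r"[aeiouy]+", word))
    if word.endswith("e") and count > 1:
        count -= 1
    return max(count, 1)

def fk_grade(text: str) -> float:
    """Approximate Flesch-Kincaid grade level of a passage."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    if not sentences or not words:
        return 0.0
    syl = sum(syllables(w) for w in words)
    return 0.39 * (len(words) / len(sentences)) + 11.8 * (syl / len(words)) - 15.59

# Hypothetical benefits-portal copy: short sentences, common words
draft = "You may qualify for help with rent. Answer a few short questions to find out."
print(round(fk_grade(draft), 1))
```

Run every user-facing string through a check like this before testing. If the grade comes back above 8, rewrite the copy before you spend a session watching participants struggle with it.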
Focus groups and community workshops
Bring together 6-10 community members to discuss their experiences with a service. In civic tech, these sessions double as community engagement. Participants are not just data sources. They are stakeholders whose input shapes public infrastructure.
What works: Partner with community-based organizations that already have trust with your target population. Hold sessions at community centers, churches, or libraries rather than government offices (which some populations distrust or find intimidating).
Secret shopper simulations
Have team members attempt to complete the full user journey as if they were a member of the public. Apply for the benefit. Call the help line. Try to do it on a phone. Try to do it in Spanish. This method catches obvious failures before real users encounter them.
Accessibility audits
Run automated scans with tools like WAVE, axe, or Lighthouse for WCAG compliance. Follow up with manual testing using screen readers (JAWS, NVDA, VoiceOver) and keyboard-only navigation. Automated tools catch about 30-40% of accessibility issues. The rest require human testing.
How to recruit participants for civic tech research
Civic tech recruitment is harder than commercial recruitment because the people you most need to include are the hardest to reach.
Where to find participants
- Community-based organizations (CBOs) that serve your target population. These groups have existing trust and access
- Public libraries. Many people without home internet use library computers for government services
- Social services offices. People in waiting rooms are actively interacting with government systems
- Houses of worship that serve immigrant, elderly, or low-income communities
- Schools and parent-teacher organizations for services that affect families
- The service itself. Add a recruitment banner or intercept survey to your live product
For more strategies on recruiting hard-to-reach participants, see our dedicated guide. General participant recruitment methods also apply with the adaptations below.
Recruitment rules for civic tech
Compensate fairly. Pay participants in cash or cash equivalents (prepaid Visa cards). Do not offer gift cards to stores that may not exist in their neighborhood. $50-$75 for a 60-minute session is a minimum. For hard-to-reach populations, go higher.
Remove access barriers. Offer sessions at times and locations that work for shift workers, caregivers, and people without reliable transportation. Provide childcare or allow children in sessions. Offer remote and in-person options.
Translate everything. Screening surveys, consent forms, task instructions, and compensation receipts should be available in every language your service supports.
Do not require technology. If a participant does not have a computer, provide one. If they do not have internet, offer an in-person session. If they cannot travel, go to them.
How to run accessibility and equity audits in civic tech
Accessibility is not a phase. It is a lens applied to every stage of research and design.
Accessibility audit process
- Automated scan. Run WAVE or axe on every page of the service. Document WCAG 2.1 Level AA failures
- Manual keyboard testing. Navigate the entire service using only a keyboard. Every interactive element must be reachable and operable
- Screen reader testing. Complete core tasks using JAWS (Windows), NVDA (Windows), and VoiceOver (macOS/iOS). Document where the screen reader experience breaks
- Cognitive accessibility review. Evaluate reading level, clarity of instructions, error messages, and recovery paths. Use plain language guidelines (target 6th-8th grade reading level)
- Mobile and low-bandwidth testing. Test on older devices with throttled connections
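To make step 1 concrete: full scanners like WAVE and axe run hundreds of checks, but the flavor of an automated pass can be sketched with Python's standard library alone. This toy scanner flags just two common WCAG failures, images without `alt` text and inputs without an associated label; the sample markup is hypothetical:

```python
from html.parser import HTMLParser

class A11yScan(HTMLParser):
    """Flags images missing alt text and inputs missing a label or aria-label."""
    def __init__(self):
        super().__init__()
        self.issues = []
        self.labeled_ids = set()
        self.inputs = []  # (id, has_aria_label)

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "img" and "alt" not in a:
            self.issues.append("img missing alt attribute")
        elif tag == "label" and a.get("for"):
            self.labeled_ids.add(a["for"])
        elif tag == "input" and a.get("type") not in ("hidden", "submit"):
            self.inputs.append((a.get("id"), "aria-label" in a))

    def report(self):
        # Resolve label associations only after the whole page is parsed,
        # since a <label> may appear before or after its input
        for input_id, has_aria in self.inputs:
            if not has_aria and input_id not in self.labeled_ids:
                self.issues.append(f"input {input_id!r} has no label")
        return self.issues

# Hypothetical benefits-form fragment with two failures
page = """
<form>
  <img src="seal.png">
  <label for="ssn">Social Security number</label>
  <input id="ssn" type="text">
  <input id="dob" type="text">
</form>
"""
scanner = A11yScan()
scanner.feed(page)
issues = scanner.report()
for issue in issues:
    print(issue)
```

This is an illustration of the category of check, not a replacement for a real scanner: axe-core alone covers far more rules, and as noted above, even full automated suites catch only a fraction of real accessibility barriers.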
Equity audit process
Beyond accessibility, an equity audit examines whether the service creates disparate outcomes for different populations:
- Do completion rates differ by race, income, or geography?
- Does the service require documentation that certain populations are less likely to have?
- Are error messages and help resources available in all relevant languages?
- Does the service assume internet access, a permanent address, or a bank account?
Document findings as specific, testable issues with recommended fixes, not vague observations about “improving equity.”
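For example, the first question above becomes a specific, testable finding once completion rates are computed per group rather than in aggregate. A minimal sketch with hypothetical session data; the 15-point gap threshold is an assumption for illustration, not a standard:

```python
from collections import defaultdict

def completion_by_group(sessions, gap_threshold=0.15):
    """Compute task completion rate per group and flag any group that falls
    more than gap_threshold below the overall rate."""
    totals, completed = defaultdict(int), defaultdict(int)
    for group, done in sessions:
        totals[group] += 1
        completed[group] += done
    overall = sum(completed.values()) / sum(totals.values())
    rates = {g: completed[g] / totals[g] for g in totals}
    flagged = [g for g, r in rates.items() if overall - r > gap_threshold]
    return overall, rates, flagged

# Hypothetical usability sessions: (language group, task completed?)
sessions = [("en", 1)] * 18 + [("en", 0)] * 2 + [("es", 1)] * 8 + [("es", 0)] * 12
overall, rates, flagged = completion_by_group(sessions)
print(f"overall {overall:.0%}, by group {rates}, flagged {flagged}")
```

Here the English-speaking group completes the task 90% of the time while Spanish speakers complete it 40% of the time, so the 65% overall rate hides the failure, which is exactly the pattern an equity audit exists to surface.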
How to integrate user research into public procurement
Research findings that do not influence procurement decisions get ignored when the contract is signed. Civic tech researchers must embed usability and accessibility requirements into the procurement process itself.
Where research fits in procurement
- RFP requirements. Include specific usability benchmarks (task completion rates, accessibility conformance levels) as evaluation criteria for vendors
- Vendor evaluation. Require vendors to demonstrate user research capabilities and show evidence of testing with diverse populations
- Contract milestones. Tie payment milestones to usability testing results, not just feature delivery
- Acceptance criteria. Define accessibility and usability standards that must be met before the government accepts the final product
This ensures that user research is not something that happens after the product is built. It becomes a contractual requirement throughout development.
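One way to keep those standards enforceable is to express acceptance criteria so they can be checked mechanically at each contract milestone. A sketch of the idea; the metric names and thresholds here are illustrative, not drawn from any real RFP:

```python
def meets_acceptance_criteria(results, criteria):
    """Return (passed, failures) comparing measured results against
    contractual minimums. Both dicts map metric name -> value."""
    failures = [
        f"{metric}: {results.get(metric, 0)} < required {minimum}"
        for metric, minimum in criteria.items()
        if results.get(metric, 0) < minimum
    ]
    return not failures, failures

# Hypothetical benchmarks a vendor must hit before final acceptance
criteria = {
    "task_completion_overall": 0.85,
    "task_completion_lowest_segment": 0.75,  # equity floor, not just an average
    "wcag_aa_pass_rate": 1.0,
}
results = {
    "task_completion_overall": 0.91,
    "task_completion_lowest_segment": 0.62,
    "wcag_aa_pass_rate": 1.0,
}
passed, failures = meets_acceptance_criteria(results, criteria)
print(passed, failures)
```

Note the second criterion: tying payment to the lowest-performing segment, not just the average, is what keeps the equity requirements from the previous section alive through delivery.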
Common mistakes in civic tech user research
Testing only with digitally literate users. If your participant pool is all smartphone-owning, English-speaking, college-educated adults, you are not testing civic tech. You are testing commercial tech that happens to be funded by the government.
Treating accessibility as a final checklist. Running a WAVE scan the week before launch catches surface-level issues. It does not catch the screen reader experience that makes a form unusable, or the reading level that excludes half your audience.
Not compensating participants. Asking low-income community members to donate their time for research that benefits a government agency is extractive. Pay people fairly.
Holding sessions only in government offices. Many people distrust government buildings. Libraries, community centers, and partner organization spaces are more neutral and accessible.
Collecting research and not sharing it back. Communities that participate in research deserve to know what changed because of their input. Close the loop. Publish findings in accessible formats. Return to the community and show what their participation produced.
Frequently asked questions
What is the difference between civic tech and govtech?
Govtech refers to technology used internally by government agencies (case management systems, employee tools). Civic tech refers to technology that serves the public directly (voting platforms, benefits portals, transit apps). Govtech research centers on staff efficiency and workflow fit; civic tech research prioritizes public accessibility and equity.
How many participants do you need for civic tech usability testing?
Five to eight participants per demographic segment, not total. If your service serves English and Spanish speakers, test with 5-8 per language. If you are testing with assistive technology users, include 3-5 per round. Multiple small rounds are better than one large study.
Can you use commercial research platforms for civic tech research?
Yes, with limitations. Commercial usability testing and remote research platforms work well for digitally literate participants. For populations with limited internet access or low digital literacy, in-person or phone-based research is more effective.
How do you measure success in civic tech user research?
Measure equitable access, not just average performance. Track task completion rates by demographic group, not just overall. A 90% overall completion rate that hides a 40% rate among non-English speakers is a failure. Also measure comprehension (do users understand what happened?), trust (do users believe the system worked correctly?), and time-on-task across population segments.