User research in Government: A complete guide to UX research for public sector digital services
Learn how government teams conduct user research for digital services. Covers methods, Section 508 compliance, participant recruitment, the Paperwork Reduction Act, and common challenges.
Most government digital services are built for millions of people who have no choice but to use them. Filing taxes, applying for benefits, renewing a license. There is no competitor. There is no alternative.
That makes user research in government more important than in the private sector, not less. When a commercial app has bad UX, users switch to a competitor. When a government service has bad UX, people miss benefits they qualify for, waste hours on hold, or give up entirely.
Yet government teams face constraints that private-sector researchers rarely encounter. Strict procurement rules, accessibility mandates, the Paperwork Reduction Act, and recruiting participants who are often hard to reach. This guide covers how government teams do user research effectively within those constraints.
Key takeaways
- Government user research follows the same core methods as private sector (interviews, usability testing, surveys) but operates under additional legal and compliance requirements
- Section 508 and WCAG compliance are not optional. Every research study should include participants who use assistive technology
- The Paperwork Reduction Act (PRA) requires OMB approval before collecting standardized information from 10 or more members of the public, but usability testing and direct observation are exempt
- Recruiting government employees and citizens for research requires different strategies than recruiting commercial users. Security clearances, union rules, and access restrictions add layers of complexity
- Embed researchers in agile delivery teams from discovery through launch. Research that happens only at the beginning or end of a project misses the iterative improvements that make services usable
How do government teams do user research?
Government teams conduct user research using the same foundational methods as any product team: interviews, usability testing, surveys, field studies, and analytics. The difference is in how those methods are applied within the public sector’s regulatory and organizational constraints.
The most effective government research programs embed researchers directly into agile delivery teams. The UK’s Government Digital Service (GDS) pioneered this model, placing user researchers alongside designers, developers, and product managers throughout the service lifecycle.
Research happens in phases:
- Discovery. Map user needs, behaviors, and pain points through interviews and observation before building anything
- Alpha. Test early prototypes with real users to validate assumptions and identify problems
- Beta. Run usability testing on working services with larger, more diverse participant groups
- Live. Monitor analytics, collect feedback, and run periodic usability studies to catch emerging issues
Each phase produces findings that directly inform the next sprint’s priorities. Research that gets filed in a report and forgotten is research that failed.
Exposure hours build team empathy
One practice that distinguishes strong government research teams is “exposure hours.” Every team member, including developers and project managers, observes at least 2 hours of user research sessions every 6 weeks. This builds empathy across the team and reduces the gap between “what the team assumes” and “what users actually do.”
What makes government user research different from private sector?
Five factors make government research fundamentally different from commercial product research.
1. Legal compliance requirements. Section 508, the Paperwork Reduction Act, FISMA security requirements, and agency-specific policies all constrain how research can be planned and conducted.
2. No competitive pressure. Government services are often monopolies. Users cannot switch to a competitor, which means poor UX does not show up as churn. It shows up as call center volume, error rates, and public frustration that is harder to measure.
3. Extreme user diversity. Government services must work for everyone. That includes people with disabilities, limited English proficiency, low digital literacy, no internet access, and cognitive impairments. The range of user needs is far wider than most commercial products.
4. Bureaucratic procurement and approval. Getting budget approval, tool procurement, and participant access often takes months. Researchers need to plan further ahead and work within rigid procurement cycles.
5. Organizational silos. Government agencies operate in silos. The team building a service may not control the call center, policy team, or IT infrastructure that the service depends on. Research findings often implicate systems and teams outside the researcher’s authority.
What methods work best for public sector UX research?
The best government research programs use a mix of qualitative and quantitative methods matched to the project phase.
| Method | Best for | Government considerations |
|---|---|---|
| User interviews | Discovery, understanding needs and mental models | May require PRA clearance if using standardized questions with 10+ people |
| Usability testing | Alpha, beta, and live phases | PRA exempt. Best method for iterative improvement |
| Field studies / contextual inquiry | Understanding real-world usage context | Security and access restrictions may limit observation in government offices |
| Surveys | Measuring satisfaction, benchmarking at scale | Requires PRA clearance. Plan 6-12 months for OMB approval |
| Analytics review | Live phase monitoring, identifying pain points | Often limited by legacy systems and data silos |
| Card sorting | Information architecture for public-facing sites | Works well for reorganizing complex government content |
| Accessibility testing | Every phase | Required, not optional. Include assistive technology users |
Usability testing is the workhorse. It is PRA exempt, produces actionable findings quickly, and works in every project phase. Most government research teams run usability sessions every 1-2 sprints.
Qualitative methods are essential in discovery because government users often have needs that quantitative data alone cannot reveal. A survey might show that 30% of applicants abandon a benefits form at step 4. Only an interview or usability test reveals that step 4 uses legal language that applicants do not understand.
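The quantitative half of that pairing is a simple funnel analysis over step-completion counts. The sketch below shows the shape of that calculation; the step names and counts are hypothetical placeholders, not real agency data.

```python
# Hypothetical step-completion counts from a multi-step benefits form.
funnel = [
    ("Step 1: Start application", 10000),
    ("Step 2: Identity details", 9200),
    ("Step 3: Income details", 8500),
    ("Step 4: Legal declarations", 8100),
    ("Step 5: Review and submit", 5600),
]

def drop_off_rates(funnel):
    """Return (step_name, pct_of_users_lost) for each step transition."""
    rates = []
    for (prev_name, prev_n), (name, n) in zip(funnel, funnel[1:]):
        rates.append((name, round(100 * (prev_n - n) / prev_n, 1)))
    return rates

for step, pct in drop_off_rates(funnel):
    print(f"{step}: {pct}% of users lost reaching this step")
```

With these numbers, the jump from step 4 to step 5 loses about 31% of users, which is the signal that tells you where to point a usability test. The analytics locate the problem; only qualitative research explains it.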
How to handle Section 508 and WCAG compliance in user research
Section 508 of the Rehabilitation Act requires federal agencies to make electronic and information technology accessible to people with disabilities. This is not a suggestion. It is a legal requirement that applies to every digital service, including the research process itself.
What researchers must do
Include participants with disabilities in every study. Do not treat accessibility testing as a separate activity. Include participants who use screen readers, voice navigation, switch devices, and screen magnification in your regular usability studies.
Test with actual assistive technology. Do not simulate. Real users interact with technology in ways that simulation cannot replicate. A screen reader user’s navigation patterns are fundamentally different from a sighted user’s.
Meet WCAG 2.1 AA as a minimum. The Web Content Accessibility Guidelines provide specific, testable criteria for accessible design. The revised Section 508 standards incorporate WCAG 2.0 Level AA by reference, but WCAG 2.1 AA is the widely adopted target and the safer baseline. Government services must meet at least Level AA conformance.
Accessibility research checklist
- At least 1 in 5 participants uses assistive technology
- Test environment supports screen readers (JAWS, NVDA, VoiceOver)
- Research materials (consent forms, task instructions) are accessible
- Remote testing platform supports assistive technology if testing remotely
- Findings include specific WCAG success criteria that pass or fail
- Remediation recommendations reference Section 508 standards
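Some WCAG checks can be automated to complement sessions with assistive technology users. Color contrast (success criterion 1.4.3, which requires at least 4.5:1 for normal text at Level AA) has a formula defined directly in the WCAG specification, sketched here:

```python
def relative_luminance(rgb):
    """Relative luminance of an sRGB color per WCAG 2.1 (channels 0-255)."""
    def linearize(c):
        c = c / 255
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (linearize(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg, bg):
    """WCAG contrast ratio: (lighter + 0.05) / (darker + 0.05)."""
    l1, l2 = sorted((relative_luminance(fg), relative_luminance(bg)), reverse=True)
    return (l1 + 0.05) / (l2 + 0.05)

# Black text on a white background: the maximum possible ratio.
ratio = contrast_ratio((0, 0, 0), (255, 255, 255))
print(round(ratio, 1))  # 21.0, well above the 4.5:1 AA threshold
```

Automated checks like this catch mechanical failures cheaply, but they are a supplement to, not a substitute for, testing with real assistive technology users.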
For a deeper walkthrough, see our guide on accessibility testing methods.
How to recruit government employees and citizens for user research
Recruiting participants for government research is harder than commercial recruitment. Government employees have restrictions on how they spend work time. Citizens who use government services are often hard-to-reach populations with limited digital access.
Recruiting government employees
Government employees face constraints that commercial participants do not:
- Union rules may require approval before employees participate in research during work hours
- Security clearances limit who can access certain systems and environments
- Manager approval is typically required and can take weeks
- Time constraints are severe. Government employees often have rigid schedules with no flexibility for a 60-minute research session
What works: Partner with agency HR or digital transformation offices. They can identify willing participants and handle internal approvals. Offer flexible scheduling (early morning, lunch hour, end of day). Keep sessions to 30-45 minutes.
Recruiting citizens and public users
Citizens who use government services are diverse in ways that commercial user bases rarely are:
- People with limited internet access or no smartphone
- People with low digital literacy
- Non-English speakers
- People with disabilities
- Elderly populations
- People in rural areas
What works: Recruit through the service itself (intercept users at government offices, add a feedback link to the digital service). Partner with community organizations, libraries, and social services agencies that serve your target population. Use phone-based research for participants without reliable internet.
For general participant recruitment strategies that apply across contexts, see our recruitment guide. If your audience is particularly specialized, our guide on recruiting niche research participants covers advanced sourcing strategies.
How to navigate the Paperwork Reduction Act in user research
The Paperwork Reduction Act (PRA) is the law that most often blocks or delays government user research. Understanding what it does and does not require saves months of wasted time.
What the PRA actually says
The PRA requires Office of Management and Budget (OMB) approval before a federal agency collects “substantially similar” information from 10 or more members of the public. This was designed to reduce the burden of government paperwork on citizens.
What requires PRA clearance
- Surveys sent to 10 or more members of the public
- Standardized questionnaires or structured interviews with 10+ public participants
- Online feedback forms that collect structured data at scale
What is PRA exempt
- Usability testing with direct observation (watching users complete tasks)
- Unstructured interviews where questions vary by participant
- Feedback collected from fewer than 10 people in any single collection
- Research with government employees (they are not “members of the public” under PRA)
- Voluntary customer satisfaction surveys covered by a generic clearance many agencies hold (strictly speaking a fast-track approval rather than an exemption, but it avoids a full OMB review for each study)
The US Digital Service (USDS) has published detailed guidance on PRA exemptions for user research. Read it before assuming your study needs clearance.
Practical advice
Design your research around PRA-exempt methods whenever possible. Usability testing with direct observation is the most powerful PRA-exempt method available. If you need a survey, check if your agency has a generic clearance that covers customer feedback collection. If not, budget 6-12 months for the OMB approval process.
Common challenges in government UX research and how to solve them
| Challenge | Why it happens | How to solve it |
|---|---|---|
| Limited budget for research | Procurement cycles prioritize technology over research | Frame research as risk mitigation. One usability study costs less than one failed launch |
| Manager resistance | Leaders unfamiliar with user research see it as a delay | Invite stakeholders to observe sessions. Seeing real users struggle changes minds faster than any presentation |
| Legacy systems block iteration | Services depend on outdated platforms that are hard to modify | Focus research on the interaction layer (content, forms, navigation) that can change without rebuilding backend systems |
| Difficulty accessing diverse users | Government users span every demographic, ability, and literacy level | Partner with community organizations, libraries, and advocacy groups that already serve target populations |
| Slow approval processes | PRA, procurement, and security reviews take months | Plan research 2-3 quarters ahead. Use PRA-exempt methods for iterative work |
| Siloed teams | Findings affect systems outside the researcher’s control | Share research broadly (research walls, video clips, cross-team workshops). Make findings impossible to ignore |
| Proving ROI to leadership | No competitor pressure means no obvious cost of bad UX | Measure call center volume, error rates, task completion rates, and time-on-task. These translate to dollars |
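The "translate to dollars" advice in the last row can be made concrete with back-of-envelope math. This sketch assumes a call-deflection framing; every figure is a hypothetical placeholder to be replaced with your agency's own call volumes and cost-per-call data.

```python
# Hypothetical before/after call volumes for a service whose top
# usability issues were fixed after a round of testing.
calls_per_month_before = 42000
calls_per_month_after = 35000   # assumed post-fix volume
cost_per_call = 8.50            # assumed fully loaded cost of one support call

monthly_savings = (calls_per_month_before - calls_per_month_after) * cost_per_call
annual_savings = monthly_savings * 12
print(f"Estimated annual savings: ${annual_savings:,.0f}")
```

Even rough numbers like these usually dwarf the cost of a usability study, which is exactly the risk-mitigation framing the table recommends for budget conversations.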
How government research standards compare globally
Different countries have established frameworks for government user research. Learning from all of them strengthens your practice.
United States. Digital.gov provides federal guidance. The USDS and 18F embed researchers in agency teams. Section 508 drives accessibility requirements.
United Kingdom. The GOV.UK Service Manual sets the global standard for government user research. The GDS model of embedding researchers in multidisciplinary teams has been adopted worldwide.
Australia. The Digital Transformation Agency publishes research guidance and mandates accessibility under the Disability Discrimination Act.
Canada. The Canadian Digital Service follows a similar model to GDS, with bilingual (English/French) research requirements adding complexity.
Each framework shares a common principle: build services based on observed user needs, not assumptions about what users need.
Frequently asked questions
Does government user research require IRB approval?
Not typically. Institutional Review Board approval applies to academic research and clinical trials. Government UX research to improve digital services is operational, not academic. However, if you are partnering with a university or conducting research on sensitive topics (health, benefits eligibility), check with your agency’s legal team.
Can government researchers use commercial research tools like UserTesting or Maze?
Yes, but procurement adds time. Many agencies have approved tool lists or blanket purchase agreements. Check with your agency’s IT procurement team before purchasing. Open-source alternatives and tools with FedRAMP authorization are often faster to approve.
How many participants do you need for government usability testing?
Five to eight participants per round catch roughly 80% of usability issues. For accessibility-specific testing, include at least 2-3 participants who use assistive technology per round. Run multiple rounds as the service evolves rather than one large study.
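The five-to-eight rule of thumb comes from the Nielsen/Landauer model, where the proportion of issues found with n participants is 1 − (1 − p)^n and p is the probability that a single participant encounters a given issue (about 0.31 in their original studies; treat that value as an assumption to calibrate against your own data).

```python
def proportion_found(n_participants, p=0.31):
    """Expected share of usability issues found, per the Nielsen/Landauer model."""
    return 1 - (1 - p) ** n_participants

for n in (1, 3, 5, 8):
    print(f"{n} participants: ~{proportion_found(n):.0%} of issues found")
```

With p = 0.31, five participants find about 84% of issues and eight find about 95%, which is why repeated small rounds beat one large study: each round catches most issues, and fixes between rounds surface new ones.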
Is user research different for state and local government versus federal?
The methods are the same, but the regulations differ. State and local governments are not subject to the PRA or Section 508 (though many states have equivalent accessibility laws). Budget constraints are often tighter, and digital maturity varies widely across jurisdictions.