User research accessibility compliance guide: Section 508, WCAG, and ADA for researchers

How to run user research that meets Section 508, WCAG, and ADA requirements. Includes checklists, recruitment tips, and testing best practices.

CleverX Team

Accessibility compliance in user research is not just about testing whether a product meets WCAG standards. It starts earlier than that. Your research process itself must be accessible. If a participant who uses a screen reader cannot complete your consent form, or a participant with a motor impairment cannot navigate your prototype, you have excluded the people whose feedback matters most.

Three frameworks govern accessibility compliance for researchers working with U.S. digital products: Section 508, WCAG, and ADA. Each has different scope, requirements, and implications for how you plan, recruit, test, and report. This guide breaks down what each requires from researchers specifically, not just from designers and developers.

Key takeaways

  • Section 508 applies to federal agencies and requires testing ICT with people with disabilities using Functional Performance Criteria. It is a legal mandate, not a best practice
  • WCAG 2.1 Level AA is the standard most organizations must meet. WCAG 2.2 adds criteria for cognitive accessibility and mobile interaction
  • ADA Title III does not mandate specific testing methods but courts consistently reference WCAG AA as the benchmark for compliance
  • Recruit participants based on assistive technology use and functional abilities, not diagnoses. Over-recruit by 15-25% to account for no-shows
  • Your research materials (consent forms, screeners, prototypes, task scripts) must be accessible before you test. An inaccessible research session produces unreliable accessibility data
  • Combine automated tools (WAVE, axe), expert audits, and real-user testing. Automated scans catch 30-40% of issues. Human testing catches the rest

What accessibility standards apply to user research?

Three overlapping frameworks define accessibility compliance for digital products in the United States. Each affects user research differently.

Section 508 of the Rehabilitation Act

Section 508 requires federal agencies to make information and communication technology (ICT) accessible to people with disabilities. It applies to all federal agencies and to any organization that receives federal funding or sells technology to the federal government.

What it means for researchers:

  • You must recruit participants with disabilities and test with them, not just run automated scans
  • Testing must follow the Functional Performance Criteria in the Revised Section 508 Standards (36 CFR Part 1194), which define how users with specific functional limitations interact with technology
  • Recruit based on abilities (low vision, limited mobility, cognitive impairment) rather than medical diagnoses
  • Section 508 testing is required before procurement, during development, and before launch

WCAG 2.1 and 2.2 (Web Content Accessibility Guidelines)

WCAG is published by the W3C and provides specific, testable success criteria organized into three conformance levels:

  • Level A: basic accessibility (alt text, keyboard access, captions). The minimum for any public-facing site
  • Level AA: standard accessibility (color contrast, text resizing, consistent navigation). Required by Section 508, referenced under the ADA, and adopted by most organizational policies
  • Level AAA: advanced accessibility (sign language for video, simplified reading level). Aspirational; few sites achieve full AAA

WCAG is built on four principles. All content must be:

  1. Perceivable. Users can see, hear, or otherwise sense all content (alt text, captions, sufficient contrast)
  2. Operable. Users can navigate and interact using keyboard, voice, or assistive devices
  3. Understandable. Content and interface behavior are clear and predictable
  4. Robust. Content works across browsers, devices, and assistive technologies

WCAG 2.2 additions (published 2023):

  • Dragging movements must have single-pointer alternatives
  • Minimum target size of 24x24 CSS pixels for interactive elements
  • Consistent help mechanisms across pages
  • Redundant entry prevention (do not force users to re-enter information already provided)

These additions matter for usability testing because they affect how users with motor and cognitive impairments interact with interfaces.
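
The 24x24 CSS pixel minimum is a simple numeric check you can build into prototype reviews. A minimal Python sketch, which deliberately ignores the spacing, inline-link, and essential-target exceptions the full criterion allows:

```python
def meets_target_size_minimum(width_px: float, height_px: float) -> bool:
    """WCAG 2.2 SC 2.5.8 (Target Size, Minimum): interactive targets
    should be at least 24x24 CSS pixels. The criterion's exceptions
    (spacing, inline links, essential targets) are not modeled here."""
    MIN_SIDE = 24
    return width_px >= MIN_SIDE and height_px >= MIN_SIDE

# A 44x20 button fails: one dimension is under 24 CSS pixels.
print(meets_target_size_minimum(44, 20))  # False
print(meets_target_size_minimum(24, 24))  # True
```

In practice you would feed this the bounding boxes reported by your browser's DevTools rather than hand-measured values.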

ADA Title III

The Americans with Disabilities Act Title III covers public accommodations, which courts have increasingly interpreted to include websites and mobile apps. There is no explicit ADA testing mandate, but Department of Justice guidance and court rulings consistently reference WCAG 2.1 Level AA as the compliance standard.

What it means for researchers: ADA does not tell you how to test. But if a lawsuit is filed, the court will look at whether the product meets WCAG AA. User research that includes participants with disabilities and documents WCAG conformance is the strongest evidence of compliance effort.

Accessibility compliance checklist for user research

Use this checklist for every research study. Items are grouped by phase.

Planning phase

  • Define which accessibility standards apply (Section 508, WCAG level, ADA)
  • Include accessibility-specific research questions in your study plan
  • Budget for accessibility accommodations (interpreters, assistive tech, additional session time)
  • Choose a research platform that supports assistive technology (screen readers, keyboard navigation, captions)
  • Allocate 50% more time per session for participants using assistive technology

Research materials

  • Consent forms are available in plain language (6th-8th grade reading level), large print, and digital accessible format
  • Screener surveys are keyboard-navigable and screen reader compatible
  • Task scripts use clear, simple language without jargon
  • Prototypes are keyboard-navigable (at minimum) before testing begins
  • Prototypes have sufficient color contrast (4.5:1 for normal text, 3:1 for large text)
  • All images in research materials have alt text
  • Video content has captions and audio descriptions where needed
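
The 4.5:1 and 3:1 thresholds in the checklist come from the WCAG definitions of relative luminance and contrast ratio, which are easy to compute directly. A quick Python check for vetting colors in research materials:

```python
def relative_luminance(rgb: tuple[int, int, int]) -> float:
    """Relative luminance of an sRGB color, per the WCAG definition."""
    def channel(c: int) -> float:
        c = c / 255
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (channel(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg: tuple[int, int, int], bg: tuple[int, int, int]) -> float:
    """WCAG contrast ratio: (L_lighter + 0.05) / (L_darker + 0.05)."""
    lighter, darker = sorted(
        (relative_luminance(fg), relative_luminance(bg)), reverse=True
    )
    return (lighter + 0.05) / (darker + 0.05)

# Black on white gives the maximum possible ratio, 21:1.
print(round(contrast_ratio((0, 0, 0), (255, 255, 255)), 1))  # 21.0
# Mid-gray #767676 on white just clears the 4.5:1 AA threshold.
print(contrast_ratio((118, 118, 118), (255, 255, 255)) >= 4.5)  # True
```

Tools like WAVE run the same calculation; a standalone function is handy for checking consent forms and slide decks that never pass through a browser extension.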

Recruitment

  • Recruitment screener asks about assistive technology use, not medical conditions
  • At least 1 in 5 participants uses assistive technology (screen readers, voice control, switch devices, magnification)
  • Participants represent multiple disability types (visual, auditory, motor, cognitive)
  • Over-recruited by 15-25% to account for higher no-show rates
  • Compensation is equal to or higher than for non-disabled participants (never lower; sessions typically run longer, not shorter)
  • Recruitment channels include disability organizations, accessibility communities, and assistive tech user groups

Testing session

  • Testing environment supports participant’s assistive technology
  • Physical venue (if in-person) is wheelchair accessible with accessible parking and restrooms
  • ASL interpreter or CART captioning available for deaf/hard-of-hearing participants
  • Remote sessions use a platform compatible with screen readers and keyboard navigation
  • Moderator trained on disability etiquette and adaptive facilitation techniques
  • Session length is flexible. Allow extra time without penalizing participants
  • Participant can use their own device and assistive technology if preferred

Analysis and reporting

  • Findings reference specific WCAG success criteria that pass or fail
  • Severity ratings account for impact on assistive technology users (a “minor” visual issue may be a “blocking” screen reader issue)
  • Even single-user accessibility issues are documented (one participant blocked = potentially thousands of real users blocked)
  • Report includes specific remediation recommendations tied to WCAG criteria
  • Findings report itself is accessible (proper heading structure, alt text, readable format)
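
To make the reporting items above concrete, each finding can be captured as a structured record tying the observation to a WCAG criterion, a severity, and a remediation. This is an illustrative sketch; the field names are hypothetical, not a standard schema:

```python
from dataclasses import dataclass

@dataclass
class AccessibilityFinding:
    """One research finding, structured per the checklist above.
    Field names are illustrative, not part of any standard."""
    description: str
    wcag_criterion: str        # e.g. "4.1.3 Status Messages"
    severity: str              # "blocking", "major", or "minor"
    at_impact: str             # effect on assistive technology users
    participants_affected: int # even a single blocked participant is reported
    recommendation: str

finding = AccessibilityFinding(
    description="Submit button gives no confirmation after activation",
    wcag_criterion="4.1.3 Status Messages",
    severity="blocking",
    at_impact="Screen reader users cannot tell whether the form submitted",
    participants_affected=1,
    recommendation="Announce submission status via an ARIA live region",
)
print(finding.severity)  # blocking
```

Structured findings make it trivial to sort a report by WCAG criterion or severity, and to track remediation across rounds of testing.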

How to recruit participants with disabilities for user research

Recruiting participants with disabilities requires different channels, different screening approaches, and different logistics than standard participant recruitment.

Where to find participants

  • National Federation of the Blind (NFB): screen reader and low-vision users. A well-organized community, responsive to research requests
  • National Association of the Deaf (NAD): deaf and hard-of-hearing users. Sessions require ASL interpretation
  • Assistive technology user groups: power users of specific AT (JAWS, Dragon, switch devices). Technical and experienced with research
  • Disability-focused nonprofits: a broad range of disabilities. Good for cognitive and developmental disabilities
  • University disability services: students with documented disabilities. A younger, tech-savvy demographic
  • Verified user research panels: pre-screened professionals with specific assistive tech experience. Faster recruitment with verified profiles
  • Accessibility consultancies: professional accessibility testers. Expensive but highly knowledgeable

For more on sourcing strategies for specialized audiences, see our guide on recruiting niche research participants.

Screening best practices

Ask about technology use, not diagnoses.

Good: “Do you use a screen reader to navigate websites? If so, which one (JAWS, NVDA, VoiceOver, other)?”

Bad: “Do you have a visual impairment? Please describe your condition.”

Screen for functional abilities that match your testing needs:

  • Screen reader users (for visual accessibility testing)
  • Keyboard-only navigation users (for motor accessibility testing)
  • Voice control users (Dragon NaturallySpeaking, Voice Control)
  • Screen magnification users (ZoomText, built-in OS magnification)
  • Switch device users (for severe motor impairments)
  • Users with cognitive disabilities (for plain language and comprehension testing)

Over-recruit by 15-25%. Participants with disabilities have higher cancellation and no-show rates due to health variability, transportation challenges, and caregiver schedules. This is not unreliability. It is the reality of living with a disability. Plan for it without judgment.
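
The over-recruitment math is simple enough to script when planning a study. A small sketch, assuming the 15-25% buffer recommended above:

```python
import math

def recruitment_target(completes_needed: int, buffer: float = 0.20) -> int:
    """Sessions to schedule so that no-shows still leave enough
    completed sessions. buffer is the over-recruitment rate
    (0.15-0.25 per the guidance above)."""
    return math.ceil(completes_needed * (1 + buffer))

# 5 screen reader users needed with a 25% buffer: schedule 7 sessions.
print(recruitment_target(5, 0.25))  # 7
```

Apply the calculation per assistive technology type, since each group needs its own quota of completed sessions.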

Compensation and logistics

Pay participants with disabilities at least the same rate as other participants. Many accessibility researchers recommend a premium ($75-100 for 60 minutes vs a standard $50-75) because:

  • Sessions often require more preparation from the participant (setting up assistive tech, arranging transportation)
  • The participant’s specialized expertise has market value
  • Fair compensation signals respect and attracts experienced participants

Provide session details in accessible formats. Include information about physical accessibility of the venue, availability of parking, restroom accessibility, and whether ASL interpretation or CART will be provided.

How to combine automated tools, expert audits, and user testing

No single method catches all accessibility issues. The most reliable approach combines three layers.

Layer 1: Automated scanning (catches 30-40% of issues)

  • WAVE: browser extension that flags WCAG violations visually. Free
  • axe DevTools: browser extension with detailed WCAG violation reports. Free basic version; paid advanced features
  • Lighthouse: built into Chrome DevTools and includes an accessibility audit. Free
  • Pa11y: command-line tool for CI/CD pipeline integration. Free and open source

Run automated scans first. Fix all flagged issues before recruiting participants. Testing a product with known automated failures wastes participant time and produces unreliable data.
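
For the CI/CD route, Pa11y's companion runner pa11y-ci reads a config file listing the URLs to scan on every build. A minimal sketch of such a config; the URLs are placeholders, and the exact option names should be checked against the pa11y-ci documentation:

```json
{
  "defaults": {
    "standard": "WCAG2AA",
    "timeout": 30000
  },
  "urls": [
    "https://example.com/",
    "https://example.com/signup"
  ]
}
```

Failing the build on new violations keeps the product clean enough that participant sessions test real interaction problems, not regressions automation would have caught.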

Layer 2: Expert audit (catches 60-70% of issues)

An accessibility expert manually reviews the product against WCAG success criteria. This catches issues that automated tools miss:

  • Logical reading order for screen readers
  • Meaningful link text (not “click here”)
  • Correct use of ARIA labels
  • Form error handling and recovery
  • Focus management in dynamic content
  • Cognitive load and plain language

Layer 3: User testing with assistive technology (catches 80-95% of issues)

Real users with disabilities testing real tasks on real assistive technology. This is the only method that reveals how the product actually works in practice.

What user testing catches that automation and expert review miss:

  • Interaction patterns unique to specific assistive technology (JAWS handles tables differently than NVDA)
  • Cognitive barriers in content comprehension
  • Frustration points and workarounds that experienced AT users have developed
  • Trust and confidence issues (“I am not sure if my form submitted correctly”)
  • Real-world device and browser combinations

How to run accessible usability testing sessions

Moderation techniques for accessible sessions

Let participants use their own setup. Participants who use assistive technology daily have configurations, shortcuts, and workflows optimized for their needs. Asking them to use an unfamiliar computer or a research lab setup introduces friction that has nothing to do with your product.

Allow extra time. A task that takes a sighted user 3 minutes may take a screen reader user 8-10 minutes. This is normal. Do not rush participants or interpret slower completion as a usability failure unless the participant expresses frustration.

Ask about the experience, not the disability. Focus on what happened during the task: “I noticed you paused at that form field. What was happening there?” Not: “Was that hard because of your vision?”

Record assistive technology output. For remote sessions, ask participants to share their screen with audio so you can hear the screen reader output. For in-person sessions, position a microphone to capture screen reader speech.

Remote vs in-person considerations

  • Participant comfort: higher remote (own environment and AT setup), lower in person (unfamiliar space and equipment)
  • AT compatibility: remote depends on the video platform; in person gives full control over the test environment
  • Observation quality: remote offers a limited view of physical interaction; in person you can see posture, gestures, and frustration cues
  • Logistics: remote is easier (no transportation barriers); in person requires an accessible venue, parking, and restrooms
  • Best for: remote suits screen reader, magnification, and voice control testing; in person suits switch device, mobility-related, and cognitive testing

Remote sessions are generally preferred for accessibility research because participants use their own equipment and environment. Use in-person sessions when you need to observe physical interaction with devices or when participants have limited internet access.

Differences between WCAG 2.1 and WCAG 2.2 that affect research

  • Dragging movements (2.5.7): drag-and-drop must have a single-pointer alternative. Test drag interactions with keyboard-only and switch users
  • Target size minimum (2.5.8): interactive targets must be at least 24x24 CSS pixels. Measure tap and click targets during motor impairment testing
  • Consistent help (3.2.6): help mechanisms must appear in the same location across pages. Track whether AT users can reliably find help
  • Redundant entry (3.3.7): do not force re-entry of previously provided information. Test multi-step forms with participants with cognitive disabilities
  • Accessible authentication (3.3.8): no cognitive function test for login (no CAPTCHA puzzles). Test login flows with screen reader users and participants with cognitive disabilities

If your organization targets WCAG 2.2 compliance, your usability testing tasks must specifically cover these new criteria.

Frequently asked questions

Do I need separate accessibility testing or can I include it in regular usability testing?

Both. Include participants with disabilities in your regular usability studies so accessibility is part of every round of testing. Run dedicated accessibility audits (automated + expert + user testing) at major milestones. The regular studies catch integration issues. The dedicated audits catch systematic compliance gaps.

How many participants with disabilities do I need per study?

Three to five per assistive technology type per round. If you are testing screen reader accessibility, recruit 3-5 screen reader users. If also testing motor accessibility, add 3-5 keyboard-only or switch users. One participant using one AT is not sufficient because AT usage patterns vary significantly between individuals.

What if my prototype is not accessible enough to test with AT users?

Fix the blocking issues first. At minimum, ensure keyboard navigation works and basic screen reader compatibility exists (heading structure, form labels, alt text). If the prototype is too broken for AT users, run an expert audit first, fix critical issues, then recruit. Testing a severely inaccessible prototype frustrates participants and produces findings you already know.

Is WCAG 2.1 AA still sufficient or should we target WCAG 2.2?

WCAG 2.1 AA remains the legal standard referenced by Section 508 and most court rulings. However, WCAG 2.2 adds criteria that significantly improve the experience for users with motor and cognitive disabilities. When working in government and public sector contexts, target 2.1 AA as the compliance floor and 2.2 AA as the quality target.