Clinical UX research best practices: a guide for healthcare product teams
Best practices for conducting UX research in clinical healthcare settings. Covers EHR usability testing, clinical workflow observation, FDA digital health guidelines, recruiting clinicians, and methods that reduce documentation burden by 25-40%.
Clinical UX research studies how clinicians, nurses, pharmacists, and clinical staff interact with software in healthcare delivery settings: EHRs, clinical decision support, medication administration systems, radiology viewers, nursing documentation, and point-of-care tools. The users are healthcare professionals working under time pressure, cognitive overload, and life-or-death consequences. The environment is a hospital, clinic, or care facility where interruptions happen every 3-4 minutes and a usability failure can become a patient safety event.
Research consistently shows that clinical UX improvements produce measurable outcomes: 25-40% reduction in documentation time (AMIA), decreased order entry errors, reduced alert fatigue, and lower clinician burnout. The FDA’s Digital Health Center of Excellence recognizes that software usability directly affects patient safety, and the ONC’s usability guidance positions provider burden reduction as a national health IT priority.
This guide covers how UX teams conduct effective research in clinical environments while navigating the time, access, privacy, and regulatory constraints unique to healthcare delivery settings.
For pharmaceutical software compliance (FDA human factors, IRB, 21 CFR Part 11), see our pharma compliance guide. For patient experience research (patients as users), see our patient experience guide. For HIPAA compliance specifics, see our HIPAA-compliant research guide.
Key takeaways
- Clinical UX research must happen in or near clinical environments. Lab-based testing with clinicians misses the interruptions, multitasking, and environmental noise that define real clinical workflows
- Clinicians have 30-45 minutes maximum for research sessions. Design protocols that capture critical data within this window. Do not plan 90-minute sessions with physicians
- EHR usability is the dominant clinical UX research topic because clinicians spend 1-2 hours per day on documentation. Research that reduces this burden has the highest organizational buy-in
- The FDA Digital Health Center of Excellence provides guidance on software usability for clinical tools. Aligning your research with FDA expectations strengthens both the product and the regulatory position
- Clinical UX research produces ROI that healthcare organizations measure: fewer medical errors, reduced documentation time, lower burnout scores, and improved clinician satisfaction. Quantify these outcomes in your research reporting
What makes clinical UX research different?
Six factors distinguish clinical UX research from other product research.
1. The environment is hostile to research. Clinical settings are loud, bright, crowded, and interrupt-driven. Clinicians are paged, called, and physically approached every 3-4 minutes during patient care. Your research must work within this reality, not fight against it.
2. Time is the scarcest resource. Physicians have 10-15 minutes between patients. Nurses document during care delivery while multitasking. No one has a free hour. Research sessions must be 30-45 minutes maximum, often broken into 15-minute blocks.
3. Safety stakes elevate everything. A usability error in a consumer app causes frustration. A usability error in a medication administration system can cause patient harm. Research must prioritize safety-critical workflows and document use errors with the rigor that patient safety demands.
4. Regulatory alignment adds value. The FDA, ONC, and AHRQ all recognize EHR usability as a patient safety issue. Research that aligns with these frameworks strengthens both the product and the regulatory position.
5. The user is an expert with deep domain knowledge. Clinicians are not confused by complexity. They are frustrated by unnecessary complexity. A radiologist who reads 50 studies per day does not need a simple interface. They need an efficient one that matches their mental model and workflow speed.
6. Change resistance is evidence-based. Clinicians resist workflow changes not because they are technophobic but because they have learned, often through patient safety events, that untested changes create risk. Research must earn clinical trust by demonstrating rigorous methodology and a commitment to safety.
Which research methods work in clinical settings?
| Method | Clinical adaptation | Session length | Best for |
|---|---|---|---|
| Clinical shadowing | Observe during actual patient care. Stand back, do not interact unless invited. Note workflow patterns, software interactions, workarounds | 2-4 hours per shift | Understanding real clinical workflows and where software fits (or fails to fit) |
| Simulation-based usability testing | Test in a simulation lab or mock clinical environment with realistic patient scenarios and clinical data | 30-45 minutes | Testing safety-critical workflows (medication ordering, alert response) without risking patients |
| Rapid usability testing | 15-20 minute sessions between patients or during quiet periods. 3-5 focused tasks maximum | 15-20 minutes | Testing specific features or screens when longer sessions are impossible |
| Think-aloud with clinicians | Modified protocol: clinicians narrate while performing realistic clinical tasks on the software | 30-45 minutes | Understanding clinical reasoning during software interaction |
| Contextual inquiry at the bedside | Observe clinicians using software during actual patient care, with clinician and patient consent | 1-2 hours | Seeing how software is used at the point of care, including workarounds |
| Shift-end interviews | 10-15 minute interviews immediately after a shift, when experiences are fresh | 10-15 minutes | Capturing friction points from the entire shift in a brief window |
| Heuristic evaluation (clinical) | Expert evaluation against clinical usability standards (ONC guidelines, NIST usability frameworks) | No clinician time required | Quick assessment of clinical interfaces against established standards |
| Clinical diary studies | Clinicians log software frustrations and workarounds via brief entries (text or voice) during or after shifts | 1-2 weeks | Tracking cumulative friction across multiple shifts and patient encounters |
| Eye tracking in clinical simulation | Track gaze patterns on clinical displays (monitors, dashboards, alert pop-ups) during simulated tasks | 30-45 minutes | Understanding information hierarchy on dense clinical screens |
| Alert response testing | Present clinical alerts (drug interactions, critical results, dosing warnings) and measure response time, comprehension, and action | 15-30 minutes | Evaluating alert design, reducing alert fatigue |
How to conduct clinical shadowing
Clinical shadowing (observing clinicians during actual patient care) is the highest-value clinical UX method because it reveals the real workflow that no simulation can replicate.
Getting access
Step 1: Champion identification. Find a clinical informaticist, CMIO (Chief Medical Information Officer), or nursing informatics lead who sponsors your research. Without a clinical champion, you will not get floor access.
Step 2: Compliance approval. Complete:
- Hospital IRB or quality improvement determination (is this research or QI?)
- HIPAA training certification
- Facility-specific orientation and badge
- Background check (required by many facilities)
- Flu vaccination and health screening (required for patient care areas)
Step 3: Unit coordination. Work with the charge nurse or unit manager to schedule observation during appropriate times. Avoid: active codes, procedures, shift change chaos, and times when the unit is short-staffed.
Timeline: 4-8 weeks from first contact to first observation day.
During shadowing (2-4 hours)
What to observe:
| Focus area | What to watch for | How to document |
|---|---|---|
| Software interaction frequency | How many times does the clinician interact with the EHR per patient? How long is each interaction? | Tally marks per interaction, approximate duration |
| Multitasking patterns | Does the clinician document while caring for the patient, or batch documentation later? | Note documentation timing relative to patient encounters |
| Workarounds | Paper notes, sticky notes on monitors, personal spreadsheets, or verbal relays that substitute for software functions | Photograph (with permission) or sketch workarounds |
| Interruption impact | What happens when a clinician is interrupted mid-task in the software? Can they resume or do they restart? | Count interruptions and note recovery behavior |
| Cross-system switching | How many software systems does the clinician use during a single patient encounter? | List systems and note switching frequency |
| Physical environment | Screen placement, lighting, noise, number of monitors, shared vs. individual workstations | Sketch the workstation layout |
| Alert response | How does the clinician respond to alerts? Read and act, read and override, or dismiss without reading? | Categorize alert responses: act / override / dismiss |
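The tally-and-note documentation suggested above lends itself to a lightweight structured log rather than free-form notes. A minimal Python sketch; every field name here is an illustrative assumption, not a standard observation schema:

```python
from collections import Counter
from dataclasses import dataclass, field

@dataclass
class ShadowingLog:
    """Structured tally sheet for one clinical shadowing session."""
    clinician_role: str
    unit: str
    ehr_interactions: int = 0                 # tally marks per patient encounter
    interruptions: int = 0
    resumed_after_interruption: int = 0
    systems_used: set = field(default_factory=set)
    alert_responses: Counter = field(default_factory=Counter)  # act / override / dismiss
    workarounds: list = field(default_factory=list)

    def interruption_recovery_rate(self) -> float:
        """Share of interruptions after which the clinician resumed (vs. restarted)."""
        if self.interruptions == 0:
            return 1.0
        return self.resumed_after_interruption / self.interruptions

# Example: a hypothetical 3-hour med-surg observation
log = ShadowingLog(clinician_role="bedside RN", unit="med-surg")
log.interruptions = 12
log.resumed_after_interruption = 9
log.systems_used.update({"EHR", "pharmacy", "secure messaging"})
log.alert_responses.update(["override"] * 7 + ["act"] * 2 + ["dismiss"] * 3)
log.workarounds.append("paper brain sheet taped to workstation")

print(round(log.interruption_recovery_rate(), 2))  # 0.75
```

A log like this makes cross-session comparison trivial: interruption recovery rates and alert-response mixes can be aggregated across units without re-reading field notes.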
Do not:
- Distract the clinician during patient care
- Touch any equipment or software
- Look at patient-identifiable information unless your IRB protocol permits it
- Photograph patient information (even accidentally)
- Interfere with any clinical workflow
Post-shadowing debrief (15 minutes)
After the observation, ask 3-5 focused questions:
- “I noticed you [specific observation]. Can you explain what was happening?”
- “Where in your workflow do you feel the software slows you down the most?”
- “Are there any tasks where you have built a workaround because the software does not support what you need?”
How to test EHR usability
EHR usability testing is the most common and most impactful clinical UX research. Clinicians spend an average of 1-2 hours per day on EHR documentation (Annals of Internal Medicine), and usability improvements can reduce this by 25-40%.
EHR testing scenarios
| Scenario | What it tests | Safety criticality | Key metrics |
|---|---|---|---|
| “Place an order for [medication] for this patient” | CPOE workflow, drug search, dose selection, interaction alerts | High (medication errors are the #1 EHR safety concern) | Time to complete, errors, alert response |
| “Document this patient encounter” | Note entry, template usage, problem list update, orders | Medium (documentation quality affects care continuity) | Time, completeness, cognitive load (NASA-TLX) |
| “Review this patient’s lab results and take appropriate action” | Results review, critical result identification, follow-up ordering | High (missed critical results are a patient safety issue) | Time to identify critical result, action accuracy |
| “Respond to this clinical alert” | Alert design, comprehension, appropriate response | High (alert fatigue is a major patient safety concern) | Response time, override rate, comprehension accuracy |
| “Hand off this patient’s care to the incoming team” | Handoff documentation, summary generation, pending orders | High (handoff failures are a leading cause of adverse events) | Information completeness, time, receiving team comprehension |
| “Find the information you need to answer a patient’s question about their medication” | Information retrieval, medication list navigation, patient-facing information | Medium | Time to find, information accuracy |
Alert fatigue testing
Clinical alert fatigue, where clinicians override or dismiss alerts because there are too many, is one of the most critical clinical UX problems. Research shows that 72-96% of clinical alerts are overridden (JAMIA), meaning the alert system has effectively stopped functioning.
Alert testing protocol:
- Present 15-20 alerts during a simulated clinical workflow (mix of clinically significant and non-significant)
- Measure: override rate, time to respond, comprehension of alert content, and action taken
- Compare: do clinicians treat high-severity alerts differently from low-severity? If not, the severity system has failed
- Ask: “Which of these alerts would you actually change your clinical decision for?” The answer reveals which alerts have clinical value and which are noise
Alert fatigue metrics:
| Metric | Current state (industry average) | Target after redesign |
|---|---|---|
| Alert override rate | 72-96% | <50% (indicates alerts are more clinically relevant) |
| Time to dismiss non-critical alert | <2 seconds (dismissed without reading) | >5 seconds (indicates reading before acting) |
| Critical alert response accuracy | 60-70% appropriate response | >90% appropriate response |
| Alerts per prescriber per day | 50-100+ | <20 (after rationalization) |
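The override-rate and timing targets above reduce to a handful of computations over per-alert session logs. A minimal sketch; the record fields (`severity`, `response`, `seconds`) are illustrative assumptions, not a standard logging schema:

```python
from statistics import median

# One record per alert presented during the simulated workflow (illustrative data).
alerts = [
    {"severity": "high", "response": "act",      "seconds": 6.1},
    {"severity": "high", "response": "override", "seconds": 1.4},
    {"severity": "low",  "response": "dismiss",  "seconds": 0.9},
    {"severity": "low",  "response": "dismiss",  "seconds": 1.2},
    {"severity": "low",  "response": "override", "seconds": 1.1},
    {"severity": "high", "response": "act",      "seconds": 7.8},
]

def override_rate(records):
    """Share of alerts not acted on (overridden or dismissed)."""
    ignored = sum(1 for r in records if r["response"] in ("override", "dismiss"))
    return ignored / len(records)

def median_time_to_clear_noncritical(records):
    """Median seconds to clear low-severity alerts; under ~2s suggests
    dismissal without reading."""
    return median(r["seconds"] for r in records if r["severity"] == "low")

def override_rate_by_severity(records):
    """If high- and low-severity rates are similar, the severity system has failed."""
    return {
        sev: round(override_rate([r for r in records if r["severity"] == sev]), 2)
        for sev in ("high", "low")
    }

print(round(override_rate(alerts), 2))            # 0.67
print(median_time_to_clear_noncritical(alerts))   # 1.1
print(override_rate_by_severity(alerts))          # {'high': 0.33, 'low': 1.0}
```

In this toy data the severity system is working (high-severity alerts are overridden far less often than low-severity ones); in real sessions, near-identical rates across severities are the signature of alert fatigue.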
How to adapt for the FDA digital health framework
The FDA’s Digital Health Center of Excellence provides guidance that affects clinical software UX research. Key frameworks:
FDA digital health categories relevant to clinical UX
| Category | Examples | UX research implication |
|---|---|---|
| Software as a Medical Device (SaMD) | Clinical decision support that provides diagnoses or treatment recommendations | Requires formative and summative usability testing per IEC 62366-1 |
| Clinical Decision Support (CDS) | Drug interaction checkers, dosing calculators, diagnostic aids | FDA evaluates based on risk: higher-risk CDS requires more rigorous usability evidence |
| Digital therapeutics | Software that delivers evidence-based therapeutic interventions | Requires clinical evidence of efficacy, which includes usability as a component of treatment delivery |
| Remote patient monitoring | Tools clinicians use to monitor patients outside the facility | Usability research must cover both the clinician dashboard and the patient device |
| EHR modules | ONC-certified EHR features (e-prescribing, clinical notes, lab review) | Must meet ONC usability requirements for certification |
Aligning UX research with FDA expectations
When your clinical software may be classified as SaMD or regulated CDS, align your research with FDA human factors expectations:
- Conduct use-related risk analysis (URRA) before designing test scenarios
- Document all use errors, close calls, and difficulties with root cause analysis
- Test with representative users in representative use environments
- Maintain traceability from observed problem to design change to verification
- Distinguish between formative (iterative) and summative (validation) studies
This alignment is not just regulatory compliance. It produces better research because the FDA framework forces systematic attention to safety-critical interactions.
How to recruit clinicians for research
Role segmentation
| Role | Availability | Best session format | Incentive range |
|---|---|---|---|
| Physician (attending) | Extremely limited. 15-30 min max | Rapid testing between patients, shift-end interview | $300-500/hr |
| Resident / fellow | Limited but more flexible than attendings | 30-45 min during academic time or between rotations | $150-250/hr |
| Nurse (bedside) | Limited during shift. Available before/after shift | Rapid testing during quiet periods, diary study during shift | $125-200/hr |
| Nurse practitioner / PA | Moderate availability | 30-45 min sessions, shift-end interviews | $175-275/hr |
| Pharmacist | Moderate availability (clinical pharmacy) | 30-45 min sessions, medication workflow observation | $150-250/hr |
| Clinical informaticist | Most available of clinical roles | 45-60 min sessions, extended observation | $150-250/hr |
| Medical assistant / tech | Available during breaks or slow periods | 15-20 min rapid sessions | $75-125/hr |
Where to find clinician participants
- Hospital partnerships. The clinical informatics team or CMIO office is your entry point. They can identify willing clinicians across departments
- Clinical societies. AMA, ANA (nursing), ASHP (pharmacy), AMIA (informatics), specialty-specific societies
- LinkedIn targeting. Search by clinical title + institution type + specialty
- CleverX verified B2B panels. Pre-screened healthcare professionals filtered by role, specialty, and EHR experience
- Medical conferences. HIMSS, AMIA Annual Symposium, clinical specialty conferences
Scheduling considerations
- Never schedule during patient care hours without explicit unit approval and patient safety assurance
- Academic medical centers have protected academic time (usually half a day per week) when physicians are available for non-clinical activities
- Nursing shifts have a 30-minute overlap during handoff that, with charge nurse approval, can accommodate rapid research
- Early morning (6-7am before clinic starts) and late afternoon (4-5pm after clinic) are the most common clinician research windows
- Over-recruit by 40%. Clinical emergencies cause last-minute cancellations at higher rates than any other B2B segment
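The 40% over-recruitment rule is simple to operationalize: book ceil(target × 1.4) participants per study. A minimal helper, assuming the 0.4 buffer from the rule of thumb above (tune it to your own cancellation history):

```python
import math

def sessions_to_schedule(target_completes: int, overbook: float = 0.4) -> int:
    """Participants to book so cancellations still leave `target_completes` sessions.

    `overbook` is the over-recruitment fraction (0.4 = the 40% rule of thumb
    for clinical participants; adjust per segment).
    """
    return math.ceil(target_completes * (1 + overbook))

print(sessions_to_schedule(8))  # 12
```

So a study needing 8 completed physician sessions should have 12 on the calendar.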
Screening questions
- What is your primary clinical role and setting? (Open text. Identifies role, specialty, and care environment)
- Which clinical software systems do you use daily? (Open text. Filters for EHR experience: Epic, Cerner, Meditech, Allscripts, etc.)
- How many hours per day do you spend on clinical documentation? (Range. Indicates documentation burden)
- Describe a recent moment when your clinical software frustrated you. (Articulation check)
- How many years in clinical practice? (Range. Segments by experience)
Clinical UX research metrics
| Metric | What it measures | How to capture | Target |
|---|---|---|---|
| Documentation time per encounter | How long clinicians spend documenting in the EHR per patient | Time observation or EHR log data | Decreasing after design changes (25-40% reduction is achievable) |
| Order entry error rate | Mistakes during medication ordering, lab ordering, referral placement | Simulation testing with known correct orders | <2% for medication orders |
| Alert override rate | Percentage of clinical alerts dismissed without action | EHR audit log data | <50% (down from 72-96% baseline) |
| Time to critical information | How quickly a clinician finds a specific piece of clinical data | Timed information retrieval task | <30 seconds for common lookups |
| Clinical workflow interruption recovery | Can clinicians resume a task after an interruption without restarting? | Observation during simulated interrupted workflows | >85% successful resumption |
| System Usability Scale (SUS) | Overall usability perception | Post-session SUS questionnaire | >68 (industry average for clinical software is 45-60) |
| NASA-TLX cognitive load | Mental workload during clinical tasks | Post-task NASA-TLX assessment | Decreasing between design iterations |
| Clinician satisfaction (burnout proxy) | Overall satisfaction with clinical tools | Annual survey, correlated with burnout measures | Increasing year over year |
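The SUS target in the table follows the scale’s standard scoring rule: across the ten 1-5 Likert items, odd-numbered (positively worded) items contribute (response - 1), even-numbered (negatively worded) items contribute (5 - response), and the 0-40 raw sum is scaled by 2.5 to a 0-100 score. A sketch:

```python
def sus_score(responses):
    """Standard System Usability Scale scoring for ten 1-5 Likert responses.

    Odd-numbered items (1-indexed) are positively worded: contribute (r - 1).
    Even-numbered items are negatively worded: contribute (5 - r).
    The raw 0-40 sum is scaled by 2.5 to a 0-100 score.
    """
    if len(responses) != 10:
        raise ValueError("SUS requires exactly 10 item responses")
    raw = sum(
        (r - 1) if i % 2 == 0 else (5 - r)  # i is 0-indexed, so even i = odd item
        for i, r in enumerate(responses)
    )
    return raw * 2.5

# A hypothetical clinician session with mixed responses
print(sus_score([4, 2, 4, 2, 3, 3, 4, 2, 4, 2]))  # 70.0
```

A 70 clears the 68 cross-industry benchmark; against the 45-60 range typical of clinical software it would be a strong result.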
Common clinical UX research findings
Documentation is the #1 pain point. In every clinical UX study, documentation burden dominates. Clinicians spend more time on the EHR than with patients (Annals of Internal Medicine). Any research finding that reduces documentation time has immediate organizational buy-in.
Workarounds are everywhere. Clinicians build elaborate workarounds: paper lists taped to workstations, personal spreadsheets of patient data, verbal handoffs that bypass the software, and copy-paste documentation templates that propagate outdated information. Each workaround is a product gap.
Alert fatigue is universal. Clinical alert systems have cried wolf so many times that clinicians override 72-96% of alerts without reading them. The most dangerous consequence: genuinely critical alerts get dismissed along with the noise.
Inter-system friction wastes hours per day. Clinicians switch between 5-10 clinical systems per shift (EHR, PACS, pharmacy, lab, messaging, scheduling). The switching cost is not just time. It is cognitive load and error risk at every transition.
Mobile is coming but not there yet. Clinicians want mobile access for rounding, but current mobile EHR experiences are poor. Research reveals specific mobile needs: read-only access for reference, quick documentation capture, and communication with the care team.
Frequently asked questions
How is clinical UX research different from patient experience research?
Patient experience research studies how patients interact with healthcare products (portals, apps, telehealth). Clinical UX research studies how clinicians interact with clinical software during care delivery (EHR, CPOE, CDS). Different users, different environments, different success criteria. Patient research asks “Can patients manage their health?” Clinical research asks “Can clinicians deliver care efficiently and safely?”
Do you need IRB approval for clinical UX research?
If your research involves observing patient care (even indirectly) or if findings will be used for regulatory submission, IRB review is required or strongly recommended. Many clinical UX studies qualify as quality improvement (QI) rather than research, which may have a lighter review process. Submit a QI vs. research determination to your institution’s IRB before starting. See our pharma compliance guide for detailed IRB guidance.
How do you research clinical software without disrupting patient care?
Three approaches: (1) Observe during actual care from a non-disruptive position (shadowing). (2) Test in a clinical simulation lab that replicates the environment without real patients. (3) Conduct rapid sessions during scheduled downtimes, breaks, or academic time. Never ask clinicians to pause patient care for your research. The research must fit around the clinical workflow, not the other way around.
What is the most impactful clinical UX research investment?
EHR documentation workflow research. It affects every clinician, every day, for every patient. Improvements in documentation efficiency (reducing clicks, improving templates, enabling voice input, streamlining note generation) produce measurable time savings that multiply across thousands of encounters. If you can only do one clinical UX study, study documentation.
How do you measure the ROI of clinical UX research?
Measure before and after: documentation time per encounter (minutes saved per encounter x encounters per day x workdays per year x clinicians ÷ 60 = hours returned to patient care), order entry errors (errors prevented x cost per error), alert appropriateness (override rate reduction), and clinician satisfaction scores. Healthcare organizations respond to quantified outcomes. “We reduced documentation time by 8 minutes per physician per day across 200 physicians” translates to roughly 5,900 hours of physician time returned to patient care annually (8 x 200 x 220 workdays ÷ 60).
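The documentation-time formula above, with units made explicit. All inputs in this sketch are illustrative placeholders; substitute your own before/after measurements:

```python
def physician_hours_returned(minutes_saved_per_encounter: float,
                             encounters_per_day: float,
                             workdays_per_year: int,
                             physicians: int) -> float:
    """Annual physician-hours returned to patient care by a per-encounter
    documentation-time reduction, spread across a physician group."""
    minutes_per_year = (minutes_saved_per_encounter
                        * encounters_per_day
                        * workdays_per_year
                        * physicians)
    return minutes_per_year / 60

# Illustrative inputs: 3 min saved per encounter, 15 encounters/day,
# 220 clinic days/year, 50 physicians
print(f"{physician_hours_returned(3, 15, 220, 50):,.0f} hours/year")  # 8,250 hours/year
```

Even small per-encounter savings multiply into thousands of hours once encounter volume and headcount are factored in, which is why documentation research has the easiest ROI story to tell.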