Clinical UX research best practices: a guide for healthcare product teams


Clinical UX research studies how clinicians, nurses, pharmacists, and clinical staff interact with software in healthcare delivery settings: EHRs, clinical decision support, medication administration systems, radiology viewers, nursing documentation, and point-of-care tools. The users are healthcare professionals working under time pressure, cognitive overload, and life-or-death consequences. The environment is a hospital, clinic, or care facility where interruptions happen every 3-4 minutes and a usability failure can become a patient safety event.

Research consistently shows that clinical UX improvements produce measurable outcomes: 25-40% reduction in documentation time (AMIA), decreased order entry errors, reduced alert fatigue, and lower clinician burnout. The FDA’s Digital Health Center of Excellence recognizes that software usability directly affects patient safety, and the ONC’s usability guidance positions provider burden reduction as a national health IT priority.

This guide covers how UX teams conduct effective research in clinical environments while navigating the time, access, privacy, and regulatory constraints unique to healthcare delivery settings.

For pharmaceutical software compliance (FDA human factors, IRB, 21 CFR Part 11), see our pharma compliance guide. For patient experience research (patients as users), see our patient experience guide. For HIPAA compliance specifics, see our HIPAA-compliant research guide.

Key takeaways

  • Clinical UX research must happen in or near clinical environments. Lab-based testing with clinicians misses the interruptions, multitasking, and environmental noise that define real clinical workflows
  • Clinicians have 30-45 minutes maximum for research sessions. Design protocols that capture critical data within this window. Do not plan 90-minute sessions with physicians
  • EHR usability is the dominant clinical UX research topic because clinicians spend 1-2 hours per day on documentation. Research that reduces this burden has the highest organizational buy-in
  • The FDA Digital Health Center of Excellence provides guidance on software usability for clinical tools. Aligning your research with FDA expectations strengthens both the product and the regulatory position
  • Clinical UX research produces ROI that healthcare organizations measure: fewer medical errors, reduced documentation time, lower burnout scores, and improved clinician satisfaction. Quantify these outcomes in your research reporting

What makes clinical UX research different?

Six factors distinguish clinical UX research from other product research.

1. The environment is hostile to research. Clinical settings are loud, bright, crowded, and interrupt-driven. Clinicians are paged, called, and physically approached every 3-4 minutes during patient care. Your research must work within this reality, not fight against it.

2. Time is the scarcest resource. Physicians have 10-15 minutes between patients. Nurses document during care delivery while multitasking. No one has a free hour. Research sessions must be 30-45 minutes maximum, often broken into 15-minute blocks.

3. Safety stakes elevate everything. A usability error in a consumer app causes frustration. A usability error in a medication administration system can cause patient harm. Research must prioritize safety-critical workflows and document use errors with the rigor that patient safety demands.

4. Regulatory alignment adds value. The FDA, ONC, and AHRQ all recognize EHR usability as a patient safety issue. Research that aligns with these frameworks strengthens both the product and the regulatory position.

5. The user is an expert with deep domain knowledge. Clinicians are not confused by complexity. They are frustrated by unnecessary complexity. A radiologist who reads 50 studies per day does not need a simple interface. They need an efficient one that matches their mental model and workflow speed.

6. Change resistance is evidence-based. Clinicians resist workflow changes not because they are technophobic but because they have learned, often through patient safety events, that untested changes create risk. Research must earn clinical trust by demonstrating rigorous methodology and a commitment to safety.

Which research methods work in clinical settings?

| Method | Clinical adaptation | Session length | Best for |
| --- | --- | --- | --- |
| Clinical shadowing | Observe during actual patient care. Stand back, do not interact unless invited. Note workflow patterns, software interactions, workarounds | 2-4 hours per shift | Understanding real clinical workflows and where software fits (or fails to fit) |
| Simulation-based usability testing | Test in a simulation lab or mock clinical environment with realistic patient scenarios and clinical data | 30-45 minutes | Testing safety-critical workflows (medication ordering, alert response) without risking patients |
| Rapid usability testing | 15-20 minute sessions between patients or during quiet periods. 3-5 focused tasks maximum | 15-20 minutes | Testing specific features or screens when longer sessions are impossible |
| Think-aloud with clinicians | Modified protocol: clinicians narrate while performing realistic clinical tasks on the software | 30-45 minutes | Understanding clinical reasoning during software interaction |
| Contextual inquiry at the bedside | Observe clinicians using software during actual patient care, with clinician and patient consent | 1-2 hours | Seeing how software is used at the point of care, including workarounds |
| Shift-end interviews | 10-15 minute interviews immediately after a shift, when experiences are fresh | 10-15 minutes | Capturing friction points from the entire shift in a brief window |
| Heuristic evaluation (clinical) | Expert evaluation against clinical usability standards (ONC guidelines, NIST usability frameworks) | No clinician time required | Quick assessment of clinical interfaces against established standards |
| Clinical diary studies | Clinicians log software frustrations and workarounds via brief entries (text or voice) during or after shifts | 1-2 weeks | Tracking cumulative friction across multiple shifts and patient encounters |
| Eye tracking in clinical simulation | Track gaze patterns on clinical displays (monitors, dashboards, alert pop-ups) during simulated tasks | 30-45 minutes | Understanding information hierarchy on dense clinical screens |
| Alert response testing | Present clinical alerts (drug interactions, critical results, dosing warnings) and measure response time, comprehension, and action | 15-30 minutes | Evaluating alert design, reducing alert fatigue |

How to conduct clinical shadowing

Clinical shadowing (observing clinicians during actual patient care) is the highest-value clinical UX method because it reveals the real workflow that no simulation can replicate.

Getting access

Step 1: Champion identification. Find a clinical informaticist, CMIO (Chief Medical Information Officer), or nursing informatics lead who sponsors your research. Without a clinical champion, you will not get floor access.

Step 2: Compliance approval. Complete:

  • Hospital IRB or quality improvement determination (is this research or QI?)
  • HIPAA training certification
  • Facility-specific orientation and badge
  • Background check (required by many facilities)
  • Flu vaccination and health screening (required for patient care areas)

Step 3: Unit coordination. Work with the charge nurse or unit manager to schedule observation during appropriate times. Avoid: active codes, procedures, shift change chaos, and times when the unit is short-staffed.

Timeline: 4-8 weeks from first contact to first observation day.

During shadowing (2-4 hours)

What to observe:

| Focus area | What to watch for | How to document |
| --- | --- | --- |
| Software interaction frequency | How many times does the clinician interact with the EHR per patient? How long is each interaction? | Tally marks per interaction, approximate duration |
| Multitasking patterns | Does the clinician document while caring for the patient, or batch documentation later? | Note documentation timing relative to patient encounters |
| Workarounds | Paper notes, sticky notes on monitors, personal spreadsheets, or verbal relays that substitute for software functions | Photograph (with permission) or sketch workarounds |
| Interruption impact | What happens when a clinician is interrupted mid-task in the software? Can they resume or do they restart? | Count interruptions and note recovery behavior |
| Cross-system switching | How many software systems does the clinician use during a single patient encounter? | List systems and note switching frequency |
| Physical environment | Screen placement, lighting, noise, number of monitors, shared vs. individual workstations | Sketch the workstation layout |
| Alert response | How does the clinician respond to alerts? Read and act, read and override, or dismiss without reading? | Categorize alert responses: act / override / dismiss |
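
A lightweight way to keep these tallies consistent across multiple observers is a shared logging structure. A minimal Python sketch; the `ShadowingLog` class and its field names are illustrative, not a standard observation instrument:

```python
from dataclasses import dataclass, field

@dataclass
class ShadowingLog:
    """Tally sheet for one clinician-shift (hypothetical structure)."""
    ehr_interactions: int = 0
    interruptions: int = 0
    resumed: int = 0                        # interrupted tasks resumed without restarting
    systems_used: set = field(default_factory=set)

    def log_interaction(self, system: str):
        """Record one software interaction and the system it touched."""
        self.ehr_interactions += 1
        self.systems_used.add(system)

    def log_interruption(self, resumed_task: bool):
        """Record an interruption and whether the clinician recovered the task."""
        self.interruptions += 1
        if resumed_task:
            self.resumed += 1

    def recovery_rate(self) -> float:
        """Share of interruptions after which the task was resumed (1.0 if none observed)."""
        return self.resumed / self.interruptions if self.interruptions else 1.0
```

Keeping one log per clinician-shift grounds the post-shadowing debrief in counts (interactions, interruptions, recovery rate, system switches) rather than impressions.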

Do not:

  • Distract the clinician during patient care
  • Touch any equipment or software
  • Look at patient-identifiable information unless your IRB protocol permits it
  • Photograph patient information (even accidentally)
  • Interfere with any clinical workflow

Post-shadowing debrief (15 minutes)

After the observation, ask 3-5 focused questions:

  • “I noticed you [specific observation]. Can you explain what was happening?”
  • “Where in your workflow do you feel the software slows you down the most?”
  • “Are there any tasks where you have built a workaround because the software does not support what you need?”

How to test EHR usability

EHR usability testing is the most common and most impactful clinical UX research. Clinicians spend an average of 1-2 hours per day on EHR documentation (Annals of Internal Medicine), and usability improvements can reduce this by 25-40%.

EHR testing scenarios

| Scenario | What it tests | Safety criticality | Key metrics |
| --- | --- | --- | --- |
| “Place an order for [medication] for this patient” | CPOE workflow, drug search, dose selection, interaction alerts | High (medication errors are the #1 EHR safety concern) | Time to complete, errors, alert response |
| “Document this patient encounter” | Note entry, template usage, problem list update, orders | Medium (documentation quality affects care continuity) | Time, completeness, cognitive load (NASA-TLX) |
| “Review this patient’s lab results and take appropriate action” | Results review, critical result identification, follow-up ordering | High (missed critical results are a patient safety issue) | Time to identify critical result, action accuracy |
| “Respond to this clinical alert” | Alert design, comprehension, appropriate response | High (alert fatigue is a major patient safety concern) | Response time, override rate, comprehension accuracy |
| “Hand off this patient’s care to the incoming team” | Handoff documentation, summary generation, pending orders | High (handoff failures are a leading cause of adverse events) | Information completeness, time, receiving team comprehension |
| “Find the information you need to answer a patient’s question about their medication” | Information retrieval, medication list navigation, patient-facing information | Medium | Time to find, information accuracy |

Alert fatigue testing

Clinical alert fatigue, where clinicians override or dismiss alerts because there are too many, is one of the most critical clinical UX problems. Research shows that 72-96% of clinical alerts are overridden (JAMIA), meaning the alert system has effectively stopped functioning.

Alert testing protocol:

  1. Present 15-20 alerts during a simulated clinical workflow (mix of clinically significant and non-significant)
  2. Measure: override rate, time to respond, comprehension of alert content, and action taken
  3. Compare: do clinicians treat high-severity alerts differently from low-severity? If not, the severity system has failed
  4. Ask: “Which of these alerts would you actually change your clinical decision for?” The answer reveals which alerts have clinical value and which are noise

Alert fatigue metrics:

| Metric | Current state (industry average) | Target after redesign |
| --- | --- | --- |
| Alert override rate | 72-96% | <50% (indicates alerts are more clinically relevant) |
| Time to dismiss non-critical alert | <2 seconds (dismissed without reading) | >5 seconds (indicates reading before acting) |
| Critical alert response accuracy | 60-70% appropriate response | >90% appropriate response |
| Alerts per prescriber per day | 50-100+ | <20 (after rationalization) |
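
The protocol’s measurements reduce to simple arithmetic over the session log. A hedged Python sketch, assuming each alert response is recorded as a dict with `severity`, `action`, and `seconds` keys (all hypothetical field names from a simulated session, not a real EHR audit log schema):

```python
def alert_metrics(responses):
    """Compute alert-fatigue metrics from simulated-session logs.

    Each response dict is assumed to carry:
      severity: "high" | "low"
      action:   "act" | "override" | "dismiss"
      seconds:  time from alert display to response
    """
    # Override rate: any response other than acting on the alert
    overridden = [r for r in responses if r["action"] in ("override", "dismiss")]
    override_rate = len(overridden) / len(responses)

    # Typical (upper-median) time to handle non-critical alerts;
    # under ~2 s suggests dismissal without reading
    low_times = sorted(r["seconds"] for r in responses if r["severity"] == "low")
    median_low = low_times[len(low_times) // 2] if low_times else None

    # Do clinicians treat severity tiers differently? If the rates match,
    # the severity system has failed.
    by_sev = {}
    for sev in ("high", "low"):
        tier = [r for r in responses if r["severity"] == sev]
        if tier:
            by_sev[sev] = sum(r["action"] != "act" for r in tier) / len(tier)

    return {"override_rate": override_rate,
            "median_low_dismiss_s": median_low,
            "override_by_severity": by_sev}
```

Comparing `override_by_severity["high"]` against `["low"]` operationalizes step 3 of the protocol: a severity system is only working if high-severity alerts are overridden measurably less often.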

How to adapt for the FDA digital health framework

The FDA’s Digital Health Center of Excellence provides guidance that affects clinical software UX research. Key frameworks:

FDA digital health categories relevant to clinical UX

| Category | Examples | UX research implication |
| --- | --- | --- |
| Software as a Medical Device (SaMD) | Clinical decision support that provides diagnoses or treatment recommendations | Requires formative and summative usability testing per IEC 62366-1 |
| Clinical Decision Support (CDS) | Drug interaction checkers, dosing calculators, diagnostic aids | FDA evaluates based on risk: higher-risk CDS requires more rigorous usability evidence |
| Digital therapeutics | Software that delivers evidence-based therapeutic interventions | Requires clinical evidence of efficacy, which includes usability as a component of treatment delivery |
| Remote patient monitoring | Tools clinicians use to monitor patients outside the facility | Usability research must cover both the clinician dashboard and the patient device |
| EHR modules | ONC-certified EHR features (e-prescribing, clinical notes, lab review) | Must meet ONC usability requirements for certification |

Aligning UX research with FDA expectations

When your clinical software may be classified as SaMD or regulated CDS, align your research with FDA human factors expectations:

  • Conduct use-related risk analysis (URRA) before designing test scenarios
  • Document all use errors, close calls, and difficulties with root cause analysis
  • Test with representative users in representative use environments
  • Maintain traceability from observed problem to design change to verification
  • Distinguish between formative (iterative) and summative (validation) studies

This alignment is not just regulatory compliance. It produces better research because the FDA framework forces systematic attention to safety-critical interactions.

How to recruit clinicians for research

Role segmentation

| Role | Availability | Best session format | Incentive range |
| --- | --- | --- | --- |
| Physician (attending) | Extremely limited. 15-30 min max | Rapid testing between patients, shift-end interview | $300-500/hr |
| Resident / fellow | Limited but more flexible than attendings | 30-45 min during academic time or between rotations | $150-250/hr |
| Nurse (bedside) | Limited during shift. Available before/after shift | Rapid testing during quiet periods, diary study during shift | $125-200/hr |
| Nurse practitioner / PA | Moderate availability | 30-45 min sessions, shift-end interviews | $175-275/hr |
| Pharmacist | Moderate availability (clinical pharmacy) | 30-45 min sessions, medication workflow observation | $150-250/hr |
| Clinical informaticist | Most available of clinical roles | 45-60 min sessions, extended observation | $150-250/hr |
| Medical assistant / tech | Available during breaks or slow periods | 15-20 min rapid sessions | $75-125/hr |

Where to find clinician participants

  • Hospital partnerships. The clinical informatics team or CMIO office is your entry point. They can identify willing clinicians across departments
  • Clinical societies. AMA, ANA (nursing), ASHP (pharmacy), AMIA (informatics), specialty-specific societies
  • LinkedIn targeting. Search by clinical title + institution type + specialty
  • CleverX verified B2B panels. Pre-screened healthcare professionals filtered by role, specialty, and EHR experience
  • Medical conferences. HIMSS, AMIA Annual Symposium, clinical specialty conferences

Scheduling considerations

  • Never schedule during patient care hours without explicit unit approval and patient safety assurance
  • Academic medical centers have protected academic time (usually half a day per week) when physicians are available for non-clinical activities
  • Nursing shifts have a 30-minute overlap during handoff that, with charge nurse approval, can accommodate rapid research
  • Early morning (6-7am before clinic starts) and late afternoon (4-5pm after clinic) are the most common clinician research windows
  • Over-recruit by 40%. Clinical emergencies cause last-minute cancellations at higher rates than any other B2B segment

Screening questions

  1. What is your primary clinical role and setting? (Open text. Identifies role, specialty, and care environment)
  2. Which clinical software systems do you use daily? (Open text. Filters for EHR experience: Epic, Cerner, Meditech, Allscripts, etc.)
  3. How many hours per day do you spend on clinical documentation? (Range. Indicates documentation burden)
  4. Describe a recent moment when your clinical software frustrated you. (Articulation check)
  5. How many years in clinical practice? (Range. Segments by experience)
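
If screener responses are collected as structured data, the disqualification logic can be automated before manual review. An illustrative Python sketch; the field names, EHR list, and thresholds (for example, a 15-word articulation minimum) are assumptions to adapt to your study, not established screening criteria:

```python
def passes_screen(answer: dict) -> bool:
    """First-pass screen for a clinician respondent (hypothetical fields/thresholds)."""
    known_ehrs = {"epic", "cerner", "meditech", "allscripts"}

    # Q2: filter for hands-on experience with a major EHR
    uses_known_ehr = any(e in answer["software"].lower() for e in known_ehrs)

    # Q4 articulation check: a concrete frustration story should run
    # longer than a few words (threshold is an assumption)
    articulate = len(answer["frustration_story"].split()) >= 15

    return (uses_known_ehr
            and answer["documentation_hours_per_day"] >= 1   # Q3: real documentation burden
            and answer["years_in_practice"] >= 2             # Q5: past the novice period
            and articulate)
```

Automating only the objective criteria (Q2, Q3, Q5) and flagging the articulation check for human review is a reasonable middle ground, since word count is a crude proxy for the quality of a frustration story.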

Clinical UX research metrics

| Metric | What it measures | How to capture | Target |
| --- | --- | --- | --- |
| Documentation time per encounter | How long clinicians spend documenting in the EHR per patient | Time observation or EHR log data | Decreasing after design changes (25-40% reduction is achievable) |
| Order entry error rate | Mistakes during medication ordering, lab ordering, referral placement | Simulation testing with known correct orders | <2% for medication orders |
| Alert override rate | Percentage of clinical alerts dismissed without action | EHR audit log data | <50% (down from 72-96% baseline) |
| Time to critical information | How quickly a clinician finds a specific piece of clinical data | Timed information retrieval task | <30 seconds for common lookups |
| Clinical workflow interruption recovery | Can clinicians resume a task after an interruption without restarting? | Observation during simulated interrupted workflows | >85% successful resumption |
| System Usability Scale (SUS) | Overall usability perception | Post-session SUS questionnaire | >68 (industry average for clinical software is 45-60) |
| NASA-TLX cognitive load | Mental workload during clinical tasks | Post-task NASA-TLX assessment | Decreasing between design iterations |
| Clinician satisfaction (burnout proxy) | Overall satisfaction with clinical tools | Annual survey, correlated with burnout measures | Increasing year over year |
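
The SUS metric uses Brooke’s standard scoring: odd-numbered (positively worded) items contribute `score - 1`, even-numbered items contribute `5 - score`, and the 0-40 raw total is multiplied by 2.5 to reach the 0-100 scale. In Python:

```python
def sus_score(item_scores):
    """System Usability Scale score from ten 1-5 responses (standard Brooke scoring)."""
    if len(item_scores) != 10 or not all(1 <= s <= 5 for s in item_scores):
        raise ValueError("SUS needs ten responses on a 1-5 scale")
    total = sum(s - 1 if i % 2 == 0 else 5 - s   # items 1,3,5,7,9 vs. 2,4,6,8,10
                for i, s in enumerate(item_scores))
    return total * 2.5                            # scale 0-40 raw total to 0-100
```

A respondent answering 5 on every positive item and 1 on every negative item scores 100, and all-3s (neutral) scores 50, so the >68 target in the table sits well above the 45-60 range typical of clinical software.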

Common clinical UX research findings

Documentation is the #1 pain point. In every clinical UX study, documentation burden dominates. Clinicians spend more time on the EHR than with patients (Annals of Internal Medicine). Any research finding that reduces documentation time has immediate organizational buy-in.

Workarounds are everywhere. Clinicians build elaborate workarounds: paper lists taped to workstations, personal spreadsheets of patient data, verbal handoffs that bypass the software, and copy-paste documentation templates that propagate outdated information. Each workaround is a product gap.

Alert fatigue is universal. Clinical alert systems have cried wolf so many times that clinicians override 72-96% of alerts without reading them. The most dangerous consequence: genuinely critical alerts get dismissed along with the noise.

Inter-system friction wastes hours per day. Clinicians switch between 5-10 clinical systems per shift (EHR, PACS, pharmacy, lab, messaging, scheduling). The switching cost is not just time. It is cognitive load and error risk at every transition.

Mobile is coming but not there yet. Clinicians want mobile access for rounding, but current mobile EHR experiences are poor. Research reveals specific mobile needs: read-only access for reference, quick documentation capture, and communication with the care team.

Frequently asked questions

How is clinical UX research different from patient experience research?

Patient experience research studies how patients interact with healthcare products (portals, apps, telehealth). Clinical UX research studies how clinicians interact with clinical software during care delivery (EHR, CPOE, CDS). Different users, different environments, different success criteria. Patient research asks “Can patients manage their health?” Clinical research asks “Can clinicians deliver care efficiently and safely?”

Do you need IRB approval for clinical UX research?

If your research involves observing patient care (even indirectly) or if findings will be used for regulatory submission, IRB review is required or strongly recommended. Many clinical UX studies qualify as quality improvement (QI) rather than research, which may have a lighter review process. Submit a QI vs. research determination to your institution’s IRB before starting. See our pharma compliance guide for detailed IRB guidance.

How do you research clinical software without disrupting patient care?

Three approaches: (1) Observe during actual care from a non-disruptive position (shadowing). (2) Test in a clinical simulation lab that replicates the environment without real patients. (3) Conduct rapid sessions during scheduled downtimes, breaks, or academic time. Never ask clinicians to pause patient care for your research. The research must fit around the clinical workflow, not the other way around.

What is the most impactful clinical UX research investment?

EHR documentation workflow research. It affects every clinician, every day, for every patient. Improvements in documentation efficiency (reducing clicks, improving templates, enabling voice input, streamlining note generation) produce measurable time savings that multiply across thousands of encounters. If you can only do one clinical UX study, study documentation.

How do you measure the ROI of clinical UX research?

Measure before and after: documentation time per encounter (minutes saved x encounters per day x days per year = hours returned to patient care), order entry errors (errors prevented x cost per error), alert appropriateness (override rate reduction), and clinician satisfaction scores. Healthcare organizations respond to quantified outcomes. “We reduced documentation time by 8 minutes per encounter across 200 physicians” translates to approximately 5,000 hours of physician time returned to patient care annually.
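
The before/after arithmetic can be kept honest with a small helper implementing the time-savings formula above. A sketch only: the function name is invented, the inputs are study-specific assumptions, and the result is only as defensible as your measured minutes-saved and encounter-volume figures:

```python
def documentation_roi_hours(minutes_saved_per_encounter: float,
                            encounters_per_day: float,
                            clinic_days_per_year: float,
                            clinician_count: int) -> float:
    """Annual clinician hours returned to patient care.

    Implements: minutes saved x encounters/day x days/year x clinicians, in hours.
    All four inputs must come from your own before/after measurements.
    """
    minutes = (minutes_saved_per_encounter * encounters_per_day
               * clinic_days_per_year * clinician_count)
    return minutes / 60
```

Running the same function per-clinician and org-wide makes the sensitivity obvious: the headline number scales linearly with every assumption, so report the encounter-volume and clinic-day figures alongside it.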