Industrial software user research: a complete guide for product and UX teams

How to conduct user research for industrial and manufacturing software. Covers methods for MES, SCADA, ERP, and plant floor systems. Includes factory floor observation, shift-based research, operator testing, and recruiting manufacturing professionals.

What is industrial software user research?

Industrial software user research is the practice of studying how factory operators, process engineers, plant managers, maintenance technicians, and quality inspectors interact with manufacturing execution systems (MES), supervisory control and data acquisition (SCADA), enterprise resource planning (ERP), computerized maintenance management systems (CMMS), and other industrial software to improve usability, reduce errors, and increase operational efficiency in production environments.

It applies standard user research methods to a product category where the users wear safety goggles and gloves, the environment is loud and physically demanding, the screens may be viewed from 10 feet away, the consequences of a UI error can shut down a production line, and the software often runs on hardware that is 10-15 years old.

Industrial software research is fundamentally different from researching office software, consumer apps, or even other B2B products. The factory floor is not a desk. The operator’s context (shift-based work, physical constraints, noise, safety requirements, and zero tolerance for downtime) shapes every interaction in ways that lab-based or remote usability testing cannot replicate.

Key takeaways

  • Industrial software research must happen on the factory floor, not in a lab. The physical environment (noise, lighting, PPE, vibration, distance from screen) directly affects how users interact with software in ways that remote or lab testing cannot capture
  • Operators, engineers, and managers are three distinct user populations with fundamentally different workflows, tools, and success criteria. Research must segment by role, not by “manufacturing user”
  • Shift-based work creates research scheduling constraints that standard B2B research does not have. Research must accommodate day/night/weekend shifts and the handoff between them
  • Error consequences in industrial software are physical, not digital. A misread alarm, a misconfigured process parameter, or a missed maintenance alert can cause equipment damage, product defects, or safety incidents. Research must test error detection and recovery with the seriousness these stakes demand
  • Legacy system migration is the most common industrial UX research context. Users transitioning from a 15-year-old system to a modern platform carry deep muscle memory and resistance that research must understand and account for
  • Research consistently reveals 20-40% efficiency gains from dashboard reorganization, alarm rationalization, and navigation simplification in industrial software

Frequently asked questions

How is industrial software research different from other B2B research?

Six factors:

  • Physical environment: users wear PPE (gloves, goggles, hard hats) that affects touch interaction, screen viewing is often from standing positions at varying distances, and ambient noise levels can exceed 85 dB, making think-aloud protocols difficult
  • Hardware constraints: industrial software runs on ruggedized tablets, panel PCs, and legacy terminals, not modern laptops with retina screens
  • Shift-based operations: research must cover day, night, and weekend shifts because usage patterns differ significantly across shifts
  • Error stakes: a UI error in office software causes frustration; a UI error in industrial software can cause equipment damage or safety incidents
  • Legacy dependency: users often have 10-20 years of muscle memory with the previous system
  • Union and safety regulations: research access to factory floors may require union approval, safety training, and PPE compliance for researchers

What research methods work for industrial software?

On-site contextual inquiry during live operations (the highest-value method), adapted usability testing on ruggedized hardware with realistic production data, shift handoff observation, alarm response testing, interviews during shift breaks, heuristic evaluation against industrial display standards (ISA-101, EEMUA 191), system log analysis for error and alarm patterns, and NASA-TLX cognitive load assessment for complex operator tasks.

How do you get access to factory floors for research?

Partner with the plant’s operations or continuous improvement team. Frame the research as “operational efficiency improvement” not “UX research.” Provide evidence of ROI (20-40% efficiency gains from similar studies). Complete all required safety training and PPE certification before the first visit. Obtain union approval if applicable. Start with a single-shift pilot to build trust before expanding to multi-shift research.

How many participants do you need for industrial software research?

5-8 per role per shift for qualitative methods. Since you typically need to research at least 2 roles (operators and engineers) across at least 2 shifts (day and night), plan for 20-32 participants per study. For single-role, single-shift focused studies, 5-8 is sufficient. For alarm rationalization or dashboard reorganization projects, include data from system logs (no participant limit) alongside qualitative sessions.

Do you need safety training to conduct industrial research?

Yes. Most manufacturing facilities require all visitors to complete a site-specific safety orientation (30 minutes to 2 hours), wear appropriate PPE (hard hat, safety glasses, steel-toe boots, high-visibility vest, hearing protection as needed), and be escorted by a site employee. Some facilities require additional training for specific areas (confined spaces, chemical handling zones, clean rooms). Plan this into your research timeline. Your first visit may be primarily safety orientation.

How do you test industrial software when the production line cannot stop?

You do not stop the line. Research methods must work around continuous operations: observe during normal production (contextual inquiry), test during scheduled downtime or changeovers (usability testing), interview during shift breaks (10-15 minute sessions, not 45-minute sessions), and use simulation environments that mirror the production system for extended testing. Never ask a facility to stop or slow production for research. Frame your presence as non-disruptive.

Which research methods work for industrial software?

| Method | Best for | Industrial adaptation |
| --- | --- | --- |
| On-site contextual inquiry | Observing real operations, alarm response, shift workflows | Wear PPE. Observe from a safe distance. Do not distract operators during critical tasks. Record notes, not video (many plants prohibit cameras) |
| Adapted usability testing | Testing new interfaces, dashboard layouts, alarm configurations | Use ruggedized hardware matching the target deployment. Test with gloves on. Test at realistic viewing distances (3-10 feet). Use production-realistic data volumes |
| Shift handoff observation | Understanding information transfer between shifts | Observe the 15-30 minute overlap when one shift ends and another begins. This is where critical information is lost or miscommunicated |
| Alarm response testing | Evaluating alarm prioritization, display clarity, and response workflows | Simulate alarm cascades in a test environment. Measure time to acknowledge, diagnose, and respond. Compare against ISA-18.2 alarm management standards |
| Interviews (shift-break format) | Understanding pain points, workarounds, and unmet needs | Conduct during shift breaks (10-15 minutes, not 45). Bring the questions to the break room, not the operator to your conference room |
| Heuristic evaluation | Auditing existing interfaces against industrial standards | Evaluate against ISA-101 (HMI design), EEMUA 191 (alarm management), and ISA-18.2 (alarm systems). These are the industrial equivalents of Nielsen’s heuristics |
| System log analysis | Identifying error patterns, alarm floods, and usage patterns | Analyze historian data, alarm logs, and operator action logs. This data exists in every industrial system but is rarely analyzed for UX insights |
| NASA-TLX cognitive load assessment | Measuring mental workload during complex operator tasks | Administer after complex tasks (batch changeover, alarm response, quality inspection). Compare scores across interface variants |
| Diary studies (adapted) | Tracking shift-to-shift experiences over 1-2 weeks | Simple paper forms at the workstation: “Biggest frustration this shift?” “Time wasted on software issues?” Paper works better than apps in industrial settings |
| Surveys (shift-end format) | Measuring satisfaction and priorities at scale | 3-5 questions administered at shift end via paper or kiosk. Keep under 2 minutes. Operators will not complete long surveys after an 8-12 hour shift |

How to conduct factory floor contextual inquiry

Factory floor observation is the highest-value industrial research method. No other method reveals what actually happens during production.

Pre-visit preparation

Safety requirements:

  • Complete site safety orientation (schedule 1-2 weeks in advance)
  • Obtain and fit-test all required PPE
  • Review restricted areas and photography/recording policies
  • Identify a site escort who will accompany you during observation
  • Understand emergency procedures (evacuation routes, muster points, alarm signals)

Research preparation:

  • Review the production schedule to understand what operations will be running during your visit
  • Obtain system screenshots or recordings from the operations team so you recognize the interfaces
  • Prepare an observation guide focused on 3-5 specific workflows you want to observe
  • Bring paper note-taking materials (not a laptop, which is awkward to use while standing in PPE)

During observation (2-4 hours per shift)

First 30 minutes: Environment mapping

  • Walk the floor with your escort. Note workstation locations, screen positions, lighting conditions, noise levels, and traffic patterns
  • Identify where operators stand or sit relative to their screens (viewing distance affects every design decision)
  • Note which screens are shared vs. individual, touchscreen vs. keyboard/mouse, and fixed vs. mobile

Next 2-3 hours: Workflow observation

| What to observe | Why it matters | How to capture |
| --- | --- | --- |
| Operator screen scanning patterns | Reveals what information operators check routinely vs. on-demand | Note the sequence of screens/pages checked and frequency |
| Alarm response behavior | Reveals whether alarms are acted on, acknowledged-and-ignored, or missed entirely | Note alarm type, operator response, time to act |
| Workaround behaviors | Reveals where the software fails and operators compensate | Watch for: paper notes taped to screens, Excel spreadsheets running alongside the main system, manual calculations |
| Inter-system navigation | Reveals how many systems operators switch between for a single task | Count system switches per task and note what triggers each switch |
| Communication patterns | Reveals information flow between operators, engineers, and supervisors | Note when operators call for help, who they call, and what information they need |
| Physical interaction challenges | Reveals ergonomic and environmental UX issues | Note: gloved touch accuracy, screen visibility in different lighting, noise interference with voice communication |

Final 30 minutes: Quick debrief

  • Ask 3-5 focused questions based on what you observed: “I noticed you switched to the spreadsheet during that changeover. What were you looking for that the system does not show?”
  • Keep it brief. The operator has work to return to

Multi-shift observation

Observe at least 2 different shifts. Production patterns, staffing levels, and operator experience vary significantly between day and night shifts. Night shifts often have fewer operators, less supervisory oversight, and different alarm patterns. The software experience on a night shift can be fundamentally different from a day shift.

How to test industrial software interfaces

Hardware-authentic testing

Never test industrial software on a modern laptop in a conference room. The results do not translate to the factory floor.

Test environment requirements:

| Factor | Conference room test | Factory-authentic test |
| --- | --- | --- |
| Screen size and resolution | 15” laptop, retina display | 19-24” panel PC, 1080p or lower |
| Input method | Mouse and keyboard, clean hands | Touchscreen with gloves, membrane keyboard |
| Viewing distance | 2 feet (seated at desk) | 3-10 feet (standing at workstation) |
| Lighting | Office fluorescent | Variable: bright overhead + screen glare, or dim monitoring room |
| Noise | Quiet office | 75-90 dB ambient noise |
| Distraction level | Controlled, focused | Frequent interruptions from production events |
| Duration | 45-minute session | 10-15 minutes between production tasks |

Glove testing

If operators wear gloves (common in manufacturing, chemical, food processing), every touch interaction must be tested with gloves:

  • Touch target size: minimum 44px for bare fingers becomes 60-80px for gloved hands
  • Swipe and gesture accuracy: degrades significantly with work gloves
  • Multi-touch: often unusable with industrial gloves
  • Stylus alternatives: some facilities use industrial styluses as a workaround
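The pixel figures above only mean something relative to the physical density of the deployed display. A minimal sketch converting target sizes to millimeters, assuming a hypothetical 21.5-inch 1080p panel PC at roughly 102 PPI (the hardware specs are illustrative assumptions, not figures from this guide):

```python
# Sketch: convert touch-target sizes in pixels to physical millimeters
# for a given display density, so glove-friendly targets can be checked
# against the hardware actually deployed. The 44 px bare-finger and
# 60-80 px gloved figures come from the guidance above.

MM_PER_INCH = 25.4

def target_size_mm(pixels: int, ppi: float) -> float:
    """Physical edge length of a square touch target on a display of given PPI."""
    return pixels / ppi * MM_PER_INCH

# A 21.5" 1920x1080 panel PC is roughly 102 PPI (assumed hardware).
ppi = 102
for label, px in [("bare finger minimum", 44),
                  ("gloved minimum", 60),
                  ("gloved comfortable", 80)]:
    print(f"{label}: {px} px = {target_size_mm(px, ppi):.1f} mm at {ppi} PPI")
```

On lower-density panels the same pixel count yields a physically larger target, which is why glove testing on the actual deployment hardware matters more than any fixed pixel rule.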

Viewing distance testing

Test every screen at the actual viewing distance operators will use:

  • Control room operators: 2-4 feet (seated, multiple screens)
  • Floor operators: 3-6 feet (standing, single screen)
  • Overview displays: 6-15 feet (wall-mounted, glanced at while passing)
  • Mobile/tablet: 12-18 inches (handheld, but often in one hand while the other operates equipment)

Font sizes, color differentiation, and alarm indicators that work at 2 feet can be completely unreadable at 6 feet. Test at actual distance.
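One way to turn those distances into rough minimum type sizes is the visual-angle rule of thumb: text should subtend somewhere around 20 minutes of arc at the reader's eye (exact targets vary by standard and task criticality). A hedged sketch of that geometry, not an ISA-101 calculation:

```python
import math

MM_PER_FOOT = 304.8

def min_char_height_mm(distance_ft: float, arc_minutes: float = 20.0) -> float:
    """Character height that subtends `arc_minutes` of visual angle
    at the given viewing distance (rule-of-thumb legibility check)."""
    angle_rad = math.radians(arc_minutes / 60.0)
    return 2 * distance_ft * MM_PER_FOOT * math.tan(angle_rad / 2)

for dist in (2, 6, 15):
    print(f"{dist} ft: ~{min_char_height_mm(dist):.0f} mm character height")
```

At 20 arc minutes this works out to roughly 4 mm characters at 2 feet but about 11 mm at 6 feet and 27 mm at 15 feet, which is why a control-room layout cannot simply be reused on a wall-mounted overview display.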

How to research alarm management

Alarm management is the most critical UX challenge in industrial software. EEMUA 191 recommends a maximum of one alarm per operator per 10 minutes during normal operations. Many industrial systems exceed 10 alarms per minute during upset conditions, creating alarm floods where operators cannot distinguish critical from routine.

Alarm research protocol

System log analysis (before any testing):

  • Pull 3-6 months of alarm history from the system historian
  • Identify: total alarm count per shift, peak alarm rates, most frequent alarms, alarms that are always acknowledged but never acted on (nuisance alarms), and alarms that precede safety incidents
  • This data alone often reveals that 60-80% of alarms are nuisance alarms that operators have learned to ignore
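The log pull described above can be started with a few lines of scripting. A minimal sketch, assuming the historian export can be reduced to (timestamp, alarm tag) pairs; the tag names and shift length are illustrative assumptions:

```python
from collections import Counter
from datetime import datetime, timedelta

def nuisance_candidates(events, top_n=10):
    """Rank alarm tags by activation count. The heaviest repeaters are
    the usual nuisance-alarm suspects to review with operators.
    `events` is a list of (timestamp, tag) tuples from the historian export."""
    counts = Counter(tag for _, tag in events)
    total = sum(counts.values())
    return [(tag, n, n / total) for tag, n in counts.most_common(top_n)]

def alarm_rate_per_10min(events, shift_hours=8):
    """Average alarms per 10-minute window over a shift (EEMUA 191 target: <1)."""
    windows = shift_hours * 6
    return len(events) / windows

# Illustrative data: one chattering tag plus two occasional alarms
t0 = datetime(2024, 1, 1, 6, 0)
events = [(t0 + timedelta(minutes=i), "PT-101-HI") for i in range(120)]
events += [(t0 + timedelta(hours=2), "TT-205-LO"), (t0 + timedelta(hours=5), "FT-310-HI")]
print(nuisance_candidates(events, top_n=3))
print(f"rate: {alarm_rate_per_10min(events):.1f} alarms per 10 min")
```

Even this crude ranking typically surfaces a handful of tags responsible for most of the alarm load, which focuses the follow-up operator interviews.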

Alarm response observation:

  • During contextual inquiry, observe how operators respond to alarms: acknowledge immediately, read then acknowledge, investigate then acknowledge, or acknowledge without reading
  • Measure: time from alarm to acknowledgment, time from acknowledgment to corrective action, alarms that are acknowledged but not acted on
  • Note: alarm shelving (suppressing alarms temporarily) and permanent alarm suppression (alarms disabled because they are not useful)

Alarm prioritization testing:

  • Present operators with a simulated alarm cascade (10-20 alarms in 2 minutes) and observe their triage strategy
  • Ask: “Which alarm do you respond to first? How do you decide priority?”
  • Compare their triage strategy to the system’s priority assignment. Mismatches indicate that the alarm priority scheme does not match operator mental models

Alarm management metrics

| Metric | Industrial standard | What it reveals |
| --- | --- | --- |
| Alarm rate (normal operations) | <1 per 10 minutes (EEMUA 191) | Whether the alarm system is manageable |
| Nuisance alarm percentage | <5% of total alarms | How much of the alarm load is noise |
| Standing alarm count | <10 at any time | Whether unresolved alarms accumulate |
| Time to acknowledge | <5 minutes for non-critical | Whether operators are overwhelmed |
| Alarm flood frequency | <1 per week | Whether upset conditions create unmanageable cascades |
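Time to acknowledge and flood frequency can be computed directly from exported activation/acknowledgment timestamps. A sketch, assuming a flood is 10 or more alarms inside a rolling 10-minute window (a commonly used ISA-18.2-style definition); the example data is illustrative:

```python
from datetime import datetime, timedelta
from statistics import median

def ack_delays_minutes(records):
    """Median and worst time-to-acknowledge, in minutes, from
    (activated, acknowledged) timestamp pairs; unacknowledged alarms skipped."""
    delays = [(ack - act).total_seconds() / 60 for act, ack in records if ack]
    return median(delays), max(delays)

def count_floods(activations, threshold=10, window=timedelta(minutes=10)):
    """Count flood onsets: moments when a rolling window first reaches
    `threshold` alarms (re-entry after a lull is counted again)."""
    times, floods, i = sorted(activations), 0, 0
    for j, t in enumerate(times):
        while t - times[i] > window:
            i += 1
        if j - i + 1 == threshold:
            floods += 1
    return floods

# Illustrative data: three acknowledged alarms and one 12-alarm burst
t0 = datetime(2024, 1, 1, 6, 0)
pairs = [(t0, t0 + timedelta(minutes=2)),
         (t0 + timedelta(hours=1), t0 + timedelta(hours=1, minutes=4)),
         (t0 + timedelta(hours=2), t0 + timedelta(hours=2, minutes=6))]
burst = [t0 + timedelta(seconds=20 * i) for i in range(12)]
med, worst = ack_delays_minutes(pairs)
print(f"median ack {med:.0f} min, worst {worst:.0f} min, floods: {count_floods(burst)}")
```

Running this weekly over the live alarm export gives a trend line for the table's targets rather than a one-off snapshot.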

How to research shift handoffs

The shift handoff (the 15-30 minute overlap when one shift ends and another begins) is where critical production information is transferred, or lost. Industrial software often plays a key role in this handoff, and research reveals whether it supports or hinders the transfer.

Shift handoff observation protocol

Observe 3-5 handoffs across different shifts and days. For each:

  • What information does the outgoing operator share? (Verbally, via written log, via software)
  • Does the incoming operator review any software screens during handoff?
  • What questions does the incoming operator ask that the software could have answered?
  • How long does the handoff take? What percentage involves the software?
  • Are there paper-based handoff practices that compensate for software gaps?

Common handoff findings

  • Verbal-heavy handoffs: Operators rely on conversation rather than software because the software does not present a clear shift summary. Designing a “shift overview” screen that answers the incoming operator’s top 5 questions reduces verbal handoff time and information loss
  • Paper log persistence: Many plants maintain paper shift logs alongside digital systems because the software does not support structured handoff notes. This duplication is a research finding, not a user preference
  • Information decay: Critical context from 2-3 shifts ago is lost because the software only shows current state, not recent history. Operators cannot tell “this alarm has been going on for 3 shifts” from the interface alone

How to research legacy system migration

Legacy migration is the most common context for industrial software research. Users are transitioning from systems they have used for 10-20 years to modern platforms.

The muscle memory challenge

Operators who have used the same system for 10+ years do not use the interface. They use muscle memory. They know that the process overview is “three tabs right from the alarm screen,” that the batch report is “F7, then Enter, then Tab-Tab-Enter,” and that the trend display for reactor temperature is “the third chart on the second page.” Replacing this with a modern, well-designed interface feels like a regression because the old system, however ugly, was instant for experienced operators.

Legacy migration research approach

Step 1: Document the current system’s invisible UX. Before designing anything, map how operators actually use the legacy system: keyboard shortcuts, screen navigation sequences, information they check in what order, and workarounds they have built. This is the real specification for the new system.

Step 2: Side-by-side comparison testing. Give operators the same task on the legacy system and the new system. Measure time, errors, and satisfaction. The new system should match or exceed legacy system speed for the 10 most common tasks within 2 weeks of training.
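The "within 10% of legacy" bar from Step 2 is straightforward to check once task timings are collected. A minimal sketch; the task names and timings below are illustrative:

```python
from statistics import median

def legacy_parity(task_times_new, task_times_legacy, tolerance=0.10):
    """For each task, compare median completion time on the new system
    against legacy; return the tasks failing the 'within 10%' bar,
    mapped to (new_median, legacy_median)."""
    failing = {}
    for task, new_times in task_times_new.items():
        new_med = median(new_times)
        legacy_med = median(task_times_legacy[task])
        if new_med > legacy_med * (1 + tolerance):
            failing[task] = (new_med, legacy_med)
    return failing

# Illustrative timings (seconds) for two common tasks
new = {"start batch": [48, 52, 50], "acknowledge alarm": [9, 8, 10]}
legacy = {"start batch": [40, 42, 41], "acknowledge alarm": [9, 10, 9]}
print(legacy_parity(new, legacy))
```

Medians are used rather than means so that one interrupted session does not sink a task; in this example "start batch" fails the bar and "acknowledge alarm" passes.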

Step 3: Parallel operation observation. During the transition period when both systems run simultaneously, observe which system operators default to and when. The tasks where they revert to the legacy system reveal where the new system has not yet earned their trust.

How to recruit manufacturing professionals for research

Role segmentation

| Role | Where they work | What they use | Research value |
| --- | --- | --- | --- |
| Machine/process operator | Factory floor, control room | MES, SCADA, HMI panels | Test real-time monitoring, alarm response, process control |
| Process/manufacturing engineer | Office + floor | MES, ERP, engineering tools | Test configuration, reporting, optimization workflows |
| Maintenance technician | Throughout plant | CMMS, mobile devices, work orders | Test mobile UX, work order management, asset lookup |
| Quality inspector | Lab + floor | LIMS, quality modules, inspection tools | Test inspection workflows, data entry, deviation management |
| Plant/operations manager | Office + floor walks | Dashboards, KPI reports, ERP | Test executive views, production reporting, decision support |
| Shift supervisor | Control room + floor | MES overview, alarm summary, shift handoff tools | Test shift management, escalation, team coordination |

Where to find participants

  • Internal recruitment (for existing customers). Partner with the plant’s operations, CI (continuous improvement), or IT team to identify willing participants across roles and shifts
  • Manufacturing associations. ISA (International Society of Automation), SME (Society of Manufacturing Engineers), MESA International for MES-specific users
  • LinkedIn targeting. Search by title (Process Operator, Plant Manager, Manufacturing Engineer) + industry (Manufacturing, Chemical, Pharmaceutical, Food & Beverage)
  • CleverX verified B2B panels. Pre-screened manufacturing professionals filtered by role, industry, and system experience
  • Trade shows. Hannover Messe, PACK EXPO, Automate for engaged manufacturing professionals

Incentive benchmarks

| Role | Rate range | Notes |
| --- | --- | --- |
| Operator (on-shift observation) | $50-100 supplement | Often done during work hours with employer approval. Supplement rather than full incentive |
| Operator (off-shift session) | $100-175/hr | Must compensate for coming in on their day off |
| Engineer | $125-225/hr | Standard B2B professional rate |
| Manager / supervisor | $175-300/hr | Higher seniority rate |
| Maintenance technician | $100-175/hr | Often the hardest to schedule due to reactive work |

Scheduling considerations

  • Shift schedules are inflexible. Operators cannot leave the floor for a 45-minute research session. Research must fit into breaks (10-15 minutes) or happen before/after shifts
  • Production schedules take priority. If the plant has a major production run, all research gets postponed. Build flexibility into your timeline
  • Union rules may apply. Some unionized facilities require union approval for any activity outside normal job duties, including research participation. Check with the plant’s HR team before recruiting
  • PPE and access. Every researcher needs safety training and appropriate PPE before entering production areas. Plan 1-2 days of lead time for safety orientation

Industrial software usability metrics

| Metric | What it measures | How to capture | Target |
| --- | --- | --- | --- |
| Task completion under constraints | Can operators complete tasks with gloves, at distance, under noise? | Observed usability testing in authentic conditions | >95% for safety-critical tasks |
| Alarm response time | Time from alarm activation to operator acknowledgment and action | System logs + observation | <5 min non-critical, <30 sec critical |
| Screen navigation efficiency | Number of screens/clicks to reach commonly needed information | Observation + system logging | <3 screens for any routine task |
| Shift handoff completeness | Does the software provide a clear shift summary? | Handoff observation + post-handoff interview | >80% of handoff questions answerable from software alone |
| Cognitive load (NASA-TLX) | Mental workload during complex tasks | Post-task NASA-TLX questionnaire | Score decrease between legacy and new system |
| Error rate | Incorrect entries, wrong process parameter selections, missed alarms | System logs + observation | <2% for safety-critical inputs |
| Workaround count | How many paper/spreadsheet workarounds supplement the software? | Observation + interview | Decreasing over time (each workaround is a product gap) |
| Legacy comparison time | Task completion time on new system vs. legacy system | Side-by-side testing | Within 10% of legacy for top 10 tasks within 2 weeks of training |
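For the NASA-TLX metric, the raw (unweighted) score is simply the mean of the six 0-100 subscale ratings; the weighted variant adds 15 pairwise importance comparisons, omitted here. A sketch with illustrative operator ratings:

```python
def raw_tlx(mental, physical, temporal, performance, effort, frustration):
    """Raw (unweighted) NASA-TLX: mean of the six 0-100 subscale ratings."""
    scales = [mental, physical, temporal, performance, effort, frustration]
    assert all(0 <= s <= 100 for s in scales), "subscales are rated 0-100"
    return sum(scales) / len(scales)

# Illustrative ratings after a batch changeover, legacy vs. new interface
legacy_score = raw_tlx(80, 35, 70, 60, 75, 65)
new_score = raw_tlx(55, 35, 50, 40, 45, 40)
print(f"legacy {legacy_score:.1f} -> new {new_score:.1f}")
```

Comparing the same task across interface variants, as the table's target suggests, matters more than any absolute score, since TLX ratings are relative to each operator's own baseline.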

Frequently asked questions (continued)

Can you do remote research for industrial software?

For some activities: yes. Remote interviews (during breaks or off-shift), remote heuristic evaluation of screen designs, and remote testing of non-production interfaces (reporting, configuration, planning) work well. For contextual inquiry, alarm response testing, and any research requiring physical environment context: no. The factory floor cannot be replicated remotely. Plan for a hybrid approach: remote for early design evaluation, on-site for validation and operational research.

How do you handle photography and recording restrictions on factory floors?

Many manufacturing facilities prohibit cameras and recording devices on the production floor due to IP, safety, or security concerns. Adapt by: using paper notes instead of video recording, sketching screen layouts and workstation configurations, using the facility’s own CCTV footage (with permission) for observation review, and having an internal champion who can take approved photos for you. If recording is permitted, use audio-only recording with participant consent and add visual notes separately.

How do you research industrial software for regulated industries (pharma, food)?

Regulated industries (FDA 21 CFR Part 11 for pharma, FSMA for food) add compliance requirements to every software interaction. Research must include: testing electronic signature workflows, validating audit trail comprehension, evaluating how compliance steps affect task efficiency, and ensuring that “user-friendly” design changes do not break regulatory compliance. Include quality/compliance team members in your research plan as both stakeholders and participants.

What is the ROI of industrial software user research?

Research consistently demonstrates 20-40% efficiency gains from dashboard reorganization, alarm rationalization, and navigation simplification. In manufacturing contexts, a 1% efficiency gain can translate to millions in production value. Document baseline task times and error rates before research, then measure improvement after implementation. This quantified ROI justifies continued research investment to plant leadership who evaluate everything by operational impact.