Cybersecurity product user research: a complete guide for product and UX teams
How to conduct user research for cybersecurity products. Covers recruiting security analysts, testing threat dashboards with mock data, screening CISSP holders, and research methods for SIEM, EDR, and SOAR tools.
Security analysts do not use your product the way you think they do.
They have 14 browser tabs open, three dashboards tiled across two monitors, a SIEM throwing 400 alerts per shift, and a Slack channel pinging about an active incident. Your threat detection dashboard is one of seven tools they rotate through every 20 minutes. They are not exploring features. They are triaging threats under time pressure where a missed alert can mean a breach.
That context changes everything about how user research works. Standard B2B research methods, designed for users who have time to think and explore, break down when your participants operate in high-pressure, time-critical environments where the cost of a usability failure is a security incident.
This guide covers how product and UX teams conduct effective user research for cybersecurity products, from structuring FAQ-driven discovery to recruiting security professionals who rarely have time for a 45-minute session.
Frequently asked questions
What makes cybersecurity product research different from other B2B research?
Four factors set it apart. First, your users work under time pressure that most B2B users never experience. A SOC analyst responding to an alert has minutes, not hours. Second, the data they work with (threat intelligence, incident reports, vulnerability scans) is often classified or sensitive, which limits what they can share in research sessions. Third, cybersecurity professionals are deeply skeptical by training. They evaluate your product through a threat lens, not just a usability lens. Fourth, the consequences of poor UX are not lost revenue or frustration. They are security breaches, data exposure, and compliance failures.
How do you recruit cybersecurity professionals for user research?
Target through LinkedIn (search by CISSP, CISM, CEH certifications and titles like SOC Analyst, Security Engineer, CISO), cybersecurity communities (Reddit r/netsec, r/blueteam, ISC2 community forums), and professional associations (ISC2, ISACA, SANS). Specialized B2B research panels with role verification cut screening time significantly. Expect to pay $125-500/hr depending on role and seniority. Over-recruit by 25-30%, because incident response frequently pulls participants away at short notice and cancellation rates run high.
Can you use real security data in usability testing?
No. Always use mock data. Real threat intelligence, vulnerability reports, and incident data are classified or confidential in most organizations. Build realistic simulated environments with fictional IP addresses, synthetic log data, and mock threat indicators that mirror the structure and volume of real security data without exposing actual threats.
How many participants do you need for cybersecurity product research?
Five to eight per role per round. Cybersecurity teams have distinct roles (SOC analyst, threat hunter, security engineer, CISO, compliance officer) with different workflows and tool needs. Do not mix roles in a single study unless you are researching a cross-functional feature. Run multiple rounds as the product evolves.
Do you need NDAs for cybersecurity research sessions?
Yes, always. Cybersecurity professionals work with sensitive information and will not participate without a confidentiality agreement. Your NDA should cover both directions: you protect their identity and any information they share, and they agree not to disclose details about your unreleased product or prototypes. Some participants may also require their employer’s legal team to review the NDA before signing.
What is the biggest mistake teams make in cybersecurity UX research?
Testing in isolation from the security workflow. A threat dashboard tested as a standalone prototype produces different results than the same dashboard tested alongside a SIEM, a ticketing system, and a chat tool. Security professionals operate in multi-tool environments, and your research must reflect that context. Single-tool usability tests miss the integration friction that drives real-world adoption failures.
What research methods work best for cybersecurity products?
No single method covers this space. Here is how each core method adapts to the cybersecurity context.
Contextual inquiry in SOC environments
Observing security analysts in their actual work environment reveals patterns that interviews and usability tests cannot. SOC environments have unique dynamics: shift-based work, shared workstations, multi-monitor setups, and real-time alert processing.
What to observe:
- How analysts triage alerts across multiple tools (SIEM, EDR, SOAR, ticketing)
- Which alerts get investigated versus dismissed, and how quickly
- Workarounds analysts build when tools do not integrate (copy-pasting IPs between systems, maintaining personal spreadsheets of indicators)
- How information flows between Tier 1, Tier 2, and Tier 3 analysts during escalation
- Physical environment factors (screen layout, monitor count, ambient noise, interruption frequency)
Access constraints: SOC environments contain classified information. You will need organizational approval, an NDA between your company and theirs, and agreement on what the observer can and cannot view. Some organizations allow observation only during tabletop exercises or simulated incidents, not during real operations.
Usability testing with simulated threat scenarios
Standard task-based usability testing fails for security products because the urgency is missing. Asking a SOC analyst to “find the high-severity alert and investigate it” on a static prototype does not replicate the pressure of a real shift where 50 new alerts arrived in the last hour.
Build scenario-based tests that simulate realistic pressure:
- Alert triage. “Your shift started 30 minutes ago. Here are 47 new alerts from the last 4 hours. Walk me through how you would prioritize and investigate the first 5.”
- Incident response. “A phishing email was reported 10 minutes ago. Three users clicked the link. Show me how you would contain and investigate this using the dashboard.”
- Threat hunting. “You received intelligence that a specific malware family is targeting your industry. Show me how you would search for indicators of compromise across your environment.”
- False positive management. “You have been seeing the same type of alert repeatedly for the past week and believe it is a false positive. Walk me through how you would tune the rule or suppress it.”
Key principle: Include time pressure in the scenario. Do not give participants unlimited time to explore. Real security work happens under deadlines, and your test should reflect that.
User interviews with security professionals
Interviews with cybersecurity professionals require different framing than standard B2B interviews.
What works:
- Frame questions around incidents and workflows, not features. “Walk me through the last time you investigated a suspicious login” produces richer data than “How do you use the authentication dashboard?”
- Ask about trust. Security professionals have strong opinions about which tools they trust and why. “Which tool do you check first when something looks wrong, and why that one?” reveals hierarchy and trust patterns
- Ask about what they build themselves. Security teams often create custom scripts, queries, and dashboards because existing tools do not meet their needs. These DIY solutions reveal unmet product opportunities
What to avoid:
- Do not ask participants to describe actual incidents in detail. They often cannot due to confidentiality
- Do not assume they use your product the way you designed it. Ask open-ended questions about their actual workflow
- Do not use UX jargon. Say “How do you investigate alerts?” not “What is your alert triage user journey?”
Surveys for benchmarking
Surveys work for measuring satisfaction, feature prioritization, and benchmarking across security teams at scale. Keep surveys under 5 minutes. Distribute through security communities (ISC2, SANS mailing lists, LinkedIn security groups), not cold email.
High-value survey questions for cybersecurity products:
- How many alerts do you process per shift? (Establishes baseline for alert fatigue research)
- What percentage of alerts turn out to be false positives? (Quantifies the false positive problem)
- Which tools do you use most frequently during incident response? (Maps the competitive landscape)
- What is the most time-consuming part of your daily workflow? (Identifies pain points at scale)
How to screen cybersecurity research participants
“Cybersecurity professional” is as broad as “engineer.” A SOC Tier 1 analyst monitoring alerts has almost nothing in common with a CISO evaluating vendor risk. Screening must be precise.
Role segmentation framework
| Role | Daily work | Tools used | Research angle |
|---|---|---|---|
| SOC Analyst (Tier 1) | Alert monitoring, initial triage, escalation | SIEM (Splunk, Sentinel, QRadar), EDR, ticketing | "Walk me through your first 30 minutes of a shift" |
| SOC Analyst (Tier 2/3) | Deep investigation, incident response, threat hunting | SOAR, forensic tools, threat intel platforms | "Show me how you investigate an escalated alert" |
| Security Engineer | Tool configuration, detection rule writing, infrastructure | Cloud security, IaC, CI/CD pipeline security | "How do you test and deploy a new detection rule?" |
| Threat Hunter | Proactive threat detection, hypothesis-driven investigation | Query languages, threat intel, custom scripts | "Walk me through your last proactive hunt" |
| CISO / Security Director | Strategy, vendor selection, risk management, reporting | GRC platforms, executive dashboards, board reporting | "How do you decide which security tools to invest in?" |
| Compliance/GRC | Regulatory compliance, audit prep, policy management | GRC tools, evidence collection, reporting | See our compliance software research guide |
Rule: Never mix more than 2 adjacent roles in a single study. Tier 1 and Tier 2 SOC analysts can be combined for alert workflow research. SOC analysts and CISOs should never be in the same study because their workflows, decision-making, and tool interactions are fundamentally different.
Screener questions that work
Keep to 5-7 questions. Cybersecurity professionals are busy and will abandon long screeners.
- What is your primary role in your organization’s security team? (Select: SOC Analyst, Security Engineer, Threat Hunter, CISO/Director, GRC/Compliance, Other)
- Which security tools do you use at least weekly? (Open text. Filters non-practitioners immediately)
- Describe a typical security task you complete daily. (Open text. Articulation check)
- How many security alerts does your team process per day/shift? (Range. Validates operational role)
- Which certifications do you hold? (Multi-select: CISSP, CISM, CEH, OSCP, CompTIA Security+, SANS GIAC, None)
- How many years have you worked in a cybersecurity-specific role? (Range: 1-2, 3-5, 6-10, 10+)
Red flags in screener responses
- Cannot name specific security tools they use daily
- Vague answers to workflow questions (“I handle security stuff”)
- Claims senior certifications but describes entry-level tasks
- Lists tools but cannot describe how they use them in context
How to handle sensitive security data in research
Cybersecurity research has stricter data sensitivity requirements than most B2B verticals. Participants work with threat intelligence, vulnerability data, and incident information that is often classified.
Safeguards for every session
| Safeguard | Implementation |
|---|---|
| Mock threat data | Build synthetic log data, fictional IP addresses, and simulated alerts. Never use real threat indicators |
| NDA (bidirectional) | Protect participant identity and their information. Protect your unreleased product details |
| Isolated test environment | Prototypes do not connect to real security infrastructure |
| No screen sharing of real systems | If observing real workflows, agree in advance on what the observer can see |
| Recording consent with limits | Specify who accesses recordings. Some participants require recordings be deleted within 30 days |
| Employer approval | Many security professionals need their employer’s security team to approve research participation |
Building realistic mock environments
Generic prototypes fail in cybersecurity research because security professionals immediately notice unrealistic data. Build mock environments that reflect:
- Realistic alert volume. A SOC processing 5 alerts in a test is not realistic. Use 40-100 alerts with varying severity levels
- Realistic false positive ratio. Include a 60-80% false positive rate. This is the reality most SOCs face
- Realistic tool context. Show your product alongside (even as screenshots of) the other tools analysts use simultaneously
- Realistic timestamps. Alerts should show recent timestamps, not static dates from months ago
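The checklist above can be sketched as a small mock-data generator. Everything here is illustrative: the alert types, field names, and the `generate_alerts` helper are assumptions for demonstration, not any real product's schema. The one standards-backed piece is the source addresses, which come from the RFC 5737 documentation ranges reserved for examples and never routed on the internet.

```python
import ipaddress
import json
import random
from datetime import datetime, timedelta, timezone

# RFC 5737 documentation ranges: safe, fictional IPs by definition.
DOC_NETS = ["192.0.2.0/24", "198.51.100.0/24", "203.0.113.0/24"]
SEVERITIES = ["low", "medium", "high", "critical"]
# Hypothetical alert taxonomy, for illustration only.
ALERT_TYPES = [
    "suspicious_login", "malware_beacon", "port_scan",
    "phishing_url_click", "privilege_escalation",
]

def fictional_ip(rng: random.Random) -> str:
    """Pick a host address from a documentation-only network."""
    net = ipaddress.ip_network(rng.choice(DOC_NETS))
    return str(net[rng.randrange(1, net.num_addresses - 1)])

def generate_alerts(count: int = 60, fp_ratio: float = 0.7, seed: int = 7) -> list[dict]:
    """Synthetic alert feed: realistic volume (40-100), recent
    timestamps, and a 60-80% false positive ratio baked in."""
    rng = random.Random(seed)
    now = datetime.now(timezone.utc)
    alerts = []
    for i in range(count):
        alerts.append({
            "id": f"ALT-{i:04d}",
            "type": rng.choice(ALERT_TYPES),
            "severity": rng.choice(SEVERITIES),
            "source_ip": fictional_ip(rng),
            # Spread timestamps across the last 4 hours so the feed looks live.
            "timestamp": (now - timedelta(seconds=rng.randrange(4 * 3600)))
                .isoformat(timespec="seconds"),
            # Ground truth kept for scenario design, hidden from participants.
            "is_false_positive": rng.random() < fp_ratio,
        })
    alerts.sort(key=lambda a: a["timestamp"], reverse=True)
    return alerts

if __name__ == "__main__":
    print(json.dumps(generate_alerts()[:2], indent=2))
```

A generator like this also makes iteration cheap: rerun it with a fresh seed for each session so repeat participants never see a feed they have already triaged.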
Common pain points cybersecurity research reveals
Research across security teams consistently surfaces patterns that product teams miss.
Alert fatigue is the dominant UX problem. SOC analysts receiving hundreds of alerts per shift develop mental filters that override your product’s prioritization logic. If your severity ratings do not match their learned patterns, they ignore them. Research reveals how analysts actually prioritize (often based on source reputation and context, not your severity score) and where your ranking logic diverges from their mental model.
Dashboard overload kills situational awareness. Security dashboards that display everything simultaneously help no one. Analysts need progressive disclosure: start with what needs action now, then let them drill into context. Research reveals which data points analysts look at first, second, and never, so you can design the information hierarchy correctly.
Integration gaps cause dangerous delays. When analysts must copy an IP address from your tool, paste it into a SIEM, then paste the results into a ticket, each context switch adds 30-60 seconds. During an active incident with dozens of indicators to investigate, those seconds compound into minutes of lost response time. Research maps these context switches and reveals which integrations would have the highest impact.
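To make the compounding concrete, here is a back-of-envelope sketch. Every number is an assumption chosen for illustration: two dozen indicators, three tool hops per indicator, and the midpoint of the 30-60 second range above.

```python
# Hypothetical context-switch cost model for one incident.
# All figures are illustrative assumptions, not measurements.
INDICATORS = 24              # indicators to investigate
SWITCHES_PER_INDICATOR = 3   # tool -> SIEM -> ticket round trip
SECONDS_PER_SWITCH = 45      # midpoint of a 30-60 s switch cost

lost_seconds = INDICATORS * SWITCHES_PER_INDICATOR * SECONDS_PER_SWITCH
print(f"{lost_seconds / 60:.0f} minutes lost to copy-paste")  # → 54 minutes
```

Even with conservative inputs, the copy-paste tax approaches an hour of response time per incident, which is why integration gaps rank among the highest-impact findings this research surfaces.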
Shadow tooling reveals unmet needs. Security teams build custom scripts, Jupyter notebooks, and spreadsheet trackers to compensate for gaps in commercial tools. These DIY solutions are a goldmine for product research. They reveal exactly what users need that your product does not provide, in a format they designed for themselves.
Trust calibration determines adoption. Security professionals do not trust tools by default. They calibrate trust through experience: does the tool catch what they would catch manually? Does it generate too many false positives? Does it explain why it flagged something? Research must explore trust formation, not just task completion.
How to recruit cybersecurity professionals
Cybersecurity professionals are among the hardest B2B participants to recruit. They are time-constrained, security-conscious (they will research your company before responding), and frequently pulled into incidents that override scheduled commitments.
Where to find participants
- LinkedIn targeting. Search by certification (CISSP, CISM, CEH), title (SOC Analyst, Security Engineer, CISO), and company
- Cybersecurity communities. Reddit r/netsec, r/blueteam, r/AskNetsec. ISC2 community forums. SANS community
- Professional associations. ISC2, ISACA, SANS, local BSides chapters
- CleverX verified B2B panels. Pre-screened security professionals with role and certification verification
- Conference networks. RSA, Black Hat, DEF CON, BSides attendee and speaker networks
- Bug bounty communities. HackerOne, Bugcrowd for security researchers and offensive security professionals
Incentive benchmarks
| Role | Rate range | Best incentive type |
|---|---|---|
| SOC Analyst (Tier 1, 1-3 years) | $125-175/hr | Cash or gift card |
| SOC Analyst (Tier 2/3, 4-8 years) | $175-275/hr | Cash or professional development credit |
| Security Engineer | $175-300/hr | Cash, conference ticket, or early product access |
| Threat Hunter | $200-350/hr | Cash, threat intel report, or tool access |
| CISO / Security Director | $350-500/hr | Advisory board, benchmark report, or peer networking |
Scheduling considerations
- Over-recruit by 25-30%. Incident response pulls participants away without warning
- Offer flexible 30-minute sessions. Security professionals rarely commit to 60 minutes
- Send reminders 48 hours, 24 hours, and 2 hours before (text, not just email)
- Have an asynchronous fallback. If they cancel, offer a 10-minute unmoderated task they can complete within 48 hours
- Avoid scheduling during major vulnerability disclosures or patch cycles (Patch Tuesday, major CVE releases)
For general participant recruitment strategies and guidance on recruiting niche research participants, see our dedicated guides.
Frequently asked questions (continued)
How do you research products for security teams that work in classified environments?
You cannot observe classified environments directly. Instead, use tabletop exercises with simulated scenarios that mirror the classified workflow without using real data. Recruit participants who can discuss their general workflow and tool preferences without disclosing classified information. Frame questions around process and frustration, not specific incidents.
Should you recruit offensive security (red team) or defensive security (blue team) professionals?
Depends on your product. Defensive security products (SIEM, EDR, SOAR) need blue team participants. Offensive security tools (penetration testing platforms, vulnerability scanners) need red team participants. Products used by both (threat intelligence, security orchestration) should recruit both, but in separate studies because their mental models and workflows differ significantly.
How do you test security features that users should never need to use?
Incident response features, breach notification workflows, and emergency access controls are critical but rarely exercised. Use tabletop scenarios: “Your organization has been breached. Walk me through how you would use this tool to contain the incident.” Participants draw on their training and past experience to engage realistically even if they have not used that specific feature before.
What certifications should the researcher understand before conducting cybersecurity studies?
You do not need security certifications, but you should understand the basics of the frameworks your participants reference. Spend 2-3 hours reading overviews of the NIST Cybersecurity Framework, MITRE ATT&CK framework, and the CIA triad (confidentiality, integrity, availability). This lets you ask informed follow-up questions and recognize when participants reference specific attack techniques or security controls.
How is cybersecurity product research different from compliance software research?
Compliance research focuses on regulatory workflows, audit preparation, and evidence management. Cybersecurity research focuses on threat detection, incident response, and security operations. There is overlap when researching security compliance features (SOC 2 evidence collection, vulnerability management for PCI DSS), but the user personas, time pressures, and mental models are fundamentally different. SOC analysts think in threats and indicators. Compliance officers think in controls and evidence.