User research for B2B security software: a complete guide to researching security buyers
How to conduct user research for B2B security software buying decisions. Covers screening security buyers, multi-stakeholder research design, CISO interview methods, and concrete screener criteria for decision-makers vs. influencers.
The person who buys your security software is not the person who uses it every day.
A CISO evaluating your SIEM platform cares about integration coverage, vendor risk posture, and board-reportable metrics. The SOC analyst who will spend 8 hours a day in your interface cares about alert triage speed, query performance, and whether the dashboard makes sense at 3 AM. These are fundamentally different research problems that require different participants, different questions, and different methods.
Most security product teams research the daily user experience well (or at least they try). Far fewer research the buying experience. And that is where deals are won or lost. Research shows that 49% of B2B security buyers form vendor opinions after minimal exposure, and 53% complete their purchase within 1-3 months. If your product team does not understand what drives those early impressions and fast decisions, you are optimizing the wrong side of the product.
This guide covers how to conduct user research specifically with B2B security software buyers, from screening CISOs and security directors to understanding the multi-stakeholder decision-making process that determines whether your product gets purchased.
For research focused on daily security tool users (SOC analysts, security engineers, threat hunters), see our cybersecurity product user research guide.
Key takeaways
- Security software buying decisions involve 4-8 stakeholders across security, IT, procurement, and finance. Research must cover the full committee, not just the CISO
- Screen buyers by purchasing authority and evaluation involvement, not job titles. Many people with “security” in their title influence but do not decide
- The buying journey has distinct phases (problem recognition, vendor discovery, evaluation, PoC, procurement) that require different research methods at each stage
- CISOs evaluate security tools differently than other B2B software buyers. Integration coverage, vendor security posture, and peer validation outweigh feature lists and pricing
- Proof-of-concept (PoC) observation is the highest-value research method for understanding buying decisions, because it reveals what buyers actually test versus what they say matters
Who is involved in B2B security software buying decisions?
Security software purchases are rarely made by a single person. Research with only CISOs misses the stakeholders who shape, gate, or block the decision.
The buying committee
| Role | Involvement | What they evaluate | Research priority |
|---|---|---|---|
| CISO / VP of Security | Decision maker, budget owner | Strategic fit, risk reduction, vendor trust, board reportability | High: primary research target |
| Security Director / Manager | Evaluator, recommender | Technical capabilities, integration with existing stack, team adoption | High: shapes the shortlist |
| SOC Lead / Security Engineer | Technical evaluator, PoC tester | Daily usability, detection quality, alert noise, query performance | High: PoC feedback determines adoption |
| IT Director / CTO | Infrastructure approver | Architecture fit, deployment complexity, cloud/on-prem compatibility | Medium: can block on technical grounds |
| Procurement / Vendor Management | Process gatekeeper | Pricing, contract terms, vendor risk assessment, compliance certs | Medium: controls timeline |
| CFO / Finance | Budget approver | ROI, total cost of ownership, contract flexibility | Low: rarely involved in product evaluation |
| Legal / Compliance | Compliance reviewer | Data handling, regulatory alignment, liability terms | Low: involved late in process |
Research design implication: A complete buyer research program interviews at least 3 roles: the decision maker (CISO/VP), the technical evaluator (Security Director/Engineer), and the process gatekeeper (Procurement). Interviewing only CISOs gives you strategic priorities but misses the technical and procedural friction that kills deals.
Concrete screener criteria for security buyers
Screening security buyers is harder than screening daily users because "involved in purchasing" means different things to different people. Concrete, behavior-based criteria separate genuine buyers from people who merely sat in on a demo.
Must-have criteria (disqualify if not met)
- Evaluated, recommended, or approved the purchase of a security product within the last 18 months
- Had direct involvement in at least one of: defining requirements, shortlisting vendors, running a PoC, approving budget, or signing the contract
- Currently employed at a company with a dedicated security team (not a solo IT generalist handling security as a side responsibility)
- Company size of 200+ employees (below this threshold, security purchases are typically made by IT generalists without a formal evaluation process)
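If you pipe screener responses through a panel tool or spreadsheet export, the four must-have criteria above can be auto-applied before manual review. A minimal sketch, assuming a dict per respondent; the field names here are hypothetical, not from any real survey platform:

```python
# Hypothetical auto-disqualification sketch for the four must-have
# criteria. Field names are illustrative placeholders.

# Activities that count as direct involvement in the purchase.
PURCHASE_ACTIVITIES = {
    "defined requirements", "shortlisted vendors", "ran a PoC",
    "approved budget", "signed the contract",
}

def passes_must_haves(resp: dict) -> bool:
    """Return True only if all four must-have criteria are met."""
    # Evaluated/recommended/approved a purchase within 18 months.
    recent = resp.get("months_since_last_purchase", 999) <= 18
    # Direct involvement in at least one qualifying activity.
    direct = bool(PURCHASE_ACTIVITIES & set(resp.get("evaluation_roles", [])))
    # Dedicated security team, not a solo IT generalist.
    dedicated_team = resp.get("has_dedicated_security_team", False)
    # 200+ employees, below which formal evaluation is rare.
    big_enough = resp.get("company_size", 0) >= 200
    return recent and direct and dedicated_team and big_enough
```

Respondents who fail any rule are disqualified before you spend interviewer time; everyone who passes still gets the decision-maker vs. influencer segmentation below.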
Decision-maker vs. influencer screening
This is the critical distinction. Your research questions are different for someone who approves the purchase versus someone who recommends a tool to their manager.
Questions that identify decision-makers:
- In your most recent security tool purchase, were you the person who gave final approval to proceed? (Yes/No. If no, they are an influencer)
- Do you control or directly influence the security tools budget? (Yes: decision-maker. “I recommend but my VP approves”: influencer)
- Could you veto a security tool purchase that your team recommended? (Yes: decision-maker)
Questions that identify influencers (still valuable, but segment separately):
- In your most recent security tool evaluation, what was your specific role? (Open text. Look for “I tested it” or “I wrote the requirements” vs “I approved the purchase”)
- Who made the final decision on which tool to purchase? (Open text. Reveals the actual decision-maker if this person is not it)
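If the three decision-maker questions are captured as Yes/No fields, the segmentation logic above can be sketched as a small classifier. This is an illustrative reading of the rules, with hypothetical field names; final-approval is treated as the gate, confirmed by budget control or veto power:

```python
def classify_buyer(resp: dict) -> str:
    """Segment a screened participant as decision-maker or influencer.

    Hypothetical sketch: field names are placeholders mapping to the
    three decision-maker screener questions.
    """
    # Question 1 is the gate: no final approval means influencer.
    if not resp.get("gave_final_approval", False):
        return "influencer"
    # Budget control or veto power confirms decision-making authority.
    if resp.get("controls_budget", False) or resp.get("could_veto", False):
        return "decision-maker"
    # Claims final approval but neither budget nor veto: segment
    # conservatively as influencer and probe in the interview.
    return "influencer"
```

Segmenting at screening time means your discussion guide (approval friction for decision-makers, evaluation workflow for influencers) is matched to the participant before the call starts.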
Screener questions for security buyer research
Keep to 6-8 questions. Security leaders are busier than security practitioners.
- When did you last evaluate or purchase a security product for your organization? (Within 6 months / 6-12 months / 12-18 months / Over 18 months ago. Disqualify over 18 months: memory fades and the market changes fast)
- What type of security product did you evaluate? (Multi-select: SIEM, EDR/XDR, SOAR, IAM, CSPM, vulnerability management, email security, DLP, other. Segment by category)
- What was your role in the evaluation process? (Open text. Articulation check: genuine buyers describe specific activities)
- How many vendors did your team evaluate? (Range: 1-2, 3-5, 6-10, 10+. Context for competitive research)
- How long did the evaluation process take from initial research to signed contract? (Range: under 1 month, 1-3 months, 3-6 months, 6-12 months, over 12 months)
- What was the approximate annual contract value? (Range: under $50K, $50-150K, $150-500K, $500K-1M, over $1M. Segment by deal size)
- What was the single biggest factor in your final decision? (Open text. High research value: reveals true decision drivers)
- How large is your security team? (Range: 1-5, 6-15, 16-50, 50+)
Red flags in screener responses
- Says they “evaluated” security tools but cannot name specific products or describe the process
- Claims decision-making authority but describes an evaluation role (“I tested it and gave feedback”)
- Last purchase was over 18 months ago and they cannot recall details
- Works at a company too small to have a formal security evaluation process
- Generic answers to the “biggest factor” question (“It was the best tool”)
What research methods work for security buyer research?
Different methods map to different phases of the buying journey.
Phase 1: Problem recognition and vendor discovery
Method: User interviews with CISOs and security directors
Research question: How do security leaders identify the need for a new tool and build their initial vendor shortlist?
Questions that produce actionable data:
- “Walk me through the last time you realized your team needed a new [security tool category]. What triggered that realization?”
- “How did you build your initial list of vendors to evaluate? Where did you look first?”
- “What would make you immediately disqualify a vendor before even scheduling a demo?”
- “How do peer recommendations influence your shortlist? Where do you get peer input?”
What this reveals: Security buyers build shortlists through peer validation (CISO communities, ISC2 forums, vendor-neutral Slack groups), analyst reports (Gartner, Forrester), and incident-driven urgency. Understanding the trigger event (a breach, a failed audit, tool sprawl frustration, contract renewal) shapes how you position your product.
Phase 2: Evaluation and PoC
Method: PoC observation and post-PoC interviews
Research question: What do buyers actually test during a proof of concept, and what determines their evaluation outcome?
PoC observation approach:
- Ask buyers to walk you through their PoC evaluation criteria before the test begins
- If possible, observe the PoC setup and initial evaluation (with vendor-neutral positioning)
- Conduct a post-PoC interview within 1 week of PoC completion while memory is fresh
Post-PoC interview questions:
- “What did you test first during the PoC, and why?”
- “What surprised you, positively or negatively, during the evaluation?”
- “What did the PoC not test that you wish it had?”
- “How did you compare results across the vendors you evaluated?”
- “Who else on your team tested the product, and what was their feedback?”
What this reveals: Buyers test integration with their existing stack first, not features. If your product does not connect to their SIEM, their ticketing system, and their identity provider within the first 48 hours of PoC, it is effectively eliminated regardless of detection quality.
Phase 3: Procurement and decision
Method: Surveys and retrospective interviews
Research question: What factors determined the final vendor selection, and what nearly derailed the purchase?
Survey approach (for scale): Send a 5-minute survey to security leaders who recently completed a purchase. Focus on:
- Decision factors ranked by importance
- Information sources that influenced the decision
- Procurement friction points
- Timeline from shortlist to signed contract
Retrospective interview approach (for depth): Interview buyers 2-4 weeks after purchase completion.
- “What was the hardest part of getting this purchase approved internally?”
- “Was there a point where the deal almost fell through? What happened?”
- “If you could change one thing about the evaluation process, what would it be?”
- “How did your procurement team’s requirements affect your vendor choice?”
What drives security software buying decisions?
Research across security buyers consistently reveals patterns that differ from other B2B software categories.
Integration coverage outweighs feature depth. CISOs managing 25-40 security tools prioritize products that reduce tool sprawl, not products that add another best-of-breed point solution. The first question is “Does it integrate with what we already have?” not “Does it have the best detection engine?”
Vendor security posture is a qualifying criterion. Security buyers evaluate your company’s security practices before evaluating your product. SOC 2 Type II certification, transparent security practices, and a responsible disclosure program are table stakes. A security product from a vendor that was breached is effectively unsellable.
Peer validation trumps marketing. CISOs trust other CISOs. G2 reviews, Gartner Peer Insights, CISO community recommendations (Slack, LinkedIn groups, local CISO circles), and direct reference calls drive shortlisting more than any marketing content. Research should explore which communities and review sources your target buyers trust.
Speed of PoC results matters more than PoC depth. Buyers who can see value within the first week of a PoC are significantly more likely to purchase. Products that require 4-6 weeks of professional services to configure for a PoC lose to products that demonstrate value on day one.
Board reportability is a decision factor. CISOs must justify security investments to their board. Products that generate clear, executive-friendly risk reduction metrics have an advantage over products that require manual reporting. Research should explore what metrics CISOs report to their board and how your product supports (or fails to support) that reporting.
How to recruit security buyers for research
Security buyers (CISOs, security directors) are even harder to recruit than security practitioners. They receive constant vendor outreach and have learned to ignore anything that looks like a disguised sales call.
Channels that work
- CISO communities and peer networks. CISO Slack groups, Evanta/Gartner CISO circles, local CISO roundtables. Engage as a research partner, not a vendor
- LinkedIn targeting. Search by title (CISO, VP of Security, Director of Information Security) and company size. Personalize aggressively. Reference their company’s tech stack or recent security investments if publicly known
- CleverX verified B2B panels. Pre-screened security leaders with role verification and purchasing authority confirmation
- Conference networks. RSA, Black Hat executive briefings, Evanta CISO summits
- Customer and prospect referrals. Ask your sales team for warm introductions to buyers who recently evaluated your product (won or lost deals)
Incentive benchmarks
| Role | Rate range | Best incentive type |
|---|---|---|
| Security Director / Manager | $200-350/hr | Cash, benchmark report, or conference ticket |
| CISO / VP of Security | $400-600/hr | Advisory board, peer networking, benchmark report |
| Procurement / Vendor Management | $150-250/hr | Cash or gift card |
Outreach that works
Hi [Name], I’m researching how security leaders evaluate [tool category] and I’m looking for people who have recently been through this process. This is product research, not a sales call. No vendor pitch, no follow-up emails from sales. 30 minutes, $[amount], fully confidential. Your perspective as someone who [specific detail: recently deployed XDR / manages a team of 20+ / evaluated SIEM tools] would be incredibly valuable. Open to a quick call?
What makes this work: explicitly states “not a sales call,” references their specific experience, short time commitment, and high incentive stated upfront.
How to handle sensitive buying data in research
Security buyers are cautious about sharing evaluation details because:
- Vendor names and pricing are often under NDA
- Internal budget details are confidential
- Security architecture details (what tools they use, what gaps exist) are sensitive from a threat perspective
Safeguards
- Allow participants to anonymize vendor names (“Vendor A, Vendor B”) if they cannot share specifics
- Never ask for exact contract values. Use ranges
- Do not ask about specific security gaps or vulnerabilities in their environment
- Make clear that findings will be anonymized and aggregated, never attributed
- Offer to share the final research report as a participant benefit
Frequently asked questions
How is this different from researching security tool users?
User research focuses on daily workflows: alert triage, investigation, reporting. Buyer research focuses on evaluation workflows: vendor discovery, PoC testing, procurement, internal approval. The participants are different (CISOs vs SOC analysts), the questions are different (what drives purchase decisions vs what drives task completion), and the methods are different (retrospective interviews vs usability testing). See our cybersecurity product research guide for the daily user angle.
How many buyer participants do you need?
Eight to twelve across roles. Aim for 4-5 decision-makers (CISOs/VPs), 3-4 technical evaluators (Security Directors/Engineers), and 2-3 process stakeholders (Procurement). This gives you coverage across the buying committee without over-indexing on a single perspective.
Should you research won deals, lost deals, or both?
Both, separately. Won-deal research reveals what worked in your favor. Lost-deal research reveals what eliminated you. Lost-deal interviews are harder to get (buyers feel awkward explaining why they chose a competitor) but significantly more valuable for product strategy. Offer a higher incentive and frame it as “help us improve,” not “tell us why you rejected us.”
Can your sales team conduct buyer research?
No. Buyers will not give honest feedback to someone associated with the vendor. Use a neutral researcher (internal UX/product researcher or third-party firm) and guarantee that sales will not receive individual responses. The moment a buyer thinks their feedback will trigger a sales follow-up, the data becomes worthless.
How do you research the buying process for products in a new category?
When your product creates a new category (or buyers do not yet recognize the problem you solve), standard buyer research does not work because there are no recent purchases to study. Instead, research the adjacent buying process. If you are selling a new type of cloud security tool, interview buyers who recently purchased CSPM or CWPP products. Their evaluation process, stakeholders, and decision criteria will be the closest proxy for how your product will be evaluated.