AdTech user research methods: a complete guide for product and UX teams

How to conduct user research for adtech products. Covers methods for DSPs, SSPs, ad servers, and attribution platforms. Includes real-time trading observation, programmatic workflow testing, and recruiting media buyers for research.

Advertising technology users make decisions in seconds that spend thousands of dollars. A media buyer adjusting a bid on a demand-side platform during a live auction does not have time to explore your interface, read tooltips, or consult documentation. They need to see the data, make the call, and move to the next campaign, all within a window measured in seconds, not minutes.

That speed requirement makes adtech user research fundamentally different from most other B2B product categories. Standard usability testing, where participants carefully work through tasks at their own pace, does not replicate the velocity of real programmatic trading. According to industry data, the average adtech platform evaluation takes 2-4 weeks from first demo to decision, with buyers making their assessment within the first 3-5 sessions of actual use. If your platform does not feel fast and intuitive within those first sessions, it is eliminated regardless of feature depth.

This guide covers how product and UX teams conduct effective research for adtech products, from observing real-time trading workflows to testing the dense data dashboards that media buyers live in for eight hours a day.

For marketing technology research (marketing automation, CRM, email, campaign builders), see our martech research guide.

Key takeaways

  • AdTech research must test under real-time pressure. Usability testing at the participant’s own pace produces findings that do not survive contact with a live trading environment
  • The average adtech platform evaluation takes 2-4 weeks, with the first 3-5 sessions determining the outcome. Onboarding and first-use research are disproportionately important
  • Media buyers, ad ops, and programmatic traders are different roles with different workflows. Research must segment by role, not by “adtech user”
  • Dashboard information density is the central UX challenge. AdTech dashboards display more data per screen than almost any other B2B product. Testing must evaluate whether density supports decisions or overwhelms them
  • Cross-platform workflows (DSP + DMP + analytics + verification) define the real user experience. Testing your platform in isolation misses the integration friction that drives adoption decisions

What makes adtech research different?

Five factors distinguish adtech research from standard B2B product research and from martech research.

1. Real-time decision velocity. Programmatic traders make bid adjustments, budget shifts, and targeting changes during live campaigns where timing matters. A 30-second delay in finding the right control can cost thousands in wasted spend. Research must replicate this time pressure.

2. Extreme data density. AdTech dashboards display campaign metrics, bid data, audience segments, frequency caps, pacing curves, creative performance, and verification scores simultaneously. The information density per screen exceeds most other B2B products. Users do not read dashboards. They scan them in patterns that research must map.

3. Financial stakes per interaction. Every interaction in an adtech platform has immediate financial consequences. A misconfigured bid, a wrong audience segment, or an overlooked frequency cap wastes real budget in real time. This changes how users interact with controls: they double-check, they hesitate on irreversible actions, and they build safety habits that affect workflow speed.

4. Multi-platform workflows. No adtech platform operates alone. Media buyers work across DSPs (The Trade Desk, DV360, Amazon DSP), DMPs, verification tools (IAS, DoubleVerify), analytics platforms, and client reporting tools. Your platform is one tab in a 12-tab workflow. Research must account for this context.

5. Compliance pressure. Privacy regulations (GDPR, CCPA, COPPA), brand safety requirements, and transparency mandates add compliance workflows to every campaign. Research must test whether compliance features support or slow the trading workflow.

Which research methods work for adtech products?

| Method | Best for | AdTech adaptation |
| --- | --- | --- |
| Usability testing | Testing campaign setup, bid management, audience building, reporting | Use realistic data volumes (thousands of line items, not 10). Include time pressure in scenarios |
| Contextual inquiry | Observing real-time trading, dashboard scanning patterns, multi-platform workflows | Shadow during live campaign management. Observe which platforms are open simultaneously and how data flows between them |
| User interviews | Understanding evaluation criteria, platform switching triggers, workflow pain points | Ask about complete campaign workflows, not individual features. “Walk me through managing a campaign from launch to optimization” |
| Dashboard scan-path research | Understanding how traders visually process dense data screens | Eye tracking or think-aloud during dashboard review: “What do you look at first? What are you looking for?” |
| Surveys | Feature prioritization, satisfaction benchmarking, competitive positioning at scale | Include questions about platform speed, data accuracy trust, and integration friction alongside standard NPS |
| Competitive benchmarking | Testing your platform against competitors for the same tasks | Same campaign setup task across 2-3 platforms. Compare time, accuracy, and satisfaction |
| Card sorting | Organizing metrics, filters, and navigation for trader mental models | Test with real adtech terminology (CPM, CTR, ROAS, viewability, frequency). Traders categorize metrics differently than product teams assume |
| Failure mode testing | Testing what happens when data is delayed, bids fail, or integrations disconnect | Deliberately simulate: delayed reporting data, failed bid submissions, disconnected verification feeds. Observe recovery behavior |

How to test adtech dashboards

The information density challenge

AdTech dashboards are among the densest interfaces in B2B software. A single campaign management view might display:

  • Campaign name, status, and date range
  • Budget (total, spent, remaining, daily pace)
  • Performance metrics (impressions, clicks, CTR, conversions, ROAS)
  • Bid data (current bid, bid range, win rate, avg CPM)
  • Audience segments and targeting criteria
  • Creative performance by variant
  • Frequency and recency data
  • Verification scores (viewability, brand safety, fraud rate)
  • Pacing curves and delivery graphs

All on one screen. Research must test whether this density helps or hurts.

Dashboard testing protocol

Scan-path test. Show the dashboard for 10 seconds and ask: “What is the most important thing on this screen right now?” Then ask: “Is this campaign performing well or poorly? How do you know?” This reveals whether the information hierarchy matches trader decision-making.

Anomaly detection test. Seed the dashboard with one anomaly (a campaign pacing 40% behind schedule, a sudden CPM spike, a creative with 0% viewability). Measure: how quickly does the trader spot the anomaly? What do they look at first? How many data points do they check before acting?
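To make the anomaly detection test repeatable across participants, the seeded dashboard can be generated from a script so every trader sees the same data. A minimal Python sketch under stated assumptions: the `make_campaigns` helper, its field names, and its CPM/CTR ranges are illustrative, not real platform data.

```python
import random

def make_campaigns(n=12, seed=7):
    # Generate mock mid-flight campaigns with plausible spend, CPM, and CTR.
    # Ranges are illustrative assumptions, not real inventory benchmarks.
    rng = random.Random(seed)
    rows = []
    for i in range(n):
        budget = rng.choice([5000, 10000, 25000])
        expected_spend = budget * 0.5  # assume campaigns are ~50% through flight
        rows.append({
            "name": f"Campaign {i + 1}",
            "budget": budget,
            "spend": round(expected_spend * rng.uniform(0.95, 1.05), 2),
            "cpm": round(rng.uniform(2.0, 8.0), 2),
            "ctr": round(rng.uniform(0.0005, 0.003), 4),
        })
    return rows

def seed_pacing_anomaly(rows, index=4):
    # Re-pace one campaign to 40% behind its expected mid-flight spend,
    # then return its name so the moderator knows what to watch for.
    expected = rows[index]["budget"] * 0.5
    rows[index]["spend"] = round(expected * 0.6, 2)
    return rows[index]["name"]

campaigns = make_campaigns()
flagged = seed_pacing_anomaly(campaigns)
```

Fixing the random seed means every participant reviews an identical dashboard, so detection times are comparable across sessions.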

Comparison test. Show 3 dashboard layouts with the same data but different information hierarchy:

  • Layout A: Performance metrics first (impressions, clicks, conversions)
  • Layout B: Financial metrics first (spend, CPM, ROAS)
  • Layout C: Alerts and anomalies first (pacing issues, budget alerts, performance drops)

Ask traders to complete the same task on each layout. Measure time and preference. The layout that supports the fastest accurate decision wins, regardless of which looks “cleaner.”
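The decision rule above can be made explicit when tallying results. A minimal sketch, assuming per-participant (seconds, correct) trials are recorded per layout; the `winning_layout` helper and its 90% accuracy floor are illustrative assumptions, not a standard formula.

```python
def winning_layout(results, accuracy_floor=0.9):
    """Pick the layout with the fastest mean task time among layouts
    that stay above an accuracy floor (speed alone is not enough).

    results: {layout_name: [(seconds, correct_bool), ...]}
    """
    best, best_time = None, float("inf")
    for layout, trials in results.items():
        accuracy = sum(ok for _, ok in trials) / len(trials)
        mean_time = sum(t for t, _ in trials) / len(trials)
        if accuracy >= accuracy_floor and mean_time < best_time:
            best, best_time = layout, mean_time
    return best
```

A fast layout that drives wrong reads gets excluded by the accuracy floor, which matches the point above: the winner is the fastest accurate decision, not the fastest click.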

Dashboard metrics

| Metric | What it measures | Target |
| --- | --- | --- |
| Time to anomaly detection | How quickly traders spot a problem in the dashboard | <10 seconds for critical anomalies |
| Scan efficiency | How many screen areas traders look at before finding what they need | <5 fixation areas for routine checks |
| Decision confidence | How confident traders feel in their dashboard-informed decisions (1-7 scale) | 5+ average |
| Data interpretation accuracy | Can traders correctly interpret the metrics shown? | >90% correct interpretation |
| Action speed | Time from identifying a problem to taking corrective action | <30 seconds for common adjustments |

How to test real-time trading workflows

Time-pressure testing protocol

Standard usability testing gives participants unlimited time. AdTech testing must include realistic time constraints.

Scenario design with pressure:

| Scenario | Time constraint | What it tests |
| --- | --- | --- |
| “Your top campaign is pacing 30% behind. Find out why and fix it” | 2 minutes | Dashboard navigation, troubleshooting workflow, bid adjustment speed |
| “A client asks for yesterday’s performance numbers. Pull the report” | 3 minutes | Reporting speed, data accessibility, export workflow |
| “You need to shift $5,000 from an underperforming campaign to a top performer” | 2 minutes | Budget management, cross-campaign workflow, confirmation safeguards |
| “A brand safety alert fired on your largest campaign. Investigate and respond” | 3 minutes | Alert navigation, verification data access, campaign pause workflow |
| “Build an audience segment of users who viewed the product page but did not convert, aged 25-44, in the top 5 DMAs” | 5 minutes | Audience builder complexity, filter logic, segment size estimation |
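Scoring sessions against these limits is easy to automate so moderators can focus on observation. A minimal sketch, assuming the moderator records seconds-to-complete per task; the task keys and the `score_session` helper are hypothetical, with limits mirroring the scenario table.

```python
# Time limits in seconds; keys are illustrative shorthand for the scenarios.
TIME_LIMITS = {
    "fix_pacing": 120,          # "pacing 30% behind" scenario, 2 minutes
    "pull_report": 180,         # client report scenario, 3 minutes
    "shift_budget": 120,        # $5,000 budget shift, 2 minutes
    "brand_safety_alert": 180,  # alert investigation, 3 minutes
    "build_audience": 300,      # audience builder, 5 minutes
}

def score_session(timings):
    """timings: {task_key: seconds_to_complete, or None if abandoned}.
    Any task missing from timings is treated as abandoned."""
    results = {}
    for task, limit in TIME_LIMITS.items():
        t = timings.get(task)
        if t is None:
            results[task] = "abandoned"
        elif t <= limit:
            results[task] = "pass"
        else:
            results[task] = "over_limit"
    return results
```

Abandoned tasks are kept as a distinct outcome because "gave up and used a workaround" is a different finding from "finished slowly".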

Observation during time-pressure tasks:

  • Where do traders hesitate? (Indicates unclear UI or missing information)
  • Where do they make errors? (Indicates confusing controls or dangerous defaults)
  • When do they give up and use a workaround? (Indicates workflow gaps)
  • Do they complete the task within the time limit? (Indicates overall platform speed)

The “trading floor” observation

The highest-value adtech research method: shadow a media buyer or programmatic trader during a live trading session (2-4 hours). This is contextual inquiry adapted for the speed of adtech.

What to observe:

  • How many platforms are open simultaneously? Where does your platform sit in the tab order?
  • How often do they switch between platforms? What triggers a switch?
  • How do they monitor multiple campaigns at once? Tiled windows? Separate monitors? A summary dashboard?
  • When an alert or anomaly appears, what is their first action? Which platform do they check first?
  • What data do they copy-paste between platforms? (Each copy-paste is an integration failure)
  • When do they consult colleagues versus making solo decisions? What triggers escalation?
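Shadowing notes become comparable across traders if the observer codes them as a simple event log. A minimal sketch, assuming hypothetical event shapes ("focus" and "copy_paste" records); the `shadowing_stats` helper is illustrative, not a standard coding scheme.

```python
def shadowing_stats(log):
    """Tally platform switches and copy-paste hops from a coded session log.

    log: ordered list of ("focus", platform) or ("copy_paste", src, dst).
    Each copy-paste pair is counted because, per the observation guide,
    every copy-paste is an integration failure.
    """
    switches = 0
    current = None
    copy_paste = {}
    for event in log:
        if event[0] == "focus":
            if current is not None and event[1] != current:
                switches += 1
            current = event[1]
        elif event[0] == "copy_paste":
            pair = (event[1], event[2])
            copy_paste[pair] = copy_paste.get(pair, 0) + 1
    return {"platform_switches": switches, "copy_paste_by_pair": copy_paste}
```

The copy-paste tally by platform pair points directly at which integration to build first.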

How to research adtech onboarding and evaluation

The 3-5 session window

Industry data indicates that media buyers evaluate new adtech platforms within 2-4 weeks, with the first 3-5 real usage sessions determining their assessment. This makes the onboarding-to-evaluation window the highest-leverage research investment.

First-session testing protocol:

  1. Start from the platform login (account already created, but no campaigns configured)
  2. Task: “Set up and launch your first campaign using [specific targeting criteria and budget]”
  3. Observe: Where do they get stuck? What is confusing? How long before they have a live campaign?
  4. Measure: Time to first live campaign, error count, help-seeking frequency
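The measures in step 4 can be derived from a timestamped event log rather than hand-counted. A minimal sketch, assuming the observer codes events as "error", "help", and "campaign_live"; the event names and the `first_session_metrics` helper are assumptions about the coding scheme.

```python
def first_session_metrics(events):
    """events: list of (seconds_from_session_start, event_name) tuples."""
    live_times = [t for t, e in events if e == "campaign_live"]
    return {
        # Time until the first campaign goes live, or None if it never does.
        "time_to_first_live_campaign": min(live_times) if live_times else None,
        "error_count": sum(1 for _, e in events if e == "error"),
        "help_seeking_count": sum(1 for _, e in events if e == "help"),
    }
```

The same log format works for the third-session protocol, so time improvement between sessions falls out of a simple subtraction.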

Third-session testing protocol:

  1. Participant has used the platform twice before (either in real usage or previous test sessions)
  2. Task: “Optimize your existing campaign based on yesterday’s performance data”
  3. Observe: Have they learned the interface? Are they faster? What do they still struggle with?
  4. Measure: Time improvement vs. first session, feature discovery, confidence rating

The evaluation question (after session 5): “Based on your experience so far, would you recommend this platform to your team? Why or why not?” This question at session 5 predicts actual adoption better than any usability metric.

How to research cross-platform adtech workflows

Platform ecosystem mapping

Before testing your product, map the participant’s full adtech stack:

| Platform type | Common tools | Integration point with your product |
| --- | --- | --- |
| DSP | The Trade Desk, DV360, Amazon DSP, Xandr | Campaign data, bid management, audience sync |
| DMP / CDP | LiveRamp, Oracle Data Cloud, Segment | Audience segments, first-party data, identity resolution |
| Ad server | Google Campaign Manager, Sizmek | Creative trafficking, impression tracking, frequency management |
| Verification | IAS, DoubleVerify, MOAT | Viewability data, brand safety scores, fraud detection |
| Analytics / Attribution | Google Analytics, AppsFlyer, Adjust | Conversion data, attribution models, ROAS calculation |
| Reporting / BI | Looker, Tableau, Google Sheets | Client reporting, cross-platform aggregation, custom dashboards |

Integration friction research

After mapping the stack, observe the participant completing a task that spans 2-3 platforms:

“Pull a cross-platform performance report for last week’s campaign that includes DSP delivery data, verification scores, and attribution-based conversions.”

Map every friction point:

  • Data export from platform A (format, time, completeness)
  • Data import to platform B (mapping, transformation, manual adjustments)
  • Data reconciliation (do numbers match across platforms? If not, which does the trader trust?)
  • Report compilation (how much manual work to combine data from multiple sources?)

Each step where the trader leaves your platform, exports data, or manually combines sources is an integration failure your product can solve.
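The reconciliation step is where trust breaks, so it helps to quantify the gaps the participant hits. A minimal sketch, assuming each platform's export has been reduced to one number per metric; the `reconcile` helper and its 5% tolerance are illustrative assumptions, since teams set their own reconciliation thresholds.

```python
def reconcile(metric, values, tolerance=0.05):
    """Flag platform pairs whose reported numbers diverge beyond tolerance.

    values: {platform_name: reported_number} for one metric.
    Returns a list of (metric, platform_a, platform_b) mismatches.
    """
    mismatches = []
    names = sorted(values)
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            # Relative difference against the larger magnitude of the pair.
            base = max(abs(values[a]), abs(values[b]), 1e-9)
            if abs(values[a] - values[b]) / base > tolerance:
                mismatches.append((metric, a, b))
    return mismatches
```

Each flagged pair is a concrete prompt for the interview question the protocol raises: which number does the trader trust, and why?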

How to recruit adtech professionals for research

Role segmentation

| Role | Daily work | Research value |
| --- | --- | --- |
| Programmatic trader | Real-time bid management, campaign optimization, audience targeting | Test trading workflows, dashboard speed, bid management UX |
| Media buyer / planner | Campaign strategy, budget allocation, vendor selection, client reporting | Test planning tools, reporting, and evaluation workflows |
| Ad ops | Campaign trafficking, creative management, tag implementation, QA | Test setup workflows, trafficking UI, and QA tools |
| Data / audience analyst | Audience building, data analysis, segment creation, attribution | Test audience builders, data visualization, and segment tools |
| Ad ops manager / VP | Team oversight, platform evaluation, vendor management, client relationships | Test executive views, team management features, and evaluation criteria |

Where to find participants

  • LinkedIn targeting. Search by title (Programmatic Trader, Media Buyer, Ad Ops Manager) + tool keywords (The Trade Desk, DV360, Xandr)
  • AdTech communities. AdExchanger forums, Digiday Slack, r/adops on Reddit, programmatic trading Discord servers
  • Industry events. Programmatic I/O, AdExchanger conferences, Digiday events
  • CleverX verified B2B panels. Pre-screened advertising professionals filtered by platform expertise, role, and agency vs. brand-side
  • Agency networks. Media agencies (GroupM, Publicis, Dentsu) have large trader teams. Recruit through agency innovation or training departments

Incentive benchmarks

| Role | Rate range | Best incentive type |
| --- | --- | --- |
| Programmatic trader (1-5 years) | $125-200/hr | Cash (instant payment critical for traders who value speed) |
| Senior trader / team lead | $175-275/hr | Cash or industry conference ticket |
| Media buyer / planner | $125-225/hr | Cash or benchmark report |
| Ad ops specialist | $100-175/hr | Cash or tool credits |
| VP / Director of programmatic | $250-400/hr | Advisory role, benchmark report, or peer networking |

Screening questions

  1. Which adtech platforms do you use at least weekly? (Open text. Filters non-practitioners)
  2. Describe a campaign optimization decision you made in the last week. (Open text. Articulation check: real traders describe specific bid adjustments, audience changes, or pacing corrections)
  3. How many campaigns are you typically managing simultaneously? (Range: 1-5, 6-15, 16-50, 50+. Indicates workflow complexity)
  4. Do you work primarily on the buy side (DSP), sell side (SSP), or both? (Segment by side)
  5. What is the typical monthly media budget you manage? (Range: under $50K, $50-250K, $250K-1M, $1-5M, $5M+. Segment by scale)
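Responses to a screener like this can be pre-filtered before human review. A minimal sketch, assuming hypothetical response field names; the qualifying platform list and thresholds are illustrative, and the articulation check in question 2 still needs a human read (the code only rejects answers too short to contain specifics).

```python
# Illustrative qualifying DSPs, matching the screener's intent of
# filtering out non-practitioners. Extend for your own market.
QUALIFYING_PLATFORMS = {"the trade desk", "dv360", "amazon dsp", "xandr"}

def passes_screener(resp):
    """resp: {"weekly_platforms": [str], "optimization_story": str,
              "concurrent_campaigns": str}  (hypothetical field names)."""
    platforms = {p.strip().lower() for p in resp["weekly_platforms"]}
    uses_real_dsp = bool(platforms & QUALIFYING_PLATFORMS)
    # Crude length floor: real optimization stories mention specific
    # bids, segments, or pacing and rarely fit in under 15 words.
    wrote_specifics = len(resp["optimization_story"].split()) >= 15
    manages_enough = resp["concurrent_campaigns"] != "1-5"
    return uses_real_dsp and wrote_specifics and manages_enough
```

Auto-rejecting only the clear fails keeps recruiter time for the borderline answers where the articulation check actually matters.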

AdTech-specific usability metrics

| Metric | What it measures | How to capture | Target |
| --- | --- | --- | --- |
| Time to first live campaign | Onboarding speed from account creation to first impression served | First-use observation | <30 minutes for simple campaign |
| Campaign setup accuracy | Does the campaign configuration match the buyer’s intent? | Post-task review: compare setup to stated targeting and budget goals | >95% match |
| Optimization response time | How quickly a trader identifies and acts on a performance issue | Time-pressure scenario observation | <2 minutes for common optimizations |
| Dashboard scan time | How long to extract key performance data from the main dashboard | Timed “what is happening?” test | <10 seconds for campaign status, <30 seconds for detailed metrics |
| Cross-platform workflow time | Time spent on tasks that span your platform and other tools | Observation + diary study tracking platform switches | Cross-platform tasks should be <2x single-platform equivalent |
| Error rate under pressure | Mistakes made during time-constrained tasks | Count errors during time-pressure scenarios | <5% of actions result in errors |
| Platform evaluation score | Overall assessment after 3-5 sessions of real usage | Post-evaluation interview + rating | 4+/5 recommend to team |

Frequently asked questions

How is adtech research different from martech research?

Martech research focuses on campaign creation, email automation, lead nurturing, and CRM workflows at a pace measured in hours and days. Adtech research focuses on programmatic trading, real-time bidding, and campaign optimization at a pace measured in seconds and minutes. The users are different (media buyers vs. marketing managers), the tools are different (DSPs vs. marketing automation), and the pressure is different (real-time financial decisions vs. campaign planning). Research methods require corresponding speed adjustments.

Can you test adtech platforms without live campaign data?

You can test with mock data, but it reduces validity. Media buyers immediately notice unrealistic data (CPMs that are too low, CTRs that are too high, audience sizes that do not match real inventory). If using mock data, calibrate it to realistic industry benchmarks. Better approach: recruit participants who will test with their own accounts (with data handling agreements), or partner with an agency that provides a sandbox with realistic data.

How do you test privacy and compliance workflows in adtech?

Include GDPR consent management, CCPA opt-out handling, and COPPA compliance checks in your usability testing. Give scenarios like: “A user exercises their right to data deletion under GDPR. Show me how you handle this in the platform.” Test whether compliance workflows are accessible during campaign setup or buried in settings. Interview about compliance burden: “How much time does privacy compliance add to your campaign workflow?”

How many participants do you need for adtech research?

5-8 per role per round for qualitative methods. Since adtech has distinct roles (traders, buyers, ad ops, analysts), a comprehensive study needs 15-25 participants across roles. For dashboard scan-path research, 8-10 participants with eye tracking. For competitive benchmarking, 5-8 per platform comparison.

Should you research agency traders and brand-side traders separately?

Yes. Agency traders manage multiple clients, switch between campaigns frequently, and prioritize speed and volume. Brand-side traders manage fewer campaigns in deeper detail, prioritize precision and reporting, and have closer relationships with the data. Their workflows, priorities, and evaluation criteria differ significantly. Mixing them produces averaged findings that apply to neither.