How to recruit mental health app users for research: ethical screening, IRB considerations, and incentive strategies

How to ethically recruit mental health app users for user research. Covers IRB requirements for vulnerable populations, PHQ-2/GAD-2 stability screening, non-stigmatizing outreach, channels for reaching therapy app users, and compassionate exclusion protocols.


Recruiting mental health app users for research is unlike any other recruitment challenge in UX. Your participants may be managing depression, anxiety, PTSD, eating disorders, or substance use. A poorly designed recruitment process does not just produce bad data. It can stigmatize potential participants, attract people who are too vulnerable to participate safely, or exclude the very people whose perspective you need most.

Standard recruitment tactics (cold outreach, broad demographic filters, generic screener surveys) fail here for three reasons. First, mental health app users are protective of their privacy and will not respond to outreach that feels like it targets them based on a condition. Second, ethical and often legal obligations require you to screen for participant stability, which adds complexity standard recruitment does not have. Third, IRB oversight may be required, which means your recruitment materials, screener, and consent process must meet institutional review standards before you contact a single participant.

This guide covers how to recruit mental health app users ethically, effectively, and in compliance with IRB expectations, from crafting non-stigmatizing outreach to implementing stability screening that protects participants without excluding the perspectives you need.

For broader context on conducting research with mental health app users (methods, trauma-informed principles, HIPAA, crisis protocols), see our mental health app research guide. For the cross-industry trauma-informed methodology, see our trauma-informed research guide. For HIPAA compliance specifics, see our HIPAA-compliant research guide.

Key takeaways

  • Recruit by behavior (“people who use mental health apps”), not by diagnosis (“people with depression”). This is more ethical, more practical, and produces a larger, more relevant participant pool
  • Stability screening using validated instruments (PHQ-2, GAD-2) is essential to ensure participants can engage safely. Screening protects participants, not just your data quality
  • IRB review is strongly recommended for any study recruiting based on mental health conditions, even for commercial research. IRB approval legitimizes your research and protects your organization
  • Compassionate exclusion is a skill. How you decline a participant who does not meet stability thresholds matters as much as who you include
  • Pay immediately and unconditionally. Tying incentives to session completion for vulnerable populations is coercive. State “full incentive regardless of completion” in every recruitment message

IRB considerations for mental health recruitment

When IRB review is needed

| Scenario | IRB recommended? | Why |
| --- | --- | --- |
| Recruiting based on mental health app usage (“Do you use a therapy app?”) | Recommended | Implied health condition, though not diagnostic |
| Recruiting based on mental health diagnosis (“People with depression”) | Strongly recommended | Vulnerable population, direct health condition targeting |
| Recruiting general users who happen to discuss mental health during sessions | Not typically required | Mental health disclosure is incidental, not recruitment basis |
| Research involving participants under 18 with mental health conditions | Required | Minors + vulnerable population = highest protection tier |
| Research conducted in partnership with a university or healthcare provider | Required | Institutional policy mandates IRB for all human subjects research |
| Findings will be published in academic journals | Required | Publication requires IRB documentation |

What IRB expects for mental health recruitment

Recruitment materials review. IRBs will review your recruitment posts, emails, screener surveys, and consent forms before you can use them. Submit all materials as part of your IRB application. Key requirements:

  • No coercive language (“You MUST participate” or “Don’t miss this opportunity”)
  • Clear statement of voluntariness and the right to decline without consequence
  • No targeting based solely on diagnosis without justification
  • Benefits and risks clearly stated

Vulnerable population justification. IRBs require you to justify why people with mental health conditions must be included in the study rather than using a non-vulnerable proxy. Your justification: “The study requires participants who use mental health apps because their direct experience with the product informs design decisions that affect their wellbeing. A non-user proxy cannot provide this perspective.”

Risk minimization plan. Document how you will minimize risks to participants:

  • Stability screening protocol (PHQ-2/GAD-2 thresholds)
  • Crisis protocol (what to do if a participant becomes distressed)
  • Researcher training in trauma-informed facilitation
  • Emotional check-ins during sessions
  • Post-session follow-up and resource provision
  • Right to withdraw at any time with full incentive

Equitable selection. IRBs check that you are not over-recruiting from populations that are easy to access (e.g., university students) while under-recruiting from populations your product actually serves. Demonstrate that your recruitment channels reach diverse demographics, income levels, and geographic locations.

IRB timeline

Plan for 4-8 weeks for IRB review, longer if revisions are needed. Start the IRB process before you need participants, not when you are ready to start testing. Many IRBs offer expedited review for minimal-risk research, which most usability studies qualify for.

Commercial research without formal IRB

If your organization does not have an IRB and the research is purely commercial (not academic, not partnered with a university), you may not be legally required to obtain IRB approval. In this case, follow IRB-equivalent standards: use the same consent forms, screening protocols, and risk minimization plans that an IRB would require. This protects participants and protects your organization from ethical and legal risk. Consider engaging a commercial IRB (WCG, Advarra, Sterling) for independent review, which typically costs $1,000-3,000 and takes 2-4 weeks.

How to screen for participant stability

Stability screening ensures participants can engage in research safely without exacerbating their condition. This is not about excluding people with mental health conditions. It is about ensuring the research experience is safe for them at this specific time.

The PHQ-2 and GAD-2 approach

The PHQ-2 (Patient Health Questionnaire, 2-item) and GAD-2 (Generalized Anxiety Disorder, 2-item) are validated ultra-brief screening instruments used widely in clinical settings.

PHQ-2 items:

  1. “Over the past two weeks, how often have you been bothered by little interest or pleasure in doing things?” (Not at all = 0, Several days = 1, More than half the days = 2, Nearly every day = 3)
  2. “Over the past two weeks, how often have you been bothered by feeling down, depressed, or hopeless?” (Same scale)

GAD-2 items:

  1. “Over the past two weeks, how often have you been bothered by feeling nervous, anxious, or on edge?” (Same scale)
  2. “Over the past two weeks, how often have you been bothered by not being able to stop or control worrying?” (Same scale)

Scoring and thresholds:

| Score (PHQ-2 or GAD-2) | Interpretation | Recruitment action |
| --- | --- | --- |
| 0-2 | Below clinical threshold | Include in study |
| 3-4 | Mild to moderate symptoms | Include with enhanced monitoring (additional check-ins, researcher trained in trauma-informed methods) |
| 5-6 | Moderate to severe symptoms | Exclude with compassionate messaging and resource provision |

Important: These are screening tools, not diagnostic instruments. A score of 5 does not mean the participant has a clinical condition. It means they are experiencing enough distress that research participation may not be safe for them right now.
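The thresholds in the table above are simple to automate in a screener tool. The sketch below is illustrative only, not a clinical library: the response labels mirror the PHQ-2/GAD-2 answer options quoted earlier, and the action strings are this example's own naming.

```python
# Illustrative sketch: score one ultra-brief instrument (PHQ-2 or GAD-2)
# and map the 0-6 total to the recruitment actions in the table above.
# Response labels and action strings are this example's own conventions.

RESPONSE_POINTS = {
    "not at all": 0,
    "several days": 1,
    "more than half the days": 2,
    "nearly every day": 3,
}

def score_instrument(responses):
    """Sum the two item responses (each 0-3) for a PHQ-2 or GAD-2."""
    return sum(RESPONSE_POINTS[r.lower()] for r in responses)

def recruitment_action(score):
    """Map a 0-6 instrument score to a recruitment action."""
    if score <= 2:
        return "include"                            # below clinical threshold
    if score <= 4:
        return "include_with_enhanced_monitoring"   # mild to moderate
    return "exclude_with_resources"                 # moderate to severe
```

For example, responses of "Several days" and "Nearly every day" score 4, which routes to inclusion with enhanced monitoring.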

Compassionate exclusion protocol

How you exclude a participant matters as much as who you exclude. A poorly worded exclusion message can stigmatize, shame, or distress someone who was already vulnerable enough to flag on a screening instrument.

Exclusion message template:

Thank you so much for your interest in this study. We really appreciate you taking the time to share your experience. Based on your responses, we want to make sure this is the right time for you to participate in a study that discusses [topic]. Your wellbeing is our priority, and we would rather connect you with support resources than risk the study being a difficult experience.

Here are some resources that may be helpful:

  • 988 Suicide and Crisis Lifeline: call or text 988
  • SAMHSA National Helpline: 1-800-662-4357 (free, confidential, 24/7)
  • Crisis Text Line: text HOME to 741741
  • [BetterHelp / Talkspace link for accessible therapy]

We would love to include you in a future study. If you’d like, we can add you to our list and reach out when we have a study that might be a better fit. No pressure either way.

Thank you again for your willingness to help improve [product name].

What makes this work: No mention of “failing” the screener. No diagnostic language. Frames the decision as about timing, not about the person. Provides actionable resources. Leaves the door open for future participation.

Recruitment channels for mental health app users

Tier 1: Highest yield and lowest stigma

In-app recruitment (for existing products). If you have a live mental health app, recruit through the product itself. In-app banners shown after meaningful usage (not on first open) reach users with real context. This is the lowest-stigma channel because participants self-identify as users, not as people with a condition.

Wellness and self-care communities. Reddit r/selfcare, r/DecidingToBeBetter, r/meditation, wellness-focused Discord servers, and self-improvement communities. These attract people who use mental health tools without the stigma of condition-specific communities. Frame outreach around app improvement, not mental health research.

CleverX verified panels. Pre-screened participants filtered by app usage and behavioral criteria. Useful for reaching mental health professionals (therapists, counselors, psychiatrists) who evaluate apps from a clinical perspective.

Tier 2: Condition-adjacent communities

| Channel | Best for | Approach | Stigma risk |
| --- | --- | --- | --- |
| Reddit r/mentalhealth, r/therapy | Broad mental health app users | Post in allowed recruitment threads. Frame as “help improve an app,” not “we’re studying your condition” | Medium. Use behavioral framing |
| Reddit r/anxiety, r/depression | Condition-specific users | Only if your app specifically targets this condition. Very careful, empathetic framing | Higher. Extra sensitivity required |
| Support group organizations (NAMI, DBSA) | Users connected to formal support systems | Partner with the organization. They share recruitment, not you contacting members directly | Low if organization-mediated |
| Therapist and counselor referrals | Patients using therapy apps as part of treatment | Therapists share the opportunity with clients. Researcher never contacts patients directly | Low if clinician-mediated |
| University psychology participant pools | Students with mental health app experience | IRB-approved recruitment through established participant systems | Low. Institutional safeguards exist |

Tier 3: Professional evaluators

  • Mental health clinicians who recommend apps to clients can evaluate from a clinical UX perspective
  • Digital health product managers who work on competing products can provide competitive insights
  • Peer support specialists with lived experience who work in mental health services bridge both user and professional perspectives

Channels to avoid

  • Cold outreach based on mental health forum participation. Messaging someone because they posted in r/depression is invasive and potentially harmful
  • Targeting based on health-related ad data. Using ad platforms to target people based on mental health interests feels surveillant
  • Recruiting from crisis support channels. Never recruit from crisis hotlines, suicide prevention forums, or active support groups. These are safe spaces, not recruitment pools

Incentive strategies for mental health app research

Unconditional payment principle

Every recruitment message, consent form, and session script must state: “You will receive your full incentive regardless of whether you complete all tasks or choose to end early.” This is not just good practice. For vulnerable populations, conditional payment (requiring completion to receive incentive) is coercive and may be flagged by IRBs.

Incentive benchmarks

| Participant type | Session type | Rate | Payment timing |
| --- | --- | --- | --- |
| App users | 30-min moderated session | $75-125 | Immediately after session |
| App users | 2-week diary study | $150-250 total | Partial at midpoint, remainder at end (even if they withdraw early) |
| App users | 60-min co-design workshop | $100-200 | Immediately after workshop |
| Mental health clinicians | 30-min professional evaluation | $150-300 | Immediately after session |
| Lapsed users | 30-min churned user interview | $100-150 | Immediately (higher than active users due to re-engagement effort) |

Alternative incentives

Some mental health app users respond to value-aligned incentives alongside or instead of cash:

  • Donation to mental health organizations (NAMI, Crisis Text Line, Mental Health America) in the participant’s name
  • Premium app subscriptions to the mental health app being tested or a competitor
  • Therapy session credits (BetterHelp, Talkspace) for participants who want accessible mental health support
  • Self-care product gift cards (wellness apps, meditation subscriptions, journaling tools)

Always offer cash as the primary option. Value-aligned alternatives are supplements, not replacements, because assuming a participant wants therapy credits based on their mental health app usage is presumptuous.

Screening questionnaire for mental health app user research

Full screener (8 questions, under 3 minutes)

Section 1: Relevance (behavioral, not diagnostic)

  1. Do you currently use, or have you used within the past 6 months, a mental health or wellness app? Which one(s)? (Open text. Primary filter)
  2. How often do you use mental health or wellness app features? (Daily / Several times a week / Weekly / Monthly / Rarely)
  3. What do you primarily use the app for? (Multi-select: mood tracking, meditation, therapy sessions, journaling, crisis support, sleep, anxiety management, other)

Section 2: Comfort and fit

  4. This study involves discussing your experience using mental health apps. How comfortable are you discussing this topic with a researcher? (Very comfortable / Somewhat comfortable / Prefer to keep details private / Not comfortable at all)
    • Route “prefer to keep details private” to usability-only tasks (navigation, settings, UI evaluation)
    • Exclude “not comfortable at all” with compassionate messaging

Section 3: Wellbeing screening (PHQ-2 + GAD-2)

  5. Over the past two weeks, how often have you been bothered by little interest or pleasure in doing things? (Not at all / Several days / More than half the days / Nearly every day)
  6. Over the past two weeks, how often have you been bothered by feeling down, depressed, or hopeless? (Same scale)
  7. Over the past two weeks, how often have you been bothered by feeling nervous, anxious, or on edge? (Same scale)
  8. Over the past two weeks, how often have you been bothered by not being able to stop or control worrying? (Same scale)

Scoring and routing:

| PHQ-2 + GAD-2 combined score | Action |
| --- | --- |
| 0-4 | Include. Standard session protocol |
| 5-8 | Include with enhanced protocol: extra check-ins, trained researcher, post-session follow-up call |
| 9-12 | Exclude with compassionate messaging and resource provision |
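The full screener routing (comfort question first, then the combined wellbeing score) can be sketched as one function. This is an illustrative sketch under assumed naming; the answer strings match the comfort options in Section 2, and the return values are hypothetical routing labels, not a standard API.

```python
# Illustrative sketch of the full screener routing described above:
# the comfort question gates participation first, then the combined
# PHQ-2 + GAD-2 score (0-12) selects the session protocol.
# Return-value labels are assumptions for this example.

def route_participant(comfort, combined_score):
    """comfort: answer to the Section 2 question; combined_score: 0-12."""
    comfort = comfort.lower()
    if comfort == "not comfortable at all":
        return "exclude_compassionately"        # comfort gate, before scoring
    if combined_score >= 9:
        return "exclude_with_resources"         # 9-12: not safe right now
    protocol = "enhanced" if combined_score >= 5 else "standard"
    if comfort == "prefer to keep details private":
        return f"include_usability_only_{protocol}"   # navigation/UI tasks only
    return f"include_{protocol}"
```

Note that the comfort exclusion is checked before the score: someone unwilling to discuss the topic is declined regardless of wellbeing, which keeps the two gates independent.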

How to manage a mental health research panel

Wellbeing monitoring

Unlike standard research panels, mental health app user panels require ongoing wellbeing monitoring.

  • Pre-study wellbeing check. Before re-engaging a past participant for a new study, administer the PHQ-2/GAD-2 again. Someone who was stable 3 months ago may not be stable now
  • Participation frequency limits. Cap at one study per 2 months per participant. More frequent participation in emotionally demanding research creates cumulative burden
  • Opt-out monitoring. If a participant declines 2+ studies in a row after previously being active, check in: “We noticed you haven’t participated recently. No pressure at all, just wanted to make sure you’re doing well and that we haven’t done anything to make the experience negative”
  • Resource provision at every touchpoint. Include mental health resources in every communication, not just during sessions. Normalize resource-sharing so it does not signal concern
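Two of the monitoring rules above (the frequency cap and the pre-study recheck) are mechanical enough to encode in a panel tool. The sketch below is illustrative, not a real panel system: the 60-day gap approximates "one study per 2 months," and the function and field names are assumptions.

```python
# Illustrative sketch of the re-engagement rules above: a past participant
# may be invited to a new study only if (a) their last study was at least
# ~2 months ago and (b) their fresh PHQ-2/GAD-2 recheck is below the
# exclusion threshold. 60 days and all names are this example's assumptions.
from datetime import date, timedelta

STUDY_GAP = timedelta(days=60)   # one study per 2 months, approximated

def can_invite(last_study, recheck_combined_score, today=None):
    """last_study: date of most recent study, or None if never participated."""
    today = today or date.today()
    if last_study is not None and today - last_study < STUDY_GAP:
        return False                     # participation frequency cap
    return recheck_combined_score < 9    # re-administered PHQ-2/GAD-2 gate
```

The recheck score is required even for first-time invites, reflecting the rule that stability three months ago says nothing about stability today.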

Panel composition

Maintain diversity across:

  • Mental health app types used (meditation, therapy, mood tracking, crisis support, journaling)
  • Demographics (age, gender, ethnicity, income level, geographic location)
  • Severity level (wellness-oriented users, mild symptom management, moderate condition management)
  • App engagement level (daily users, weekly, monthly, lapsed)
  • Professional evaluators (clinicians, peer support specialists) alongside end users

Data protection

Mental health panel data is sensitive. Apply HIPAA-equivalent protections even when HIPAA does not formally apply:

  • Encrypt all panel data at rest and in transit
  • Limit access to the panel database to authorized researchers only
  • Do not store PHQ-2/GAD-2 scores alongside identifying information
  • Purge screening data after recruitment decisions are made
  • Provide a clear, easy process for participants to remove themselves from the panel permanently
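One way to satisfy "do not store PHQ-2/GAD-2 scores alongside identifying information" is to key scores by an opaque hash of the participant ID and keep that store separate from contact records. This is a minimal sketch of that idea, not a compliance recipe; the salt handling, store layout, and names are all assumptions for illustration.

```python
# Illustrative sketch: screening scores live in a separate store keyed by
# a salted hash of the participant ID, so the scores table contains no PII,
# and scores are purged once the recruitment decision is recorded.
# The salt, store layout, and function names are this example's assumptions.
import hashlib

SALT = b"rotate-me-per-study"   # assumption: per-study salt, kept out of both stores

def screen_key(participant_id):
    """Opaque key linking a score to a participant without storing the ID."""
    return hashlib.sha256(SALT + participant_id.encode()).hexdigest()

contacts = {}        # participant_id -> contact info (no scores here)
screen_scores = {}   # screen_key -> combined PHQ-2/GAD-2 score (no PII here)

def record_screen(participant_id, combined_score):
    screen_scores[screen_key(participant_id)] = combined_score

def purge_after_decision(participant_id):
    """Purge screening data once the recruitment decision is made."""
    screen_scores.pop(screen_key(participant_id), None)
```

Separating the two stores also makes the purge step trivial to audit: after recruitment closes, the scores table should simply be empty.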

Frequently asked questions

Can you recruit from mental health subreddits?

With extreme care. Post in subreddits that explicitly allow research recruitment (check each subreddit’s rules). Never DM individuals based on their posts about mental health experiences. Frame your recruitment as “help improve a mental health app” not “we’re studying people with [condition].” Some subreddits (r/depression, r/SuicideWatch) prohibit recruitment entirely. Respect these boundaries absolutely.

How do you recruit participants who have stopped using a mental health app?

Lapsed users are the most valuable and hardest-to-reach segment. Recruit through: your own churned user database (if they opted in to future contact), general wellness communities (people who tried and stopped using mental health apps), and broader recruitment channels filtered by past app usage. Offer higher incentives ($100-150 for 30 minutes vs. $75-125 for active users) because re-engagement requires more effort. Frame the outreach as: “We want to understand why the app didn’t work for you so we can make it better for everyone.”

What happens if a screened participant’s condition worsens during a longitudinal study?

Include a mid-study wellbeing check in your protocol. For diary studies or multi-session research, re-administer the PHQ-2/GAD-2 at the midpoint. If scores have increased significantly (moved from the 0-4 range to 9+), pause their participation: “We want to check in because your wellbeing is more important than this study. We’d like to pause your participation and reconnect when the timing feels right. You’ll receive your full incentive.” Provide crisis resources. Document the decision per your IRB protocol.

Does mental health research require a modified consent form?

Yes. Standard research consent forms require additions for mental health research: disclosure of potentially sensitive topics, explicit statement that emotional distress is possible and how it will be handled, crisis resources, the right to withdraw at any time with full incentive, and (if HIPAA applies) a HIPAA authorization section. If your study has IRB approval, the IRB will review and approve your consent form.

How do you handle a participant who lies on the stability screener to participate?

You cannot fully prevent this. Mitigate by: including the PHQ-2/GAD-2 within a broader screener (not labeled as “mental health screening”), training researchers to recognize distress during sessions regardless of screening results, and having a crisis protocol ready for every session. If a participant becomes distressed during a session despite passing the screener, follow your crisis protocol, not your research protocol. Their safety takes priority over data collection.