User research for mental health apps: trauma-informed methods, HIPAA compliance, and ethical research design
How to conduct user research for mental health apps. Covers trauma-informed research methods, HIPAA requirements for mental health research, crisis protocols, ethical consent for vulnerable populations, and recruiting participants with mental health conditions.
User research for mental health apps carries ethical weight that few other product categories demand. Your participants may be experiencing depression, anxiety, PTSD, suicidal ideation, or substance use disorders. A poorly designed research session does not just produce bad data. It can re-traumatize a participant, trigger a crisis, or violate federal health privacy law.
This is not a reason to avoid research. It is a reason to do it correctly. Mental health apps that ship without rigorous user research produce products that feel clinical rather than supportive, miss the emotional nuances that drive engagement and drop-off, and risk causing harm to the very people they intend to help.
This guide covers how product and UX teams conduct effective, ethical user research for mental health apps, integrating trauma-informed principles, HIPAA compliance, and methods adapted for participants with mental health conditions.
Frequently asked questions
What is trauma-informed user research?
Trauma-informed user research is a research approach that recognizes participants may have experienced trauma and designs every aspect of the research process, from recruitment to session facilitation to data handling, to prevent re-traumatization. It follows five principles: safety (physical and emotional), trustworthiness (clear expectations, no surprises), choice (participants control their level of engagement), collaboration (participants are partners, not subjects), and empowerment (the research process builds rather than diminishes participant agency). These principles apply to all participants, not just those who have disclosed trauma, because you cannot know each participant’s history.
Does HIPAA apply to mental health app user research?
HIPAA applies when your research involves protected health information (PHI) from a covered entity (healthcare provider, health plan, or healthcare clearinghouse) or their business associates. If your mental health app collects, stores, or processes PHI (therapy session notes, diagnosis information, medication data, treatment plans), research involving that data requires HIPAA compliance. If your app collects general wellness data (mood tracking, meditation usage, journaling) without connecting to healthcare providers, HIPAA may not apply directly, but following HIPAA standards protects participants and your organization regardless. When in doubt, consult with your legal team and treat participant health data as PHI.
How do you handle a participant crisis during a research session?
Have a crisis protocol documented before any session begins. If a participant becomes distressed, shows signs of a mental health crisis, or expresses suicidal thoughts: (1) Pause the research immediately. (2) Acknowledge their distress with empathy, not clinical language: “I can see this is difficult. We can stop anytime.” (3) If they express active suicidal ideation, provide the 988 Suicide and Crisis Lifeline number (call or text 988) and stay with them until they connect or confirm they are safe. (4) Do not attempt to provide therapy or counseling. You are a researcher, not a clinician. (5) Follow up within 24 hours to check on their wellbeing. (6) Document the incident per your IRB or ethics protocol. Every researcher on your team must be trained on this protocol before conducting any session.
Do you need IRB approval for mental health app research?
More likely than for other product research. If your research involves participants recruited based on a mental health condition (depression, anxiety, PTSD), or if you are collecting health-related data, or if findings will be published academically, IRB review is strongly recommended even for commercial research. Mental health app users are considered a vulnerable population under federal research regulations. Consult your legal team. When IRB review is not formally required, follow IRB-equivalent ethical standards: informed consent, risk minimization, participant right to withdraw, and data protection.
What research methods work best for mental health apps?
Mixed methods with trauma-informed adaptations. User interviews for understanding emotional experiences, coping strategies, and app engagement patterns. Usability testing for therapeutic workflows, crisis feature accessibility, and onboarding sensitivity. Diary studies for tracking mood, engagement, and app usage patterns over time. Surveys for measuring satisfaction, perceived efficacy, and feature priorities at scale. All methods require trauma-informed adaptations: emotional check-ins, opt-out at any point, avoidance of triggering task design, and post-session debriefs.
How do you recruit participants with mental health conditions ethically?
Never recruit based solely on a diagnosis. Recruit based on app usage (“currently use a mental health or wellness app”) or behavior (“manage stress or anxiety as part of daily routine”) rather than clinical labels. Screen with care: use validated wellbeing measures (PHQ-2 for depression screening, GAD-2 for anxiety) to ensure participants are stable enough to participate safely, not to diagnose. Exclude anyone in active crisis. Provide clear information about what the research involves, including potentially sensitive topics, so participants can make an informed decision before committing.
Key takeaways
- Trauma-informed principles (safety, trustworthiness, choice, collaboration, empowerment) must guide every research decision, from recruitment messaging to session facilitation to data storage
- HIPAA compliance is required when research involves PHI. Even when not legally required, following HIPAA-equivalent standards protects participants and your organization
- Every research session needs a documented crisis protocol. Researchers must be trained to recognize distress and respond appropriately without providing clinical intervention
- Recruit based on behavior and app usage, not clinical diagnoses. Screen for stability, not symptoms
- Emotional check-ins at the start, middle, and end of every session are mandatory, not optional. They protect participants and produce more honest data
How to apply trauma-informed principles to research design
The five principles in practice
| Principle | What it means | How to implement in research |
|---|---|---|
| Safety | Participants feel physically and emotionally safe throughout the process | Conduct sessions in participant’s preferred environment (home, remote). Allow cameras-off option. Provide topics in advance so nothing is surprising. Have crisis protocol ready |
| Trustworthiness | Expectations are clear, consistent, and honored | Explain exactly what will happen before the session. Do not change the protocol mid-session. Follow through on every promise (incentive timing, data deletion, anonymization) |
| Choice | Participants control their engagement level | Every question is optional. “You can skip anything that feels uncomfortable” must be stated at the start and reinforced throughout. Offer multiple participation formats (video, audio-only, text-based) |
| Collaboration | Participants are partners, not subjects | Frame the research as “we’re building this together” rather than “we’re studying you.” Share findings with participants. Invite their interpretation of the data |
| Empowerment | The research process builds participant agency | Acknowledge their expertise in their own experience. Avoid language that positions them as “patients” or “sufferers.” Compensate fairly. Express genuine gratitude |
Session design adaptations
Before the session:
- Send a detailed overview of topics and activities 48+ hours before the session. No surprises
- Include a content warning if any task involves discussing difficult experiences
- Offer a pre-session call to answer questions and build rapport before the formal session
- Confirm the participant’s preferred name, pronouns, and communication preferences
During the session:
- Start with an emotional check-in: “How are you feeling right now? Is there anything I should know before we begin?”
- State the opt-out clearly: “At any point, you can skip a question, take a break, or end the session entirely. You’ll still receive your full incentive”
- Monitor for distress signals: voice changes, long pauses, visible discomfort, deflection, or rapid topic changes
- Mid-session check-in (after 15-20 minutes): “How are you doing? Would you like to continue, take a break, or stop?”
- If a participant becomes emotional, pause. Do not redirect to the next task. Acknowledge: “Thank you for sharing that. Take whatever time you need”
After the session:
- End with a closing check-in: “How are you feeling now compared to when we started?”
- Provide mental health resources regardless of whether distress occurred (normalize resource-sharing, do not make it contingent on crisis)
- Send a follow-up message within 24 hours thanking them and confirming incentive delivery
- If any distress occurred during the session, follow up with a phone call, not just a text/email
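The before/during/after checklists above can be captured as a reusable moderator guide so no mandatory element gets skipped session to session. A minimal sketch in Python; the class names, field names, and readiness rules are assumptions drawn from this section's checklists, not a standard instrument:

```python
from dataclasses import dataclass, field

@dataclass
class CheckIn:
    """A scripted emotional check-in the moderator reads verbatim."""
    when: str      # "start", "mid-session", or "close"
    prompt: str

@dataclass
class SessionProtocol:
    """Trauma-informed moderator guide for a single research session."""
    topics_sent_hours_ahead: int
    opt_out_statement: str
    check_ins: list = field(default_factory=list)
    follow_up_within_hours: int = 24

# Example values mirror the checklists above; adapt per study.
protocol = SessionProtocol(
    topics_sent_hours_ahead=48,
    opt_out_statement=(
        "At any point, you can skip a question, take a break, or end "
        "the session entirely. You'll still receive your full incentive."
    ),
    check_ins=[
        CheckIn("start", "How are you feeling right now? Is there "
                         "anything I should know before we begin?"),
        CheckIn("mid-session", "How are you doing? Would you like to "
                               "continue, take a break, or stop?"),
        CheckIn("close", "How are you feeling now compared to when "
                         "we started?"),
    ],
)

def is_ready(p: SessionProtocol) -> bool:
    """A session runs only when every mandatory element is present."""
    stages = {c.when for c in p.check_ins}
    return (
        p.topics_sent_hours_ahead >= 48
        and bool(p.opt_out_statement)
        and {"start", "mid-session", "close"} <= stages
    )
```

Treating the protocol as data also makes it easy to review with a clinician and to audit after an incident.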
HIPAA compliance for mental health app research
When HIPAA applies
| Scenario | HIPAA applies? | Why |
|---|---|---|
| Research uses data from a therapy app connected to healthcare providers | Yes | App is a business associate handling PHI |
| Research uses de-identified mood tracking data from a wellness app | Likely no | General wellness data without healthcare provider connection |
| Research involves participants recruited through a healthcare provider | Yes | PHI used for recruitment |
| Usability testing with mock therapy data, no real patient information | No | No PHI involved |
| Diary study where participants log real therapy experiences | Potentially | Participant-generated health information may constitute PHI depending on context |
HIPAA requirements for research
When HIPAA applies, your research must include:
Administrative safeguards:
- Designated privacy officer responsible for research data handling
- Staff training on PHI handling for every researcher
- Business Associate Agreements (BAAs) with any third-party tools used in research (video conferencing, transcription, analysis platforms)
- Documented policies for data access, storage, and destruction
Technical safeguards:
- Encryption for all data at rest and in transit
- Access controls (only authorized researchers can access participant data)
- Audit trails documenting who accessed what data and when
- Secure, HIPAA-compliant platforms for video recording and data storage
Participant protections:
- HIPAA-compliant consent forms that detail how PHI will be used, who will access it, and participant rights
- De-identification of data per 45 CFR 164.514 (remove 18 identifiers or use expert determination)
- Minimum necessary standard: collect only the PHI needed for the research purpose
- Right to revoke authorization at any time, with clear process for data deletion
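As a concrete illustration of the de-identification requirement, the sketch below redacts a few Safe Harbor identifier classes from free-text data such as interview transcripts. This is a partial example only: 45 CFR 164.514(b)(2) lists 18 identifier categories, and names, geographic subdivisions, and the remaining categories need their own handling (or use the expert-determination pathway). The regex patterns are simplified assumptions, not a compliance tool:

```python
import re

# Partial Safe Harbor illustration: a few of the 18 identifier classes.
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
    "SSN":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    # Safe Harbor requires removing dates more specific than the year.
    "DATE":  re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"),
}

def redact(text: str) -> str:
    """Replace matched identifiers with bracketed placeholders."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text
```

Automated redaction like this is a first pass at best; a human review of every transcript before analysis is still required when HIPAA applies.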
Practical approach
Use HIPAA-compliant tools for the entire research workflow:
| Research activity | HIPAA-compliant option | What to avoid |
|---|---|---|
| Video sessions | Zoom for Healthcare, Doxy.me, Microsoft Teams (with BAA) | Standard Zoom without BAA, Google Meet without BAA |
| Transcription | Rev (with BAA), Otter (with BAA), manual transcription | Free transcription tools without BAAs |
| Data storage | Encrypted cloud storage with BAA (AWS, Azure with HIPAA config) | Google Drive without BAA, Dropbox without BAA |
| Analysis | Local encrypted analysis, HIPAA-compliant research platforms | Dovetail or other tools without BAAs (check each vendor) |
| Survey collection | Qualtrics (with BAA), REDCap | Google Forms, Typeform (no BAA available) |
Which research methods work for mental health apps?
| Method | Best for | Trauma-informed adaptation | HIPAA consideration |
|---|---|---|---|
| User interviews | Understanding emotional experiences, coping strategies, engagement motivations | Emotional check-ins, opt-out at any question, avoid probing into specific trauma details unless participant volunteers | If discussing treatment experiences, PHI may be shared. Use HIPAA-compliant recording |
| Usability testing | Testing therapeutic workflows, crisis features, onboarding | Use mock scenarios, not real therapy situations. Test crisis features with participants who are not in crisis | Use mock data in prototypes. Do not test with real therapy records |
| Diary studies | Tracking mood, app engagement, and coping behavior over time | Daily emotional check-ins. Clear instructions on what to share vs. what to keep private. Exit plan if participant’s condition changes | Diary entries about mental health experiences may constitute PHI. Use HIPAA-compliant diary platforms |
| Surveys | Measuring satisfaction, perceived efficacy, feature priorities | Include validated wellbeing measures (PHQ-2, GAD-2) to monitor participant wellbeing, not just product metrics | Anonymize responses. Use HIPAA-compliant survey tools if collecting health data |
| Co-design workshops | Involving people with lived experience in design decisions | Participants as co-researchers, not subjects. Share power in the design process | Workshop discussions about mental health experiences may involve PHI. Establish ground rules about privacy |
| Contextual inquiry | Observing how users interact with the app in their real environment | Only if participant invites observation. Never observe therapy sessions or crisis moments | Real-environment observation may expose PHI. Strict protocols on what the observer can see |
Methods to avoid or adapt carefully
Focus groups. Standard focus groups require participants to share mental health experiences in front of strangers. This creates confidentiality risks and social pressure that can be harmful. If group research is needed, use small groups (3-4 participants), establish strict confidentiality agreements, and make every sharing prompt optional.
Unmoderated testing. Remote unmoderated sessions lack a moderator who can recognize and respond to participant distress. If using unmoderated methods, limit tasks to non-sensitive features (navigation, settings, content browsing) and exclude anything that involves emotional content, crisis features, or therapeutic workflows.
How to test sensitive mental health app features
Crisis feature testing
Crisis features (suicide hotline access, crisis text line integration, safety planning tools) are the highest-stakes components of any mental health app. They must be tested, but testing them requires extreme care.
Who to test with: Clinicians, crisis counselors, and UX professionals. Not users who are currently in crisis or have recent crisis history. Clinicians can evaluate whether the feature would work in a real crisis scenario based on their professional experience.
What to test:
- Can users find the crisis feature in under 10 seconds from any screen?
- Does the feature work without an internet connection?
- Is the language calming and non-clinical?
- Does the feature connect to a real resource (988 Lifeline, Crisis Text Line)?
- What happens if the user accidentally triggers the crisis feature?
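The "reachable from any screen" criterion above is measured with real moderated sessions, but the structural prerequisite can be checked before testing ever starts: model the app's navigation as a graph and verify the crisis feature sits within a small tap depth of every screen. A sketch under assumptions; the `NAV` graph, screen names, and two-tap budget are hypothetical stand-ins for your app's actual navigation map:

```python
from collections import deque

# Hypothetical navigation graph: screen -> screens reachable in one tap.
NAV = {
    "home": ["journal", "meditate", "settings", "crisis"],
    "journal": ["home", "crisis"],
    "meditate": ["home"],
    "settings": ["home"],
    "crisis": [],
}

def taps_to(graph: dict, start: str, target: str) -> int:
    """Minimum taps from start to target (breadth-first search); -1 if unreachable."""
    seen, queue = {start}, deque([(start, 0)])
    while queue:
        screen, taps = queue.popleft()
        if screen == target:
            return taps
        for nxt in graph[screen]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, taps + 1))
    return -1

def crisis_reachable_everywhere(graph: dict, max_taps: int = 2) -> bool:
    """Every screen can reach the crisis feature within max_taps."""
    return all(
        0 <= taps_to(graph, screen, "crisis") <= max_taps
        for screen in graph
    )
```

A graph check like this catches screens that strand the user before any participant is asked to find the feature under time pressure.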
What not to test with real users: Do not simulate a crisis scenario with participants who have mental health conditions. “Imagine you are feeling suicidal, what would you do?” is not an acceptable usability task. Test crisis flows with clinical professionals using scenario walkthroughs instead.
Therapeutic content testing
Testing content like guided meditations, CBT exercises, or journaling prompts:
- Test whether the content is understandable (comprehension) and engaging (willingness to continue)
- Test whether the tone feels supportive without being patronizing
- Test whether the pacing matches the user’s emotional state (not too fast for someone in distress, not too slow for someone who is doing well)
- Ask: “How did this exercise make you feel?” not “Did you complete the exercise successfully?”
Onboarding testing
Mental health app onboarding is uniquely sensitive because it often involves questions about mental health history, current symptoms, or treatment goals.
What to test:
- Does the onboarding feel safe and non-judgmental?
- Is it clear why each question is being asked and how the data will be used?
- Can users skip sensitive questions without being penalized (reduced functionality, guilt-inducing messaging)?
- Does the app set appropriate expectations about what it can and cannot do? (It is a wellness tool, not a replacement for therapy)
How to recruit participants for mental health app research
Recruitment principles
Recruit by behavior, not diagnosis. “People who use mental health or wellness apps” not “people with depression.” This is both more ethical (avoids labeling) and more practical (larger pool, self-selected relevance).
Screen for stability, not symptoms. Use the PHQ-2 (2-item depression screener) and GAD-2 (2-item anxiety screener) to ensure participants are stable enough to engage safely. These are screening tools, not diagnostic instruments. Anyone scoring above clinical thresholds should be excluded from the study with a gentle, non-stigmatizing explanation and provided with mental health resources.
Provide full transparency before commitment. Before participants agree, share: exactly what topics will be discussed, whether any content might be emotionally challenging, how their data will be protected, and that they can withdraw at any time without losing their incentive.
Where to find participants
- Mental health app user communities. Reddit r/mentalhealth, r/therapy, r/anxiety, r/depression (recruit through allowed channels, not direct DMs)
- Wellness and self-care communities. Broader audience, lower recruitment friction
- CleverX verified B2B panels. Pre-screened participants with demographic and behavioral filters. Useful for reaching mental health professionals (therapists, counselors, psychiatrists) who evaluate apps professionally
- Therapist and counselor networks. Clinicians who recommend apps to clients can participate as professional evaluators
- University psychology departments. Access to participant pools with established ethical review processes
Incentive considerations
| Participant type | Rate range | Notes |
|---|---|---|
| App users (30-min session) | $75-125 | Pay regardless of completion. State this explicitly |
| App users (2-week diary study) | $150-250 total | Check in regularly. Provide exit option at every check-in |
| Mental health clinicians | $150-300/hr | Higher because they evaluate from clinical perspective |
| Participants with lived experience (co-design) | $100-200 per workshop | Acknowledge their expertise. Value their time equally to clinicians |
Payment timing: Pay immediately. Delayed payment for vulnerable participants feels exploitative. If your payment system requires processing time, communicate the timeline explicitly during consent.
Screening questions
- Do you currently use or have you recently used (within 6 months) a mental health or wellness app? Which one? (Open text. Filters for relevance)
- How often do you use mental health or wellness features on your phone? (Daily / Weekly / Monthly / Rarely)
- How would you describe your comfort level discussing your experience with mental health apps? (Very comfortable / Somewhat comfortable / Prefer not to discuss personal details. Route “prefer not to” participants to usability-only tasks, not emotional experience tasks)
- Over the past two weeks, how often have you been bothered by feeling down, depressed, or hopeless? (PHQ-2 item 1: Not at all / Several days / More than half the days / Nearly every day)
- Over the past two weeks, how often have you been bothered by little interest or pleasure in doing things? (PHQ-2 item 2)
Participants scoring 3+ on the PHQ-2 should be excluded with care: “Thank you for your interest. Based on your responses, we want to make sure this study is a good fit for you right now. We’d like to share some resources that might be helpful: [988 Lifeline, SAMHSA helpline, BetterHelp/Talkspace link]. We would love to include you in a future study.”
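The screener logic above (standard 0-3 scoring per PHQ-2 item, exclusion at a total of 3+, and comfort-based routing) can be encoded directly so every applicant is handled consistently. A minimal sketch; the routing labels (`exclude_with_resources`, `usability_only`, `full_study`) are hypothetical names for the outcomes described in this section:

```python
# Standard 0-3 scoring scale for each PHQ-2 response option.
PHQ2_SCALE = {
    "Not at all": 0,
    "Several days": 1,
    "More than half the days": 2,
    "Nearly every day": 3,
}

def phq2_score(item1: str, item2: str) -> int:
    """Total PHQ-2 score across the two items (range 0-6)."""
    return PHQ2_SCALE[item1] + PHQ2_SCALE[item2]

def route_applicant(item1: str, item2: str, comfort: str) -> str:
    """Route a screener applicant per the criteria described above.

    - "exclude_with_resources": PHQ-2 total of 3+; decline gently
      and share mental health resources
    - "usability_only": prefers not to discuss personal details;
      route to usability-only tasks
    - "full_study": eligible for emotional-experience sessions
    """
    if phq2_score(item1, item2) >= 3:
        return "exclude_with_resources"
    if comfort == "Prefer not to discuss personal details":
        return "usability_only"
    return "full_study"
```

Remember that the PHQ-2 is a screening aid, not a diagnostic instrument: the exclusion branch should trigger the resource-sharing message above, never a clinical label.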
Frequently asked questions (continued)
How do you handle data from participants who disclose suicidal ideation during a session?
Follow your crisis protocol immediately: pause the research, acknowledge their distress, provide the 988 Suicide and Crisis Lifeline number, and stay with them until they connect or confirm they are safe. After the session, document the incident. Regarding the data: do not include the disclosure in your research findings unless the participant explicitly consents to its use for safety feature improvement. The participant’s safety takes absolute priority over data collection.
Can you test mental health apps with minors?
Only with additional safeguards. Parental or guardian consent is required in addition to the minor’s assent. For mental health topics, the sensitivity is heightened: parents may not know about their child’s mental health concerns, creating a confidentiality conflict. Consult with your IRB and a child psychologist before designing research with minors. Many mental health app teams test with adult users aged 18-25 as a proxy for the teen experience, though this introduces validity limitations.
How do you measure the effectiveness of a mental health app through research?
Distinguish between usability (can users navigate and use the app?) and efficacy (does the app improve mental health outcomes?). Usability research uses standard UX methods with trauma-informed adaptations. Efficacy research requires clinical study design (randomized controlled trials, validated outcome measures like PHQ-9, GAD-7), IRB approval, and often partnership with clinical researchers. Product teams typically focus on usability and engagement research, while efficacy studies are conducted separately with clinical research partners.
Should your research team include a clinician?
Strongly recommended for any research that involves participants with mental health conditions. A clinician (therapist, psychologist, or psychiatric nurse) can: review your research protocol for safety risks, advise on appropriate screening thresholds, serve as a clinical backup during sessions (available by phone if a crisis occurs), and interpret findings through a clinical lens. They do not need to be present in every session, but they should be involved in protocol design and available during sessions.
How do you avoid the “clinical language” problem in mental health UX research?
Mental health apps must feel supportive, not clinical. Research the language gap: test the same feature with clinical framing (“cognitive behavioral therapy exercise”) and accessible framing (“thought challenge activity”) and compare engagement, comprehension, and emotional response. Interview participants about the words they use to describe their experiences, and use those words in your product, not diagnostic terminology. “Feeling overwhelmed” resonates more than “experiencing acute anxiety symptoms.”