Higher education user research guide: methods for faculty, students, and institutional platforms

How to conduct user research for higher education technology. Includes a comparison table of faculty vs student research methods, LMS usability testing, FERPA compliance, academic calendar scheduling, and recruiting university participants.


Higher education technology serves two user populations that experience the same product completely differently. Students use the LMS to find assignments, submit work, check grades, and communicate with instructors. Faculty use the same LMS to create courses, design assessments, grade submissions, track plagiarism, manage rosters, and report outcomes. The interface may be identical, but the tasks, mental models, time pressures, and evaluation criteria are so different that researching them requires fundamentally different approaches.

This is the central challenge of higher ed UX research: the same platform must serve a 19-year-old student checking assignments on their phone between classes and a 55-year-old professor designing a 16-week course on their desktop. Research that studies only one group produces products that work for half the audience.

Research links intuitive edtech UX to 20-30% gains in student retention and course completion (EDUCAUSE). The stakes are high, the users are diverse, and the institutional procurement process adds layers of complexity that consumer and standard B2B research does not face.

For K-12 education research (COPPA, teacher gatekeepers, classroom observation), see our K-12 edtech research guide.

Key takeaways

  • Faculty and students require separate research tracks with different methods, different recruitment, and different metrics. The comparison table below maps the key differences
  • The academic calendar dictates everything. Research must align with semesters: early semester for onboarding research, mid-semester for workflow research, end-of-semester for assessment/grading research. Summer is available for faculty but not for student classroom observation
  • FERPA (not COPPA) is the primary compliance framework. Student education records are protected, and research involving grades, enrollment data, or academic performance requires FERPA-compliant protocols
  • Faculty are expert users who resist tools that do not respect their pedagogical expertise. Research must explore pedagogical fit, not just usability
  • Institutional procurement involves IT, academic affairs, and faculty governance. Research that only tests usability without addressing institutional requirements produces products that test well but never get purchased

Comparison table: faculty vs student research methods

| Dimension | Faculty research | Student research |
| --- | --- | --- |
| Primary research question | “Does this tool support my teaching and reduce my workload?” | “Can I complete my academic tasks quickly and easily?” |
| Best methods | Interviews (30-45 min), task-based usability testing on course creation/grading, diary studies across a semester | Usability testing (20-30 min), surveys at scale, analytics review, mobile interaction observation |
| Participant count | 5-8 per round (faculty are harder to recruit and more variable in their approaches) | 10-15 per round (students are easier to recruit but more variable in tech literacy) |
| Session length | 30-45 minutes (schedule around teaching and office hours) | 20-30 minutes (students have shorter attention for research, despite longer attention for coursework) |
| Recruitment channels | Faculty senate, department chairs, teaching and learning centers, academic technology committees | Student government, campus email lists, student worker pools, in-LMS recruitment banners |
| Incentive | $150-300/hr. Alternatives: teaching release time, professional development credit | $25-75 per session. Alternatives: dining credits, bookstore gift cards, extra credit (with IRB approval) |
| Key workflows to test | Course creation, syllabus building, assignment design, grading/feedback, plagiarism review, analytics dashboards, grade export | Assignment submission, grade viewing, content navigation, discussion participation, group project collaboration, mobile access |
| Cognitive context | Multitasking across teaching, research, service, and administrative duties; software is one of many demands | Juggling 4-6 courses simultaneously; each course may use different tools differently |
| Technology attitude | Ranges from early adopters to active resistors; often skeptical of tools that change established workflows | Generally comfortable with technology but frustrated by products that do not work on mobile or require excessive clicks |
| Success metric | Time saved on administrative tasks (grading, roster management, communication) | Time to complete academic tasks (find assignment, submit work, check grade) |
| Failure mode | Faculty abandon the tool and revert to email, paper, or an older system they know | Students miss assignments, cannot find information, or use workarounds (screenshotting grades, emailing instead of portal messaging) |
| FERPA considerations | Faculty handle student records; research observing grading or roster management involves FERPA-protected data | A student’s own records are theirs to share, but research involving grades requires FERPA consent |
| Calendar constraints | Available during office hours, summer, and sabbaticals; unavailable during finals, grading periods, and the first week of classes | Available during the semester; unavailable during finals, midterms, and breaks |
| Research environment | Faculty office or remote (they work alone); classroom observation for teaching interactions | Library, dorm, student center, or remote (they work everywhere on every device) |

How to research faculty workflows

The pedagogical fit challenge

Faculty do not evaluate edtech by usability alone. They evaluate by pedagogical fit: does this tool support the way I teach? A beautifully designed LMS that forces a specific pedagogical approach (e.g., linear module progression) will be rejected by faculty who teach through discussion, project-based learning, or flipped classroom methods.

Research questions for pedagogical fit:

  • “Walk me through how you structured your last course. How does this tool support or conflict with that structure?”
  • “If you could design the perfect tool for your teaching approach, what would it do differently?”
  • “Have you ever abandoned a technology because it did not fit how you teach? What happened?”

Faculty usability testing scenarios

| Scenario | What it tests | Key metric |
| --- | --- | --- |
| “Create a new course shell and set up the first week of content” | Course creation workflow, content upload, organization | Time to complete, errors, satisfaction |
| “Create an assignment with a rubric, due date, and submission type” | Assignment builder, rubric tool, settings comprehension | Time, rubric accuracy, settings confusion rate |
| “Grade 10 student submissions and provide feedback on each” | Grading workflow, inline feedback, rubric application, bulk actions | Time per submission, feedback quality, fatigue indicators |
| “Check which students have not submitted the assignment and send them a reminder” | Analytics/roster integration, communication tools | Steps to identify non-submitters, message creation time |
| “Export final grades to your institution’s student information system” | Grade export, SIS integration, format compatibility | Export success, data accuracy, error handling |
| “Set up a discussion board with specific participation requirements” | Discussion configuration, participation tracking, moderation tools | Setup time, rubric integration, monitoring usability |

The “grading marathon” test

Grading is the faculty workflow with the highest time investment and the most frustration. Test grading with realistic volume: 25-30 submissions, not 3-5. Faculty who grade 3 test submissions report the experience is “fine.” Faculty who grade 25 reveal the repetitive-action friction, the feedback-entry pain, and the cognitive fatigue that the interface creates at scale.

How to research student experiences

Student research adaptations

Mobile-first testing. 67% of college students primarily access their LMS on mobile devices (EDUCAUSE 2024). If you only test on desktop, you miss the majority experience. Test the same tasks on both mobile and desktop and compare completion rates and satisfaction.
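With 10-15 students per round, completion-rate gaps between mobile and desktop can be sanity-checked with a two-proportion z-test. A stdlib-only sketch using hypothetical counts (for samples this small, Fisher's exact test is the more conservative choice):

```python
from math import sqrt, erf

def completion_gap_p_value(success_a, n_a, success_b, n_b):
    """Two-sided two-proportion z-test.

    Tests whether two observed task-completion rates (e.g. mobile vs
    desktop) plausibly come from a single underlying rate.
    """
    p_a, p_b = success_a / n_a, success_b / n_b
    pooled = (success_a + success_b) / (n_a + n_b)          # rate under the null
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))  # standard error
    z = (p_a - p_b) / se
    normal_cdf = lambda x: 0.5 * (1 + erf(x / sqrt(2)))
    return 2 * (1 - normal_cdf(abs(z)))

# Hypothetical: 9/15 completed the task on mobile, 14/15 on desktop
p = completion_gap_p_value(9, 15, 14, 15)  # ~0.03: gap unlikely to be noise
```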

Multi-course context. Students juggle 4-6 courses, each potentially using the LMS differently. Research must account for this: “You have 5 courses. Show me how you check what is due this week across all of them.” This cross-course navigation is where many LMS interfaces fail.

Peer influence on adoption. Students adopt tools that their peers use. Research how students learn about and share tool tips: “Did anyone show you a faster way to do this? What was it?” Peer-discovered workarounds reveal both product gaps and organic adoption patterns.

Student usability testing scenarios

ScenarioWhat it testsKey metric
”Find what assignments are due this week across all your courses”Cross-course dashboard, notification system, calendar integrationTime to find, accuracy, mobile vs desktop comparison
”Submit your essay for [course]. Include the file and any required comments”Submission workflow, file upload, confirmation claritySteps to complete, error rate, “did it submit?” confidence
”Check your grade on the last exam and read the instructor’s feedback”Grade access, feedback visibility, grade calculation comprehensionTime to find grade, time to find feedback, comprehension of scoring
”Post a response to the discussion board and reply to two classmates”Discussion interface, reply threading, formatting toolsTime, willingness to engage (do they write more or less than required?), mobile usability
”Find the recording of the lecture you missed last Tuesday”Content organization, search, media playbackTime to find, playback quality, navigation efficiency
”Set up notifications so you know when a grade is posted”Notification settings, channel preferences, customizationCan they find settings? Do they configure successfully?

The “3am submission” scenario

Test the student experience during the moments that matter most: late-night deadline submissions. Give students a scenario: “It is 11:45pm and your assignment is due at midnight. Submit it now.” Observe: does the interface support speed under pressure? What happens if the upload fails? Is the confirmation clear enough that a stressed student at midnight knows it worked?

FERPA compliance for higher ed research

When FERPA applies

FERPA (Family Educational Rights and Privacy Act) protects student education records. Unlike COPPA (which applies to children under 13), FERPA applies to all students at institutions receiving federal funding, regardless of age.

| Scenario | FERPA applies? |
| --- | --- |
| Observing a student use the LMS with their own data visible | Yes, if you can see their grades, enrollment, or academic records |
| Testing an LMS prototype with synthetic student data | No (no real student records involved) |
| Interviewing faculty about their grading workflow (no student names mentioned) | Generally no (general workflow discussion) |
| Analyzing LMS usage analytics that include student identifiers | Yes (student records) |
| Surveying students about their LMS experience (no grade data collected) | Depends: if the survey is linked to identifiable students and asks about academic performance, FERPA may apply |

FERPA-compliant research practices

  • Use synthetic student data in all prototypes (fictional names, grades, submissions)
  • If observing real LMS use, obtain student consent for any screen where their records are visible
  • De-identify any analytics data before analysis (remove student names, IDs)
  • Work with the institution’s registrar or FERPA compliance officer to review your protocol
  • Store any education record data on encrypted, access-controlled systems
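For the de-identification step, one common approach is keyed pseudonymization: replace each student identifier with an HMAC so the research team can still link records from the same student without ever seeing the real ID. A minimal sketch (field names are hypothetical; confirm the approach with your institution's FERPA compliance officer):

```python
import hashlib
import hmac

def pseudonymize(student_id: str, key: bytes) -> str:
    """Map a real student ID to a stable, irreversible pseudonym."""
    return hmac.new(key, student_id.encode(), hashlib.sha256).hexdigest()[:16]

# The key stays with the institution and is never shipped alongside the data
KEY = b"institution-held-secret-rotated-per-study"

raw = [
    {"student_id": "S1024", "logins": 42},  # hypothetical analytics rows
    {"student_id": "S2048", "logins": 7},
]
deidentified = [
    {"pid": pseudonymize(r["student_id"], KEY), "logins": r["logins"]}
    for r in raw
]  # real IDs removed; per-student linkage preserved via "pid"
```

Because the HMAC is keyed, the pseudonyms cannot be reversed by anyone who lacks the institution-held key, unlike a plain hash of a short, guessable ID.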

How to navigate institutional procurement for research

The institutional stakeholder map

| Stakeholder | Their role | What they evaluate | How to engage for research |
| --- | --- | --- | --- |
| Chief Information Officer (CIO) / IT | Technical infrastructure, security, integration | Architecture, SSO, LTI, API, data security | Technical evaluation sessions, integration testing |
| Provost / Academic Affairs | Academic quality, faculty support | Pedagogical alignment, faculty adoption, learning outcomes | Faculty research findings, outcome data |
| Faculty governance / Academic senate | Faculty voice in technology decisions | Whether faculty were consulted, pedagogical flexibility | Include faculty in research, share findings with governance |
| Institutional Research (IR) | Data and analytics | Whether the tool produces usable analytics for accreditation and reporting | Analytics dashboard testing, data export evaluation |
| Student affairs / Student government | Student voice and experience | Whether students find the tool usable and helpful | Student research findings, student satisfaction data |
| Procurement / Purchasing | Cost, contract terms, vendor risk | Pricing, compliance, vendor stability | Not directly involved in UX research, but research findings inform their decision |

Research that supports procurement

Institutional procurement decisions take 6-18 months and involve multiple stakeholders. Research findings that address each stakeholder’s concerns accelerate the decision:

  • For IT: “The product integrates with [institution’s SSO] in [time] with [complexity level]”
  • For faculty: “Faculty completed [key task] in [time], a [X%] improvement over the current system”
  • For students: “Students rated the mobile experience [score], compared to [score] for the current LMS”
  • For IR: “The analytics dashboard provides [specific data] that supports [accreditation requirement]”

How to recruit higher ed participants

Faculty recruitment

| Channel | Approach | Yield |
| --- | --- | --- |
| Teaching and learning centers | Partner with the center director; they know tech-engaged faculty across departments | High quality, pre-filtered for technology interest |
| Department chairs | Ask chairs to forward recruitment to their faculty | Broad reach within specific departments |
| Faculty senate / governance | Present at a senate meeting: “We want faculty input on [product]” | Builds legitimacy and reaches faculty who care about governance |
| Academic technology committees | Members are already engaged with edtech evaluation | Pre-qualified participants with strong opinions |
| CleverX verified panels | Pre-screened faculty and academic professionals filtered by institution type and LMS experience | Fast recruitment across institutions |
| Your own user base | In-product recruitment banner for faculty users | Highest relevance |

Faculty incentive benchmarks:

| Study type | Rate | Alternative incentives |
| --- | --- | --- |
| 30-min interview | $150-250 | Professional development credit, conference registration |
| 45-min usability test | $200-300 | Teaching release time (coordinate with department) |
| Semester-long diary study | $300-500 total | Course design consultation, premium product access |
| Classroom observation + debrief | $150-250 | Technology mentorship, featured case study |

Student recruitment

| Channel | Approach | Yield |
| --- | --- | --- |
| Campus email / LMS announcements | “Help improve [product]. 20-min session, $[incentive]” | Broad reach, self-selection |
| Student government partnership | SGA distributes through their channels | Trusted source, engaged students |
| Student worker pools | Work-study students in IT, library, or academic support | Available during work hours, familiar with campus technology |
| Class announcements (with instructor permission) | Brief announcement in large lecture courses | High volume, diverse demographics |
| Social media / campus apps | Instagram, campus-specific apps | Reaches students where they spend time |

Student incentive benchmarks:

| Study type | Rate | Alternative incentives |
| --- | --- | --- |
| 20-min usability test | $25-50 | Dining credits, bookstore gift card |
| 30-min interview | $40-75 | Coffee shop gift card, campus store credit |
| 2-week diary study | $75-150 total | Bookstore gift card, technology accessory |
| Focus group (60 min) | $50-75 | Pizza + gift card (students appreciate food) |
| Extra credit | IRB must approve | Must offer an alternative assignment of equal effort; extra credit alone is coercive |

Administrator / IT recruitment

  • Through institutional partnerships already established for faculty/student research
  • Higher ed technology organizations: EDUCAUSE, Internet2, regional consortia
  • LinkedIn: “CIO” + “university” or “Director of Academic Technology”
  • Incentive: $200-400/hr (institutional leaders have scarce time)

Academic calendar scheduling

| Period | Faculty availability | Student availability | Best for |
| --- | --- | --- | --- |
| August-September | Settling in, low availability | Available after first 2 weeks | Student onboarding research |
| October | Established routine | Available (before midterms) | Mid-semester workflow research |
| November | Pre-finals prep begins | Available early Nov, declining late Nov | Deadline: finish student research before Thanksgiving |
| December | Finals and grading: unavailable | Finals: unavailable | Avoid entirely |
| January | Spring planning, moderate availability | Available after first week | New semester onboarding, course setup research |
| February-March | Peak teaching rhythm | Available | Best overall window for both populations |
| April | End-of-semester crunch begins | Available early April | Last chance before year-end |
| May | Finals and grading: unavailable | Finals: unavailable | Avoid entirely |
| June-August | Available (no teaching); best for extended faculty research | Limited availability (summer enrollment lower) | Faculty-focused research, prototype testing, planning for fall |
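Teams that plan studies programmatically can encode the calendar above as a simple month lookup. The mapping below is an illustrative default only, since semester boundaries shift by institution:

```python
# Month -> default research-planning note, derived from the calendar above.
# Semester boundaries vary by institution; treat these as starting defaults.
RESEARCH_WINDOWS = {
    1:  "New-semester onboarding and course-setup research",
    2:  "Best overall window for faculty and students",
    3:  "Best overall window for faculty and students",
    4:  "Last chance before year-end (students: early April only)",
    5:  "Avoid entirely: finals and grading",
    6:  "Faculty-focused research and prototype testing",
    7:  "Faculty-focused research and prototype testing",
    8:  "Student onboarding research (after the first two weeks)",
    9:  "Student onboarding research",
    10: "Mid-semester workflow research",
    11: "Finish student research before Thanksgiving",
    12: "Avoid entirely: finals and grading",
}

def planning_note(month: int) -> str:
    """Return the default research focus for a given month (1-12)."""
    return RESEARCH_WINDOWS[month]
```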

Higher education UX metrics

| Metric | What it measures | Faculty target | Student target |
| --- | --- | --- | --- |
| Course creation time | How long to set up a new course from scratch | <2 hours for basic course shell | N/A |
| Assignment creation time | How long to create, configure, and publish an assignment | <10 minutes per assignment | N/A |
| Grading time per submission | How long to grade one student submission with feedback | <5 minutes for standard assignments | N/A |
| Assignment submission time | How long to find, complete, and submit an assignment | N/A | <3 minutes |
| Grade lookup time | How long to find a specific grade and understand it | N/A | <30 seconds |
| Cross-course navigation | Can users see all courses and deadlines in one view? | Dashboard shows all taught courses | Dashboard shows all enrolled courses |
| Mobile task completion | Can critical tasks be completed on mobile? | Grading on mobile (stretch goal) | All student tasks on mobile (mandatory) |
| System Usability Scale (SUS) | Overall usability perception | >68 (current higher ed average is 50-60) | >72 |
| Platform adoption rate | What percentage of available features are used? | >40% of features within first semester | >60% of features within first month |
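The SUS targets use the standard 0-100 scoring. Given ten 1-5 Likert responses per participant, the conventional rule is: odd-numbered items contribute `response - 1`, even-numbered items contribute `5 - response`, and the sum is multiplied by 2.5:

```python
def sus_score(responses):
    """Standard System Usability Scale score from ten 1-5 Likert items.

    Odd-numbered items are positively worded (contribute response - 1);
    even-numbered items are negatively worded (contribute 5 - response).
    The 0-40 total is scaled by 2.5 onto a 0-100 scale.
    """
    if len(responses) != 10:
        raise ValueError("SUS requires exactly 10 item responses")
    total = sum(
        (r - 1) if i % 2 == 0 else (5 - r)  # i is 0-based: even index = odd item
        for i, r in enumerate(responses)
    )
    return total * 2.5

print(sus_score([4, 2, 4, 2, 4, 2, 4, 2, 4, 2]))  # 75.0
```

Averaging per-participant scores across a round gives the number to compare against the >68 faculty and >72 student targets.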

Frequently asked questions

How is higher ed research different from K-12 research?

Four key differences. First, no COPPA (students are adults, FERPA applies instead). Second, faculty are expert users with strong pedagogical opinions, not teachers following a district-mandated curriculum. Third, students are self-directed learners who choose how and when to engage, not children in a structured classroom. Fourth, institutional procurement is driven by faculty governance and IT, not district administration. See our K-12 guide for the K-12 approach.

Do you need IRB approval for higher ed UX research?

Universities have IRBs that review research involving their community members. If you are a product company (not affiliated with the university), you may not need their IRB approval for standard UX testing with synthetic data. However, if you partner with the university for recruitment or if the research involves student records (FERPA), the university’s IRB will likely want to review. Ask your institutional partner’s research compliance office early in the planning process.

Should you test the student experience and faculty experience in the same study?

Never in the same sessions, but ideally in the same study. Test faculty and students separately (different tasks, different methods, different participants), then synthesize findings together to identify where faculty and student needs align and where they conflict. The conflicts are where the most important product decisions live.

How do you research accessibility in higher ed technology?

Higher education institutions must comply with Section 508 and ADA for accessibility. Include participants who use assistive technology (screen readers, magnification, voice input) in both your faculty and student research tracks. Test with WCAG 2.1 AA as the minimum standard. Common higher ed accessibility findings: PDF content is inaccessible, video content lacks captions, and mobile interfaces do not work with screen readers.

What is the most common higher ed UX research finding?

The grading workflow is universally painful. Faculty spend 5-15 minutes per student submission on grading and feedback in most LMS platforms, and much of that time is interface friction (navigation, switching between rubric and submission, saving feedback, moving to the next student) rather than actual assessment work. Improving the grading workflow has the highest ROI for faculty adoption and satisfaction.