Higher education user research guide: methods for faculty, students, and institutional platforms
How to conduct user research for higher education technology. Includes a comparison table of faculty vs student research methods, LMS usability testing, FERPA compliance, academic calendar scheduling, and recruiting university participants.
Higher education technology serves two user populations that experience the same product completely differently. Students use the LMS to find assignments, submit work, check grades, and communicate with instructors. Faculty use the same LMS to create courses, design assessments, grade submissions, track plagiarism, manage rosters, and report outcomes. The interface may be identical, but the tasks, mental models, time pressures, and evaluation criteria are so different that researching them requires fundamentally different approaches.
This is the central challenge of higher ed UX research: the same platform must serve a 19-year-old student checking assignments on their phone between classes and a 55-year-old professor designing a 16-week course on their desktop. Research that studies only one group produces products that work for half the audience.
Research links intuitive edtech UX to 20-30% gains in student retention and course completion (EDUCAUSE). The stakes are high, the users are diverse, and the institutional procurement process adds layers of complexity that consumer and standard B2B research does not face.
For K-12 education research (COPPA, teacher gatekeepers, classroom observation), see our K-12 edtech research guide.
Key takeaways
- Faculty and students require separate research tracks with different methods, different recruitment, and different metrics. The comparison table below maps the key differences
- The academic calendar dictates everything. Research must align with semesters: early semester for onboarding research, mid-semester for workflow research, end-of-semester for assessment/grading research. Summer is available for faculty but not for student classroom observation
- FERPA (not COPPA) is the primary compliance framework. Student education records are protected, and research involving grades, enrollment data, or academic performance requires FERPA-compliant protocols
- Faculty are expert users who resist tools that do not respect their pedagogical expertise. Research must explore pedagogical fit, not just usability
- Institutional procurement involves IT, academic affairs, and faculty governance. Research that only tests usability without addressing institutional requirements produces products that test well but never get purchased
Comparison table: faculty vs student research methods
| Dimension | Faculty research | Student research |
|---|---|---|
| Primary research question | “Does this tool support my teaching and reduce my workload?” | “Can I complete my academic tasks quickly and easily?” |
| Best methods | Interviews (30-45 min), task-based usability testing on course creation/grading, diary studies across a semester | Usability testing (20-30 min), surveys at scale, analytics review, mobile interaction observation |
| Participant count | 5-8 per round (faculty are harder to recruit and more variable in their approaches) | 10-15 per round (students are easier to recruit but more variable in tech literacy) |
| Session length | 30-45 minutes (schedule around teaching and office hours) | 20-30 minutes (students have shorter attention for research, despite longer attention for coursework) |
| Recruitment channel | Faculty senate, department chairs, teaching and learning centers, academic technology committees | Student government, campus email lists, student worker pools, in-LMS recruitment banners |
| Incentive | $150-300/hr for faculty. Alternative: teaching release time, professional development credit | $25-75 for students. Alternative: dining credits, bookstore gift cards, extra credit (with IRB approval) |
| Key workflows to test | Course creation, syllabus building, assignment design, grading/feedback, plagiarism review, analytics dashboards, grade export | Assignment submission, grade viewing, content navigation, discussion participation, group project collaboration, mobile access |
| Cognitive context | Multitasking across teaching, research, service, and administrative duties. Software is one of many demands | Juggling 4-6 courses simultaneously. Each course may use different tools differently |
| Technology attitude | Ranges from early adopters to active resistors. Often skeptical of tools that change their established workflow | Generally comfortable with technology but frustrated by products that do not work on mobile or require excessive clicks |
| Success metric | Time saved on administrative tasks (grading, roster management, communication) | Time to complete academic tasks (find assignment, submit work, check grade) |
| Failure mode | Faculty abandon the tool and revert to email, paper, or an older system they know | Students miss assignments, cannot find information, or use workarounds (screenshot grades, email instead of portal messaging) |
| FERPA consideration | Faculty handle student records. Research observing grading or roster management involves FERPA-protected data | Students’ own records are theirs to share, but research involving grades requires FERPA consent |
| Calendar constraint | Available during office hours, summer, and sabbaticals. Unavailable during finals, grading periods, and the first week of classes | Available during the semester. Unavailable during finals, midterms, and breaks |
| Research environment | Faculty office or remote (they work alone). Classroom observation for teaching interactions | Library, dorm, student center, or remote (they work everywhere on every device) |
How to research faculty workflows
The pedagogical fit challenge
Faculty do not evaluate edtech by usability alone. They evaluate by pedagogical fit: does this tool support the way I teach? A beautifully designed LMS that forces a specific pedagogical approach (e.g., linear module progression) will be rejected by faculty who teach through discussion, project-based learning, or flipped classroom methods.
Research questions for pedagogical fit:
- “Walk me through how you structured your last course. How does this tool support or conflict with that structure?”
- “If you could design the perfect tool for your teaching approach, what would it do differently?”
- “Have you ever abandoned a technology because it did not fit how you teach? What happened?”
Faculty usability testing scenarios
| Scenario | What it tests | Key metric |
|---|---|---|
| “Create a new course shell and set up the first week of content” | Course creation workflow, content upload, organization | Time to complete, errors, satisfaction |
| “Create an assignment with a rubric, due date, and submission type” | Assignment builder, rubric tool, settings comprehension | Time, rubric accuracy, settings confusion rate |
| “Grade 10 student submissions and provide feedback on each” | Grading workflow, inline feedback, rubric application, bulk actions | Time per submission, feedback quality, fatigue indicators |
| “Check which students have not submitted the assignment and send them a reminder” | Analytics/roster integration, communication tools | Steps to identify non-submitters, message creation time |
| “Export final grades to your institution’s student information system” | Grade export, SIS integration, format compatibility | Export success, data accuracy, error handling |
| “Set up a discussion board with specific participation requirements” | Discussion configuration, participation tracking, moderation tools | Setup time, rubric integration, monitoring usability |
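To make the “Key metric” column actionable, it helps to capture every session in a consistent structure so results are comparable across scenarios and participants. Below is a minimal sketch in Python; the field names and sample values are illustrative, not drawn from any specific research tool.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical structure for logging one usability session against the
# scenarios above; fields mirror the "Key metric" column, names are illustrative.
@dataclass
class ScenarioResult:
    participant_id: str              # pseudonymous ID, never a real name
    scenario: str                    # e.g. "Create an assignment with a rubric"
    completed: bool
    time_seconds: float
    error_count: int
    satisfaction_1_to_5: Optional[int] = None
    notes: str = ""

results = [
    ScenarioResult("F-03", "Grade 10 student submissions", True, 1480.0, 2, 3),
    ScenarioResult("F-04", "Grade 10 student submissions", False, 2100.0, 5, 2,
                   "Abandoned after the rubric failed to save"),
]

completion_rate = sum(r.completed for r in results) / len(results)
print(f"Completion rate: {completion_rate:.0%}")
```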
The “grading marathon” test
Grading is the faculty workflow with the highest time investment and the most frustration. Test grading with realistic volume: 25-30 submissions, not 3-5. Faculty who grade 3 test submissions report the experience is “fine.” Faculty who grade 25 reveal the repetitive-action friction, the feedback-entry pain, and the cognitive fatigue that the interface creates at scale.
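If you log the time spent on each submission in order, you can quantify the fatigue signal rather than relying on impressions. A minimal sketch, with illustrative timings; in practice, pull the numbers from screen recordings or moderator notes.

```python
# Minimal sketch: compare early vs late grading speed from a grading-marathon
# session. One entry per submission, in grading order (illustrative values).
times_seconds = [212, 198, 240, 225, 251, 260, 274, 268, 290, 301,
                 295, 310, 322, 318, 330, 341, 335, 352, 360, 371,
                 368, 380, 395, 402, 410]

first_five = sum(times_seconds[:5]) / 5
last_five = sum(times_seconds[-5:]) / 5
drift = (last_five - first_five) / first_five

print(f"First 5 submissions: {first_five:.0f}s avg")
print(f"Last 5 submissions:  {last_five:.0f}s avg")
print(f"Slowdown across the session: {drift:.0%}")
# A large positive drift suggests interface friction compounds with volume,
# which short 3-5 submission tests never surface.
```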
How to research student experiences
Student research adaptations
Mobile-first testing. 67% of college students primarily access their LMS on mobile devices (EDUCAUSE 2024). If you only test on desktop, you miss the majority experience. Test the same tasks on both mobile and desktop and compare completion rates and satisfaction.
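When the same tasks run on both devices, a simple two-by-two comparison of completions versus failures shows whether the mobile gap is real or noise. A sketch assuming SciPy is available; the counts below are illustrative.

```python
from scipy.stats import fisher_exact

# Illustrative counts: task completions vs failures on each device type.
mobile = {"completed": 18, "failed": 12}
desktop = {"completed": 27, "failed": 3}

table = [[mobile["completed"], mobile["failed"]],
         [desktop["completed"], desktop["failed"]]]

odds_ratio, p_value = fisher_exact(table)

mobile_rate = mobile["completed"] / sum(mobile.values())
desktop_rate = desktop["completed"] / sum(desktop.values())
print(f"Mobile completion:  {mobile_rate:.0%}")
print(f"Desktop completion: {desktop_rate:.0%}")
print(f"Fisher's exact p-value: {p_value:.3f}")
```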
Multi-course context. Students juggle 4-6 courses, each potentially using the LMS differently. Research must account for this: “You have 5 courses. Show me how you check what is due this week across all of them.” This cross-course navigation is where many LMS interfaces fail.
Peer influence on adoption. Students adopt tools that their peers use. Research how students learn about and share tool tips: “Did anyone show you a faster way to do this? What was it?” Peer-discovered workarounds reveal both product gaps and organic adoption patterns.
Student usability testing scenarios
| Scenario | What it tests | Key metric |
|---|---|---|
| “Find what assignments are due this week across all your courses” | Cross-course dashboard, notification system, calendar integration | Time to find, accuracy, mobile vs desktop comparison |
| “Submit your essay for [course]. Include the file and any required comments” | Submission workflow, file upload, confirmation clarity | Steps to complete, error rate, “did it submit?” confidence |
| “Check your grade on the last exam and read the instructor’s feedback” | Grade access, feedback visibility, grade calculation comprehension | Time to find grade, time to find feedback, comprehension of scoring |
| “Post a response to the discussion board and reply to two classmates” | Discussion interface, reply threading, formatting tools | Time, willingness to engage (do they write more or less than required?), mobile usability |
| “Find the recording of the lecture you missed last Tuesday” | Content organization, search, media playback | Time to find, playback quality, navigation efficiency |
| “Set up notifications so you know when a grade is posted” | Notification settings, channel preferences, customization | Can they find settings? Do they configure successfully? |
The “3am submission” scenario
Test the student experience during the moments that matter most: late-night deadline submissions. Give students a scenario: “It is 11:45pm and your assignment is due at midnight. Submit it now.” Observe: does the interface support speed under pressure? What happens if the upload fails? Is the confirmation clear enough that a stressed student at midnight knows it worked?
FERPA compliance for higher ed research
When FERPA applies
FERPA (Family Educational Rights and Privacy Act) protects student education records. Unlike COPPA (which applies to children under 13), FERPA applies to all students at institutions receiving federal funding, regardless of age.
| Scenario | FERPA applies? |
|---|---|
| Observing a student use the LMS with their own data visible | Yes if you can see their grades, enrollment, or academic records |
| Testing an LMS prototype with synthetic student data | No (no real student records involved) |
| Interviewing faculty about their grading workflow (no student names mentioned) | Generally no (general workflow discussion) |
| Analyzing LMS usage analytics that include student identifiers | Yes (student records) |
| Surveying students about their LMS experience (no grade data collected) | Depends: if the survey is linked to identifiable students and asks about academic performance, FERPA may apply |
FERPA-compliant research practices
- Use synthetic student data in all prototypes (fictional names, grades, submissions)
- If observing real LMS use, obtain student consent for any screen where their records are visible
- De-identify any analytics data before analysis (remove student names and IDs); see the sketch after this list
- Work with the institution’s registrar or FERPA compliance officer to review your protocol
- Store any education record data on encrypted, access-controlled systems
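For the de-identification step, a small script that drops names and replaces student IDs with salted hashes is usually enough for analysis work. A sketch assuming the analytics export is a CSV; the student_name and student_id column names are hypothetical, so adjust them to your export.

```python
import csv
import hashlib

# Sketch of de-identifying LMS analytics before analysis. Column names are
# hypothetical; keep the salt out of version control and rotate it per study.
SALT = "rotate-this-per-study"

def pseudonymize(student_id: str) -> str:
    """Replace a real student ID with a stable, non-reversible pseudonym."""
    return hashlib.sha256((SALT + student_id).encode()).hexdigest()[:12]

with open("lms_analytics_raw.csv", newline="") as src, \
     open("lms_analytics_deidentified.csv", "w", newline="") as dst:
    reader = csv.DictReader(src)
    fields = [f for f in reader.fieldnames if f != "student_name"]  # drop names entirely
    writer = csv.DictWriter(dst, fieldnames=fields)
    writer.writeheader()
    for row in reader:
        row.pop("student_name", None)
        row["student_id"] = pseudonymize(row["student_id"])
        writer.writerow(row)
```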
How to navigate institutional procurement for research
The institutional stakeholder map
| Stakeholder | Their role | What they evaluate | How to engage for research |
|---|---|---|---|
| Chief Information Officer (CIO) / IT | Technical infrastructure, security, integration | Architecture, SSO, LTI, API, data security | Technical evaluation sessions, integration testing |
| Provost / Academic Affairs | Academic quality, faculty support | Pedagogical alignment, faculty adoption, learning outcomes | Faculty research findings, outcome data |
| Faculty governance / Academic senate | Faculty voice in technology decisions | Whether faculty were consulted, pedagogical flexibility | Include faculty in research, share findings with governance |
| Institutional Research (IR) | Data and analytics | Whether the tool produces usable analytics for accreditation and reporting | Analytics dashboard testing, data export evaluation |
| Student affairs / Student government | Student voice and experience | Whether students find the tool usable and helpful | Student research findings, student satisfaction data |
| Procurement / Purchasing | Cost, contract terms, vendor risk | Pricing, compliance, vendor stability | Not directly involved in UX research, but research findings inform their decision |
Research that supports procurement
Institutional procurement decisions take 6-18 months and involve multiple stakeholders. Research findings that address each stakeholder’s concerns accelerate the decision:
- For IT: “The product integrates with [institution’s SSO] in [time] with [complexity level]”
- For faculty: “Faculty completed [key task] in [time], a [X%] improvement over the current system”
- For students: “Students rated the mobile experience [score], compared to [score] for the current LMS”
- For IR: “The analytics dashboard provides [specific data] that supports [accreditation requirement]”
How to recruit higher ed participants
Faculty recruitment
| Channel | Approach | Yield |
|---|---|---|
| Teaching and learning centers | Partner with the center director. They know tech-engaged faculty across departments | High quality, pre-filtered for technology interest |
| Department chairs | Ask chairs to forward recruitment to their faculty | Broad reach within specific departments |
| Faculty senate / governance | Present at a senate meeting: “We want faculty input on [product]” | Builds legitimacy and reaches faculty who care about governance |
| Academic technology committees | Members are already engaged with edtech evaluation | Pre-qualified participants with strong opinions |
| CleverX verified panels | Pre-screened faculty and academic professionals filtered by institution type and LMS experience | Fast recruitment across institutions |
| Your own user base | In-product recruitment banner for faculty users | Highest relevance |
Faculty incentive benchmarks:
| Study type | Rate | Alternative incentives |
|---|---|---|
| 30-min interview | $150-250 | Professional development credit, conference registration |
| 45-min usability test | $200-300 | Teaching release time (coordinate with department) |
| Semester-long diary study | $300-500 total | Course design consultation, premium product access |
| Classroom observation + debrief | $150-250 | Technology mentorship, featured case study |
Student recruitment
| Channel | Approach | Yield |
|---|---|---|
| Campus email / LMS announcements | “Help improve [product]. 20-min session, $[incentive]” | Broad reach, self-selection |
| Student government partnership | SGA distributes through their channels | Trusted source, engaged students |
| Student worker pools | Work-study students in IT, library, or academic support | Available during work hours, familiar with campus technology |
| Class announcements (with instructor permission) | Brief announcement in large lecture courses | High volume, diverse demographics |
| Social media / campus apps | Instagram, campus-specific apps | Reaches students where they spend time |
Student incentive benchmarks:
| Study type | Rate | Alternative incentives |
|---|---|---|
| 20-min usability test | $25-50 | Dining credits, bookstore gift card |
| 30-min interview | $40-75 | Coffee shop gift card, campus store credit |
| 2-week diary study | $75-150 total | Bookstore gift card, technology accessory |
| Focus group (60 min) | $50-75 | Pizza + gift card (students appreciate food) |
| Extra credit | IRB must approve | Must offer alternative assignment of equal effort. Extra credit alone is coercive |
Administrator / IT recruitment
- Through institutional partnerships already established for faculty/student research
- Higher ed technology organizations: EDUCAUSE, Internet2, regional consortia
- LinkedIn: “CIO” + “university” or “Director of Academic Technology”
- Incentive: $200-400/hr (institutional leaders have scarce time)
Academic calendar scheduling
| Period | Faculty availability | Student availability | Best for |
|---|---|---|---|
| August-September | Settling in, low availability | Available after first 2 weeks | Student onboarding research |
| October | Established routine | Available (before midterms) | Mid-semester workflow research |
| November | Pre-finals prep begins | Available early Nov, declining late Nov | Deadline: finish student research before Thanksgiving |
| December | Finals and grading: unavailable | Finals: unavailable | Avoid entirely |
| January | Spring planning, moderate availability | Available after first week | New semester onboarding, course setup research |
| February-March | Peak teaching rhythm | Available | Best overall window for both populations |
| April | End-of-semester crunch begins | Available early April | Last chance before year-end |
| May | Finals and grading: unavailable | Finals: unavailable | Avoid entirely |
| June-August | Available (no teaching). Best for extended faculty research | Limited availability (summer enrollment lower) | Faculty-focused research, prototype testing, planning for fall |
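Because the blackout windows repeat every year, it is worth encoding them once and checking proposed session dates against them. A sketch with illustrative dates; pull the real ones from each institution's academic calendar.

```python
from datetime import date

# Sketch of a scheduling guard based on the calendar table above.
# Blackout windows below are illustrative, not any specific institution's.
BLACKOUTS = [
    (date(2025, 11, 24), date(2025, 11, 28), "Thanksgiving break"),
    (date(2025, 12, 8),  date(2025, 12, 19), "Fall finals and grading"),
    (date(2026, 5, 4),   date(2026, 5, 15),  "Spring finals and grading"),
]

def check_session_date(proposed: date) -> str:
    """Return a warning if the proposed session falls in a blackout window."""
    for start, end, label in BLACKOUTS:
        if start <= proposed <= end:
            return f"Avoid: {label}"
    return "OK to schedule"

print(check_session_date(date(2025, 12, 10)))  # "Avoid: Fall finals and grading"
print(check_session_date(date(2026, 2, 12)))   # "OK to schedule"
```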
Higher education UX metrics
| Metric | What it measures | Faculty target | Student target |
|---|---|---|---|
| Course creation time | How long to set up a new course from scratch | <2 hours for basic course shell | N/A |
| Assignment creation time | How long to create, configure, and publish an assignment | <10 minutes per assignment | N/A |
| Grading time per submission | How long to grade one student submission with feedback | <5 minutes for standard assignments | N/A |
| Assignment submission time | How long to find, complete, and submit an assignment | N/A | <3 minutes |
| Grade lookup time | How long to find a specific grade and understand it | N/A | <30 seconds |
| Cross-course navigation | Can users see all courses and deadlines in one view? | Dashboard shows all taught courses | Dashboard shows all enrolled courses |
| Mobile task completion | Can critical tasks be completed on mobile? | Grading on mobile (stretch goal) | All student tasks on mobile (mandatory) |
| System Usability Scale (SUS) | Overall usability perception (scoring sketch after this table) | >68 (current higher ed average is 50-60) | >72 |
| Platform adoption rate | What percentage of available features are used? | >40% of features within first semester | >60% of features within first month |
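For the SUS row, scoring follows the standard formula: odd-numbered items contribute (response minus 1), even-numbered items contribute (5 minus response), and the sum is multiplied by 2.5 to give a 0-100 score. A short Python helper; the sample responses are illustrative.

```python
# Standard SUS scoring (0-100). Responses use a 1-5 scale
# (1 = strongly disagree, 5 = strongly agree).
def sus_score(responses: list[int]) -> float:
    if len(responses) != 10:
        raise ValueError("SUS requires exactly 10 item responses")
    total = 0
    for i, r in enumerate(responses, start=1):
        total += (r - 1) if i % 2 == 1 else (5 - r)
    return total * 2.5

faculty_responses = [4, 2, 4, 3, 4, 2, 3, 2, 4, 3]  # one participant, illustrative
print(sus_score(faculty_responses))  # 67.5
```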
Frequently asked questions
How is higher ed research different from K-12 research?
Four key differences. First, COPPA does not apply (college students are over 13; FERPA governs their education records instead). Second, faculty are expert users with strong pedagogical opinions, not teachers following a district-mandated curriculum. Third, students are self-directed learners who choose how and when to engage, not children in a structured classroom. Fourth, institutional procurement is driven by faculty governance and IT, not district administration. See our K-12 guide for the K-12 approach.
Do you need IRB approval for higher ed UX research?
Universities have IRBs that review research involving their community members. If you are a product company (not affiliated with the university), you may not need their IRB approval for standard UX testing with synthetic data. However, if you partner with the university for recruitment or if the research involves student records (FERPA), the university’s IRB will likely want to review. Ask your institutional partner’s research compliance office early in the planning process.
Should you test the student experience and faculty experience in the same study?
Never in the same sessions, but ideally in the same study. Test faculty and students separately (different tasks, different methods, different participants), then synthesize findings together to identify where faculty and student needs align and where they conflict. The conflicts are where the most important product decisions live.
How do you research accessibility in higher ed technology?
Higher education institutions must comply with Section 508 and ADA for accessibility. Include participants who use assistive technology (screen readers, magnification, voice input) in both your faculty and student research tracks. Test with WCAG 2.1 AA as the minimum standard. Common higher ed accessibility findings: PDF content is inaccessible, video content lacks captions, and mobile interfaces do not work with screen readers.
What is the most common higher ed UX research finding?
The grading workflow is universally painful. Faculty spend 5-15 minutes per student submission on grading and feedback in most LMS platforms, and much of that time is interface friction (navigation, switching between rubric and submission, saving feedback, moving to the next student) rather than actual assessment work. Improving the grading workflow has the highest ROI for faculty adoption and satisfaction.