Learning management system user research: a methods comparison for product and UX teams

How to conduct user research for learning management systems. Includes a self-contained methods comparison for LMS research, covering course authoring, gradebook, assignment workflows, discussion UX, mobile parity, LTI integration, and accessibility testing.

Learning management systems are used by 98% of US higher education institutions and an estimated 73% of K-12 districts. Canvas, Blackboard, Moodle, Google Classroom, Schoology, and D2L collectively serve hundreds of millions of users. Yet LMS usability consistently ranks among the lowest-rated of any B2B software category. Faculty report spending 30-50% more time than necessary on routine tasks like grading and course setup because of interface friction, and students report that LMS navigation confusion directly affects their academic performance.

The problem is not a lack of features. Modern LMS platforms have thousands of features. The problem is that researching LMS usability requires methods adapted for a product that serves fundamentally different user types (instructors, students, administrators, instructional designers) performing fundamentally different tasks (teaching, learning, managing, designing) within fundamentally different constraints (class periods, semesters, academic calendars, institutional policies).

This guide provides the self-contained methods comparison that LMS product teams need: which research method to use for which LMS component, with which users, at which point in the academic cycle.

For higher education research broadly (faculty vs. student research, institutional procurement, FERPA), see our higher education guide. For K-12 edtech (COPPA, teacher gatekeepers, classroom observation), see our K-12 guide.

Key takeaways

  • LMS research requires component-by-component testing because the course authoring experience, gradebook experience, student submission experience, and discussion experience are effectively separate products sharing a navigation shell
  • The methods comparison table below maps 8 research methods to 7 LMS components so teams can build targeted research plans rather than testing “the LMS” generically
  • Mobile parity testing is mandatory. Students access the LMS primarily on mobile, but most LMS features were designed for desktop. The gap between mobile and desktop experience is where the most critical usability failures live
  • LMS research must happen during active academic use (not summer, not between semesters) because the experience changes dramatically under the pressure of real courses with real deadlines
  • Instructor and student research must be separate tracks. The same LMS screen (an assignment page) is a creation tool for instructors and a submission tool for students. Testing both perspectives in the same session produces conflicting data

Self-contained methods comparison for LMS research

| Method | Course authoring | Gradebook | Assignment workflow | Discussion/collab | Navigation/IA | Mobile experience | Accessibility |
|---|---|---|---|---|---|---|---|
| Usability testing (instructor) | Best method. Test course creation, content upload, module organization | Best method. Test grading 25+ submissions, rubric application, grade export | Test assignment creation, rubric building, settings configuration | Test discussion setup, moderation, participation grading | Test finding specific features across a full course | Test grading on tablet/phone | Test with screen reader for instructor workflows |
| Usability testing (student) | N/A (students do not author courses) | Test grade viewing, feedback comprehension, GPA calculation | Best method. Test submission flow, file upload, late submission handling | Test posting, replying, threading, finding unread | Best method. Test finding assignments, grades, content across courses | Best method. Test all student tasks on phone | Test with assistive technology for student workflows |
| Contextual inquiry (classroom) | Observe instructor preparing before class | Observe real-time grade entry during class | Observe students submitting during class | Observe discussion participation during/after class | Observe real navigation patterns in context | Observe mobile usage during lectures, between classes | Observe assistive technology use in real academic context |
| Interviews | Explore pedagogical fit, customization needs, migration pain | Explore grading philosophy, reporting needs, export frustrations | Explore assignment design approach, rubric preferences | Explore discussion pedagogy, engagement strategies | Explore mental model of course organization | Explore mobile usage patterns and expectations | Explore accommodation workflows, institutional requirements |
| Diary studies (semester-long) | Track course building over weeks (content adds, reorganization) | Best method for grading patterns. Track grading across a semester | Track assignment lifecycle from creation to grading to return | Track discussion engagement patterns over the semester | Track navigation frustrations that accumulate over time | Track when and where mobile is used throughout the day | Track accessibility barriers encountered over time |
| Surveys (end of semester) | Satisfaction with authoring tools, feature requests | Satisfaction with gradebook, time spent grading | Satisfaction with submission process, clarity of deadlines | Satisfaction with discussion tools, participation quality | Satisfaction with navigation, findability | Mobile satisfaction, feature parity perception | Accessibility satisfaction, unmet accommodation needs |
| Card sorting | Test course content organization models | Test gradebook column organization and filtering | Test assignment categorization and display | N/A | Best method. Test overall LMS information architecture | Test mobile navigation structure | N/A |
| Analytics review | Track which authoring features are used vs. ignored | Track grading time per submission, bulk vs. individual patterns | Track submission timing (how close to deadline), error rates | Track discussion post frequency, reply depth, engagement decay | Track page views, navigation paths, search queries | Track mobile vs. desktop usage ratios per feature | Track assistive technology usage patterns |

How to read this table

Each cell indicates how well that research method addresses that LMS component. “Best method” means this is the primary method for this component. Blank or descriptive cells indicate the method works but is supplementary. Build your research plan by reading down each column: for any LMS component you want to research, the column shows which methods to prioritize and what to focus on.

How to test LMS course authoring

The course creation marathon test

Course authoring is the instructor workflow with the highest time investment and the steepest learning curve. Test with realistic complexity.

Protocol: Ask an instructor to create a course from scratch that mirrors a real course they teach.

| Task | What it tests | Target |
|---|---|---|
| “Create the course shell and set the semester dates” | Initial setup, settings comprehension | <5 minutes |
| “Create 3 modules for the first 3 weeks of content” | Module structure, organization model | <15 minutes |
| “Upload a syllabus PDF, 2 lecture slides, and a video link” | Content upload, file handling, external link integration | <10 minutes |
| “Create a quiz with 5 questions (multiple choice, short answer, matching)” | Assessment builder, question type variety | <15 minutes |
| “Set up the gradebook with weighted categories” | Gradebook configuration, weighting logic | <10 minutes |
| “Publish the course so students can access it” | Publication workflow, visibility settings | <2 minutes |

Total target: A competent instructor should be able to create a basic course shell in under 60 minutes. If it takes longer, the authoring tools have friction that will deter adoption.

Content migration testing

The most painful LMS experience: migrating a course from one LMS to another (or from one semester to the next). Test:

  • “Copy your Fall course to create your Spring section. What transferred correctly? What did you have to redo?”
  • “Import content from [previous LMS]. What worked? What broke?”

Migration friction is the #1 barrier to LMS switching. Research that identifies and reduces migration pain has direct commercial value for competitive displacement.

How to test the LMS gradebook

The “grading 25” protocol

Never test grading with 3-5 submissions. Test with 25-30 to capture the repetitive-action fatigue, the interface friction that compounds over dozens of interactions, and the workarounds instructors develop to speed up the process.

Protocol:

  1. Load 25 student submissions (varying quality, some incomplete, some late)
  2. Provide a rubric with 4-5 criteria
  3. Ask the instructor to grade all 25 with feedback on each
  4. Measure: time per submission, time trend (does it get faster or slower?), feedback quality trend (does feedback get shorter as fatigue increases?), errors (wrong grade entered, wrong student, rubric misapplied)

What this reveals: The per-submission grading time at submission #25 is the real grading time, not the per-submission time at submission #3. Most LMS gradebook usability problems only emerge at scale.
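The fatigue signal in step 4 can be quantified directly from session logs. A minimal sketch, assuming a hypothetical per-submission log with `submission_index`, `seconds_spent`, and `feedback_chars` fields (all names here are illustrative, not an LMS API):

```python
# Sketch: detecting grading fatigue from a "grading 25" session.
# Assumes a hypothetical log: one record per graded submission.

def fit_slope(xs, ys):
    """Least-squares slope of ys vs. xs (stdlib only)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

def fatigue_report(log):
    idx = [r["submission_index"] for r in log]
    secs = [r["seconds_spent"] for r in log]
    chars = [r["feedback_chars"] for r in log]
    return {
        # Positive slope: each submission takes longer (friction compounds).
        "seconds_per_submission_trend": fit_slope(idx, secs),
        # Negative slope: feedback shrinks as fatigue sets in.
        "feedback_length_trend": fit_slope(idx, chars),
        # The "real" grading time lives in the tail, not the first few.
        "avg_seconds_last_five": sum(secs[-5:]) / 5,
    }
```

A rising time trend plus a falling feedback-length trend is the signature of interface friction compounding over repetitions, which per-submission averages alone will hide.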

Grade export testing

“Export your final grades in a format your registrar accepts.” This single task reveals integration quality, file format compatibility, and the instructor’s confidence that the exported data is correct. If the instructor opens the export in Excel to verify before submitting to the registrar, the gradebook has a trust problem.

How to test the student submission experience

The deadline pressure test

Students submit assignments close to deadlines. Test the submission experience under time pressure:

“Your assignment is due in 5 minutes. Submit this file and confirm it went through.”

Observe:

  • Can the student find the assignment quickly? (Navigation under pressure)
  • Is the file upload fast and reliable? (Technical performance)
  • Is the submission confirmation clear? (“Did it actually submit?”)
  • What happens if the upload fails at 11:58pm? (Error handling under deadline)
  • Can the student see the submission timestamp? (Proof of on-time submission)
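The deadline-crunch behavior this test simulates also shows up in analytics (per the analytics-review row in the methods table). A minimal sketch, assuming a hypothetical log of submission timestamps per assignment:

```python
from datetime import datetime

def minutes_before_deadline(submitted_at, due_at):
    """Positive = early, negative = late."""
    return (due_at - submitted_at).total_seconds() / 60

def deadline_pressure_profile(submissions, due_at):
    """Fraction of submissions landing in the final hour, and the late rate.
    `submissions` is a list of datetimes (hypothetical log format)."""
    margins = [minutes_before_deadline(s, due_at) for s in submissions]
    n = len(margins)
    return {
        "final_hour_rate": sum(0 <= m <= 60 for m in margins) / n,
        "late_rate": sum(m < 0 for m in margins) / n,
    }
```

A high final-hour rate justifies prioritizing the error-handling and confirmation findings from the pressure test: most users will hit those paths exactly when failure costs the most.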

Multi-format submission testing

Test with the file types students actually use:

  • Word document (.docx)
  • PDF
  • Google Docs link
  • Photo of handwritten work (phone camera upload)
  • Video recording
  • Code file

Each format has different upload, preview, and instructor-review implications. Test the full cycle: student uploads, instructor views, instructor grades.

How to test LMS discussion UX

The engagement decay problem

LMS discussion participation follows a predictable pattern: high in week 1, declining by week 4, minimal by week 10. Research must investigate whether this decay is pedagogical (students lose interest in the topic) or UX-driven (the discussion interface makes participation tedious).

Research approach:

  • Diary study: track discussion engagement over a full semester. Ask weekly: “Did you post in discussions this week? If not, why?”
  • Usability test: “Post a response and reply to two classmates.” Measure time and effort. Is the effort proportional to the value?
  • Analytics: track post length, reply depth, and time spent on discussion pages over the semester

UX-driven decay indicators:

  • Posts get shorter over the semester (writing fatigue from the interface, not the topic)
  • Students stop reading peer posts (threading or navigation makes finding new content difficult)
  • Students copy-paste the minimum required rather than engaging (the interface rewards completion over quality)
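The first indicator (post length shrinking over the semester) is easy to quantify from discussion analytics. A minimal sketch, assuming a hypothetical export with a semester `week` and character count per post:

```python
from statistics import median

def weekly_post_lengths(posts):
    """Median post length by semester week.
    `posts` is a list of {"week": int, "chars": int} (illustrative schema)."""
    weeks = {}
    for p in posts:
        weeks.setdefault(p["week"], []).append(p["chars"])
    return {w: median(v) for w, v in sorted(weeks.items())}

def decay_ratio(posts, early=(1, 4), late=(9, 12)):
    """Median late-semester post length as a fraction of early-semester.
    A ratio well below 1.0, with comparable topics, points toward
    UX-driven decay rather than purely pedagogical decay."""
    by_week = weekly_post_lengths(posts)
    early_vals = [v for w, v in by_week.items() if early[0] <= w <= early[1]]
    late_vals = [v for w, v in by_week.items() if late[0] <= w <= late[1]]
    return median(late_vals) / median(early_vals)
```

Pair the ratio with the diary-study "why didn't you post?" answers: the quantitative trend tells you decay happened, the diary entries tell you whether the interface caused it.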

How to test LMS mobile experience

The mobile parity audit

Map every critical student task and test whether it works on mobile:

| Task | Desktop experience | Mobile experience | Parity? |
|---|---|---|---|
| View upcoming assignments across courses | Dashboard or calendar view | App notification + calendar (if available) | Test both |
| Submit a file assignment | File browser upload | Camera capture or file picker from phone storage | Often broken on mobile |
| Read and respond to a discussion | Threaded view with formatting | Condensed view, often without formatting tools | Test mobile-specific |
| View grades and feedback | Full gradebook view | Simplified grade list, feedback often truncated | Test comprehension |
| Watch embedded video content | In-browser player | In-app player (may require separate app) | Test playback quality |
| Take a quiz | Full quiz interface | Mobile quiz (may have layout issues on small screens) | Test for touch-target errors |
| Message instructor | Portal messaging | Push notification + in-app messaging | Test notification reliability |
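The audit above reduces to a single parity score once each task has been tested on both form factors. A minimal sketch, assuming a hypothetical results dict keyed by task name:

```python
def parity_score(results):
    """`results` maps task name -> {"desktop": bool, "mobile": bool}
    (hypothetical audit format). Returns the fraction of desktop-passing
    tasks that also pass on mobile, plus the list of broken tasks."""
    works_desktop = {t for t, r in results.items() if r["desktop"]}
    works_mobile = {t for t in works_desktop if results[t]["mobile"]}
    broken = sorted(works_desktop - works_mobile)
    return len(works_mobile) / len(works_desktop), broken
```

The broken-task list, not the score itself, is the deliverable: it names exactly which student tasks fail on the device students actually use.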

The “between classes” test

Students use the LMS in 5-10 minute bursts between classes, on the bus, or waiting in line. Test these micro-sessions:

“You have 5 minutes between classes. Check if anything is due today, read the feedback on your last assignment, and reply to one discussion post.”

Can the student accomplish all three in 5 minutes on their phone? If not, which task fails and why?

How to test LMS accessibility

Accessibility testing requirements

Educational institutions must comply with Section 508 (federal), ADA (all public institutions), and WCAG 2.1 AA. LMS accessibility testing is not optional.

Test with real assistive technology users:

  • Screen reader users (JAWS, NVDA, VoiceOver) for navigation, content reading, assignment submission
  • Magnification users for content readability and layout stability at 200-400% zoom
  • Keyboard-only users for all workflows without a mouse
  • Voice input users for form completion and navigation

Common LMS accessibility findings:

  • Uploaded PDF content is almost never accessible (no alt text, no heading structure, no reading order)
  • Quiz interfaces have focus management problems (keyboard users lose their place)
  • Discussion threading is incomprehensible to screen readers
  • Drag-and-drop course organization has no keyboard alternative
  • Mobile apps have different (often worse) accessibility than the web interface

How to recruit LMS research participants

Instructor recruitment

| Channel | Best for |
|---|---|
| Teaching and learning centers | Faculty who engage with pedagogy and technology |
| LMS admin teams | Instructors the admin team identifies as power users or as struggling users |
| Instructional design teams | Instructional designers who build courses for faculty |
| CleverX verified panels | Faculty filtered by LMS platform, institution type, and discipline |
| Your own LMS user base | In-product recruitment for existing instructors |

Incentive: $150-300/hr for faculty. Alternatives: professional development credit, conference registration, teaching technology consultation.

Student recruitment

| Channel | Best for |
|---|---|
| Campus email / LMS announcements | Broad reach across enrolled students |
| Student government | Trusted channel, engaged students |
| Student worker pools (library, IT help desk) | Available, familiar with campus technology |
| Class announcements | High volume through large enrollment courses |

Incentive: $25-75 for 20-30 min sessions. Alternatives: dining credits, bookstore gift cards.

Scheduling

Best windows: Weeks 3-6 and weeks 9-12 of a 16-week semester. Avoid: first 2 weeks (settling in), midterms, finals, and breaks.
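Those windows translate into concrete scheduling dates once you know when the semester starts. A minimal sketch (a convenience calculation, not from the source), taking the Monday of week 1 as input:

```python
from datetime import date, timedelta

def research_windows(semester_start, windows=((3, 6), (9, 12))):
    """Given the Monday of week 1 of a 16-week semester, return
    (start, end) date pairs for the recommended testing windows
    (weeks 3-6 and 9-12 by default)."""
    out = []
    for first_week, last_week in windows:
        start = semester_start + timedelta(weeks=first_week - 1)
        end = semester_start + timedelta(weeks=last_week) - timedelta(days=1)
        out.append((start, end))
    return out
```

Cross-check the computed windows against the institution's academic calendar before booking: midterm weeks and breaks vary by school and can fall inside the nominal week 3-6 and 9-12 ranges.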

LMS-specific usability metrics

| Metric | What it measures | Instructor target | Student target |
|---|---|---|---|
| Course creation time | Time to build a basic course from scratch | <60 minutes for basic shell | N/A |
| Grading time per submission | Time to grade one submission with feedback | <5 minutes for standard assignments | N/A |
| Assignment submission time | Time from "I need to submit" to confirmed submission | N/A | <3 minutes |
| Cross-course navigation | Finding information across multiple courses | N/A | <30 seconds to see "what is due" |
| Mobile task completion rate | Can critical tasks be completed on mobile? | Stretch goal for grading | Mandatory for all student tasks |
| Feature discovery rate | What percentage of available features are used? | >40% within first semester | >60% within first month |
| Discussion engagement rate | Do discussions show sustained participation? | Sustained posting through week 12+ | Posts maintain quality through week 12+ |
| Grade export accuracy | Does exported grade data match the gradebook? | 100% (any discrepancy is unacceptable) | N/A |
| Accessibility compliance | WCAG 2.1 AA conformance across all workflows | All instructor workflows accessible | All student workflows accessible |

Frequently asked questions

Should you test with Canvas users, Blackboard users, or both?

Segment by platform. Canvas users (generally more modern UX expectations, younger institutions) and Blackboard users (often legacy installations, larger institutions) have different baselines and different pain points. If your product competes with both, run separate research tracks. If your product is an LMS, competitive benchmarking (same tasks across platforms) produces the most actionable positioning data.

How do you test LMS integrations (LTI)?

LTI (Learning Tools Interoperability) is the standard for embedding third-party tools in an LMS. Test the integration experience from both sides: the instructor who enables and configures the integration, and the student who uses the integrated tool within the LMS. Common LTI usability failures: the tool opens in a new window (breaking the LMS context), grades do not sync back to the gradebook, and SSO fails, forcing a separate login.

How do you research LMS products that institutions mandate?

Unlike consumer products, students and faculty cannot choose their LMS. This changes the research dynamic: dissatisfied users cannot churn (they must use it), so satisfaction surveys undercount frustration. Focus on efficiency metrics (time on task, error rates) and workaround analysis (what do users do instead of using the LMS feature?) rather than satisfaction scores. The workarounds reveal the product’s real competition: not another LMS, but email, Google Docs, and paper.

What is the most common LMS usability finding?

The gradebook. In every LMS usability study, the gradebook generates the most friction, the most errors, and the most workarounds. Instructors do not trust the gradebook’s calculations, so they export to Excel to verify. Students do not understand how their grade is calculated, so they email the instructor to ask. Both behaviors are product failures that research consistently identifies and that gradebook redesign consistently improves.