Federal Health Agency
From Risk to Readiness: Using a Heuristic Evaluation to Strengthen Online Course Materials
Context & Problem
A federal health agency of roughly 300,000 people developed a course on Clinical Decision Support (CDS) evaluation, delivered via the Blackboard eLearning platform. The course had been piloted with a preliminary learner group, but leadership needed expert feedback on:
Usability of the course interface and navigation
Effectiveness of course content in conveying CDS evaluation concepts (with a strong emphasis on quality improvement, or QI)
Without addressing usability and instructional design issues, the course risked:
Learner confusion about scope (QI versus broader evaluation types)
Lower engagement and completion
Weaker knowledge transfer for stakeholders expected to participate in CDS evaluations
My Role
Senior UX Researcher: I partnered with an evaluation team to develop the research strategy, assess the end-to-end learning experience, synthesize findings, and deliver a set of recommendations to improve course clarity, usability, and instructional effectiveness.
The evaluation team included licensed independent practitioners, clinical informaticists, and usability engineers (with MD and PhD credentials represented) who reviewed both PDFs and the Blackboard course experience.
Research Strategy & Rationale
Instead of waiting for learner feedback, we used a heuristic evaluation to quickly and systematically detect usability and instructional design breakdowns using established criteria from:
Nielsen Norman Group’s 10 Usability Heuristics to assess general interface usability
9 learner-centered instructional design criteria, drawn from established “learning with software” guidelines, to assess educational effectiveness
This approach provided a fast, expert-driven risk assessment of course issues that could undermine comprehension, motivation, and navigation before broader rollout to more clinicians.
Methods
Heuristic Evaluation
Applied all 19 heuristics (10 usability + 9 instructional design) across the four learning modules to assess general usability and learner-centered instructional design
Rated each issue on a 3-level severity scale (mild, moderate, major) to prioritize by impact
Expert evaluators reviewed materials independently, then reconciled observations to reduce individual evaluator bias
Constraints & Tradeoffs
Course improvements were constrained by platform context (Blackboard software and associated layout constraints).
We balanced breadth (full course review across 4 modules) with prioritization (severity scoring and frequency ranking) to keep results actionable.
Key Findings & What We Learned
Scale and severity of issues: we documented 190 total observations across 19 heuristics:
47 mild (25%)
123 moderate (65%)
20 major (10%)
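Severity tallies and frequency rankings like these can be reproduced from a raw observation log. As a minimal sketch (hypothetical; the actual analysis tooling isn’t described in this case study), assuming each observation is logged as a (heuristic, severity) pair:

```python
from collections import Counter

def summarize(observations):
    """Tally observations by severity and rank heuristics by frequency.

    `observations` is a list of (heuristic, severity) pairs, where
    severity is one of "mild", "moderate", "major". Returns the
    severity tally and a list of (heuristic, count, percent-of-total)
    entries sorted from most to least frequent heuristic.
    """
    total = len(observations)
    by_severity = Counter(severity for _, severity in observations)
    by_heuristic = Counter(heuristic for heuristic, _ in observations)
    ranked = [
        (heuristic, count, round(100 * count / total, 2))
        for heuristic, count in by_heuristic.most_common()
    ]
    return by_severity, ranked
```

Run against the full log of 190 observations, a ranking like this is what surfaces Heuristic #2 at 48 observations (25.26% of the total).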
The biggest problem areas: we focused on heuristics with major-severity issues and on heuristics with the highest volume of issues (the top 5 heuristics accounted for 66% of all observations)
Two heuristics stood out as both high severity and high frequency:
Match between system and the real world (Heuristic #2): 48 observations (25.26%)
Context meaningful to domain and learner (Heuristic #17): 20 observations (10.53%)
Themes that mattered most to outcomes: across the findings, the biggest risks to learning and usability were:
Insufficient real-world clinical context (course examples and visuals didn’t consistently map to clinical practice)
Ambiguity in scope and framing (the course framing and title suggested broader CDS evaluation, creating confusion with the QI-focused content)
Visibility of system status gaps (missing time expectations, unclear progress/effort requirements, and unclear knowledge-check pass/fail feedback)
Cognitive load from design and content density
Impact
We delivered a prioritized improvement roadmap: a course-wide, severity-ranked set of findings that targets the most consequential usability and instructional design issues first.
The recommendations specifically addressed the confusion and usability barriers most likely to limit course completion, comprehension, and adoption, supporting the agency’s goal of preparing stakeholders to participate in CDS evaluations from a QI perspective.
Strategic Outcome
Improving course usability and learning effectiveness makes it more likely that clinicians will complete the course and retain CDS evaluation skills, contributing to better clinical decision-making and patient care over time.