Federal Health Agency

Reducing Clinical Risk by Modeling Information-Seeking Behaviors

Context & Problem

Clinicians at a 300,000-person federal health agency rely on point-of-care (PoC) Clinical Information Resource (CIR) tools to make time-sensitive decisions. Organizational leadership teams lacked a clear, evidence-based understanding of:

  • How clinicians seek information under real-world constraints

  • Where breakdowns occur across people, tools, tasks, and environments

  • Which assumptions about “ease of use” or “clinical efficiency” were untested or false

Without this clarity, decisions about usability testing, vendor evaluation, and procurement risked optimizing for the wrong problems.

My Role

Senior UX Researcher: I led the research using a Systems Engineering Initiative for Patient Safety (SEIPS) framework to gather information and synthesize findings.

My research informed subsequent testing and evaluation phases.

Research Strategy & Rationale

Rather than jumping directly into usability testing, I advocated for a systems-level discovery phase to surface safety and performance risks that are often missed with traditional interface-level product testing. This approach allowed us to:

  • Expose hidden assumptions about clinical workflows

  • Quantify risk before making product or procurement decisions

  • Clarify what should be tested later, and what should not be tested at all

By modeling the entire clinical work system (people, tools, tasks, environments), this research ensured later testing focused on true sources of clinical risk, not surface-level interface issues.

Methods

I used the SEIPS 101 model to conduct a human factors analysis of the people, environment, tools, and tasks involved in performing clinical work at the agency.

I analyzed each SEIPS factor using a combination of the following methods and tools:

  • Discovery research: qualitative review of journal articles, with a focus on how PoC CIRs are designed, organized, used, and evaluated

  • CIR overviews: general overviews (e.g., vendor descriptions, independent review publications, and video demos) and live walkthroughs of four clinical information resource tools

  • Analytics review: quantitative review of usage data for the existing PoC CIR tool

  • Working sessions: remote brainstorming sessions using Mural, conducted with cross-functional teams and subject matter experts (including licensed physicians) to develop user personas and conduct a task analysis

  • SEIPS 101 templates:

    • People, Environment, Tools, and Tasks (PETT) scan: a review of the barriers that hinder and the facilitators that support people in the work system

    • Task matrix: a description of key tasks needed to find information in a CIR tool

Tradeoffs & Constraints

Time and resource constraints ruled out in-person site visits and direct observation during data collection. We also had limited access to the digital tools clinicians used.

To mitigate these constraints, I relied on analytics data to form initial assumptions. I then validated those assumptions through informal interviews with subject matter experts on our team and through peer-reviewed literature on point-of-care decision-making. Observation took place through remote working sessions and screen shares.

Together, this evidence allowed us to model realistic workflows for each part of the SEIPS framework (work system, work process, and work outcomes).

Key Insights

This analysis revealed that clinical risk often stemmed from system interactions, not from interface usability alone. Sources of risk included:

  • Mismatches between task urgency and information architecture

  • Cognitive load introduced by tool switching and fragmented workflows

  • Environmental and time pressure factors that distorted “successful” task completion

These findings challenged the assumption that faster search equated to safer clinical decision-making.

Impact

This work directly informed:

  • The design of clinical scenarios and task scripts used in usability testing

  • Which workflows were prioritized for evaluation

  • How success metrics were framed (beyond task completion)

These insights changed how stakeholders evaluated success by reframing the problem from “Which tool is easier to use?” to “Where does the system fail clinicians under real-world constraints?”

Strategic Outcome

Leadership teams entered subsequent testing phases with a clearer understanding of where risk lived in the system, not just which screens performed better.

It also reinforced that cognitive load, task switching, and environmental context shape clinical outcomes as much as raw search speed.

Without a system-level human factors analysis, the agency risked selecting or optimizing a tool that performed well in isolation but failed under real clinical conditions.

Next

Informing High-Stakes Procurement Through Comparative Usability Evidence