

Finding Clarity in a Complex Healthcare Workflow
MedLaunch asked our research consulting team to evaluate the usability of its Quality Management page, a complex healthcare workflow tied to compliance-heavy tasks. The story began with a simple question: could hospital quality staff clearly understand the Quality Management workflow?
Prologue: A Simple Question With a Complicated Answer
What began as a usability study became a deeper investigation into how hospital quality staff make sense of a complex workflow.
MedLaunch’s Quality Management page sits inside a larger hospital compliance workflow, where survey findings and rounding notes become tracked items, corrective action plans, follow-up tasks, and recurrence checks.
Before we could test the experience, our team had to understand the workflow behind it, the users responsible for managing it, and the product constraints that would shape what we could reliably study.
Research Question
Could hospital quality staff make sense of MedLaunch's Quality Management workflow?
Initial Objective
Evaluate the usability of MedLaunch's Quality Management page and identify opportunities to make the workflow clearer and easier to use.
Chapter 1: Aligning on What Needed to Be Uncovered
Meeting with MedLaunch to understand the users, workflow, and product goals behind the Quality Management page.
The project kicked off with a facilitated meeting where our team presented initial assumptions, secondary research, and clarifying questions to MedLaunch. They walked us through the Quality Management page, a compliance workflow for documenting findings, managing corrective actions, and tracking issues through resolution.
Since the healthcare compliance space was new to us, the priority wasn't testing yet. It was clarifying our assumptions, resolving confusion, and aligning on what MedLaunch actually needed from the study. That conversation transformed a broad product space into a focused research direction.
Chapter 2: Reading the Interface Before Testing It
Reviewing Figma mockups and written documentation to understand what the Quality Management page was asking users to do.
MedLaunch provided Figma mockups and written documentation but no walkthrough or platform access, so the first step was mapping the workflow from static materials.
Before writing a single task, we needed to understand how findings connected to tickets, where forms fed into tasks, and how recurrence tracking and Effectiveness Monitoring fit together. We had to find the seams before we could test them.

Reviewing the static materials helped us identify where the workflow concentrated complexity before we began writing tasks.
Chapter 3: The Version We Prepared For Wasn’t the One Users Would See
Discovering that the Figma mockups and testing environment contained different interface details.
Two weeks in, MedLaunch gave us access to a testing account. Once inside, we discovered the Quality Management page was still under development. The version we had studied didn't fully match what users would actually test.
To close the gap, we connected with MedLaunch's UX team. That conversation reframed the study from a broad interface exploration into a focused formative evaluation of the workflows users could meaningfully engage with.
Chapter 4: Choosing Control Over Realism
Deciding how to test the workflow while protecting sensitive healthcare data.
Once we understood the testing environment, our team had another decision to make: whether participants should use their own MedLaunch accounts or a controlled testing account. Real accounts would have reflected more authentic workflows, but they also introduced sensitive healthcare data and limited what we could record or compare across sessions.
We chose the testing account because it gave every participant the same starting point and helped us protect privacy while keeping the study conditions consistent. The tradeoff was that the account was not fully functional, so we had to be careful about which tasks we included and how we interpreted the results.
Chapter 5: Reframing the Study Around What We Could Trust
Turning product ambiguity into focused tasks for formative usability testing.
Once we understood that the product was still evolving, the next challenge was deciding what kind of study would produce reliable insights. A broad exploratory approach was no longer defensible because not every part of the interface was stable or fully testable.
We reframed the study around the workflows we could observe meaningfully: whether users could navigate, understand, and complete core actions on the Quality Management page. This helped us move from general reactions to more useful evidence about discoverability, comprehension, and task completion.
Chapter 6: Testing for Clarity in Core Workflows
Running moderated sessions to see whether users could navigate, understand, and complete key Quality Management tasks.
With the study reframed, we moved from planning into observation. Across eight moderated sessions, participants worked through realistic Quality Management scenarios while thinking aloud. The goal was not simply to see whether they could finish a task, but to understand where the workflow lost clarity.
We observed how participants handled the core Quality Management workflow, from creating entries to finding Effectiveness Monitoring, managing linked tickets, and reporting events. Beyond task completion, we looked for the specific friction behind each outcome: hidden actions, unclear labels, confusing merge direction, repeated fields, and moments where users needed moderator guidance.
Chapter 7: Making Sense of the Patterns
Synthesizing session notes, task outcomes, and participant quotes into finding themes.
After testing, the work shifted from watching individual sessions to finding the patterns beneath them. Our team reviewed recordings, notes, task outcomes, ease ratings, and participant quotes to understand not just where users struggled, but why the same moments kept repeating.
We grouped observations by task, behavior, and interface area, then traced recurring friction around discoverability, terminology, hierarchy, and feedback. Our synthesis helped turn scattered moments of confusion into clear finding themes and design recommendations.
Chapter 8: Where the Path Became Hard to Follow
Three recurring breakdowns showed where the Quality Management workflow stopped matching what users expected.
After synthesis, the findings began to read like repeated moments in the same story. Users were trying to follow the Quality Management workflow from issue to action to resolution, but clarity kept disappearing at key decision points.
They lost the thread when follow-up actions were hidden, when linked ticket relationships did not explain which record would become primary, and when the Event Reporting form asked for information without enough flexibility or distinction between fields. These were not isolated mistakes. They were patterns that showed where the workflow needed clearer paths, clearer language, and stronger feedback.
Chapter 9: Rebuilding the Path Users Expected
Turning repeated usability breakdowns into clearer actions, labels, and workflow structure.
Our recommendations focused on the moments where users’ expectations and the interface stopped lining up. Instead of proposing a full redesign, we targeted the specific points where repeated behavior showed the workflow needed more clarity.
We recommended making critical follow-up actions easier to find, clarifying how linked records should be merged or related, and adjusting form fields so users could document real-world uncertainty without guessing. Each recommendation was tied back to observed user behavior, so the solutions followed the paths users were already trying to take.
Chapter 10: Handing Over the Map
Delivering a usability report, Figma recommendations, and client presentation to support MedLaunch’s next product decisions.
After turning the findings into recommendation themes, our team packaged the study into a usability report and client presentation. The report documented the study process, participant profiles, task outcomes, limitations, findings, and Figma mockups so MedLaunch could understand not only what users struggled with, but why those moments mattered.
The presentation gave us a chance to walk through the story of the study, answer clarifying questions, and connect each recommendation back to observed user behavior. Rather than handing over a list of fixes, we delivered a research-backed path for making the Quality Management workflow easier to learn, faster to use, and more supportive of users’ everyday work.

