Case study

Guided Scheduling: Finding Care When You Don't Know Where to Start

Most patients know they're in pain. Few know which specialist treats it. This study tested a guided, filter-based provider search prototype designed to help patients navigate to the right care without needing to know the clinical terminology first. The core question was whether a structured, step-by-step filtering experience could meet patients where their mental model actually starts: at the symptom, not the specialty.

Participants: 100
Study type: Concept validation + usability testing
Methods: Prototype task walkthrough via Maze · Think-aloud · Post-task survey · Open-text feedback
Decision area: Guided search design, taxonomy and terminology, multi-concern selection, and filtering flexibility
Questions explored: Do patients understand and trust the guided steps? Can they distinguish between surgical and non-surgical providers? Do they want to select multiple concerns at once? Where does the taxonomy break down?

tl;dr

  • 90% of participants wanted the ability to select multiple concerns simultaneously. Health problems rarely appear in isolation, and patients expect the system to reflect that.
  • Patients think in symptoms, not specialties. If "ankle" isn't in the list, they want to type it. Rigid taxonomy creates dead ends before the search even begins.
  • The guided flow concept was validated. Patients responded positively to being helped through provider selection rather than left to navigate it alone.
  • Terminology and category labels were a consistent friction point. Clinical language that providers use internally doesn't map to how patients describe their own conditions.
  • Free-text input was strongly and repeatedly requested. Patients want a fallback when structured filters don't match their situation.

What We Learned

Patients Think in Symptoms. The Taxonomy Was Built for Clinicians.

The filtering system organized provider options by specialty and clinical category. That structure made sense from an operational standpoint, but it created friction from the patient's. Participants consistently described wanting to search the way they naturally think about their own bodies: by symptom, body part, or plain-language description of what was wrong, not by specialty name.

Thirty participants explicitly asked for more specialty options in the list. Twenty-two asked why they couldn't simply type "ankle" and surface relevant providers. Twenty more asked for a free-text input option as a fallback. These weren't isolated complaints. They were variations of the same underlying insight: the taxonomy assumed a level of clinical fluency that most patients don't have and shouldn't need to have.

Patient-language requests: three patterns of taxonomy friction
  • 30 asked for more specialty options
  • 22 wanted to type "ankle" and search
  • 20 requested a free-text fallback
Terminology and category structure were consistent friction points across both study groups. The same underlying insight surfaced in three forms.

Multi-concern preference: patients want to select more than one thing at a time
  • 45 of 50 in the GYN cohort
  • 90% combined
45 of 50 GYN participants and the majority of Ortho participants wanted multi-concern selection. Health problems rarely arrive in isolation.

Multi-Concern Selection Isn't a Nice-to-Have. Patients Expect It.

Forty-five of fifty participants in the GYN study said they wanted the ability to select multiple concerns when searching for a provider. The pattern was equally strong in the Ortho group. This wasn't a preference. It was a reflection of how patients actually experience their health. Symptoms overlap. Conditions co-exist. Patients arrive at appointments with more than one thing to address, and they want the system they use to find care to reflect that reality.

Participants described the single-selection model as creating extra work and reducing trust in the results. If they could only select one concern, they weren't sure the provider they found would be equipped to address everything they needed. Several asked whether the information they entered during search would be visible to the provider before their appointment, suggesting that the guided flow was already being understood as more than a search tool. It was being seen as a communication channel.

Three Patterns That Surfaced Across the Study

The Guided Concept Was Validated, With Caveats

Patients responded positively to the idea of being guided through provider selection rather than left to navigate a list on their own. The concept of structured filtering, especially for patients who don't know what specialty they need, was seen as helpful and appropriate. The problems weren't with the concept. They were with the execution: labels that didn't match patient language, categories that were too narrow, and a lack of flexibility for patients whose situations didn't fit neatly into the options provided.

Visual Support Could Reduce Cognitive Load

A small but notable group of participants asked for images or icons alongside category labels to help them navigate the filters. Body maps, visual representations of anatomy that patients could interact with to identify their area of concern, were suggested unprompted. This points toward a potential enhancement that could reduce the reliance on clinical terminology entirely by letting patients point to what hurts rather than name it.

Navigation and Flow Control Mattered More Than Expected

Twenty-four participants noted navigation issues, most commonly the absence of a clear back button or the inability to revise their selections without starting over. In a guided flow built around sequential decisions, the ability to course-correct matters. Patients who felt locked into a path they'd started down became frustrated quickly. Flexibility in navigation isn't just a usability improvement. It's a trust signal.

Outcome of the Research

The study validated the core concept of guided scheduling while identifying the specific design and taxonomy decisions that needed to change before the experience would work in production. Patients want to be helped to find care. They don't want to translate their symptoms into clinical language before they can start. The research gave the product team clear, participant-backed evidence for four specific changes: supporting multi-concern selection, revisiting the taxonomy with plain-language patient terms, adding a free-text input option, and exploring visual support like icons or body maps.

The study also surfaced a more expansive opportunity. Patients were already beginning to treat the guided flow as a communication channel, a way to prepare their provider for the conversation ahead. That mental model, if deliberately supported in the design, could make the experience meaningfully more valuable than a simple search tool.

Implications
  • Build the taxonomy around patient language, not clinical structure. If patients can't recognize their condition in the list, the guided experience fails before it starts.
  • Multi-concern selection is validated and expected. Design for it as a default, not an edge case.
  • Consider the guided flow as a pre-visit communication tool, not just a search filter. Patients are already thinking that way.
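The first two implications can be sketched as a simple lookup: a patient-language synonym map whose results are unioned across every selected concern, with unmatched terms routed to free-text search instead of a dead end. All term and specialty names below are illustrative assumptions for the sketch, not data from the study.

```python
# Hypothetical sketch of a patient-language taxonomy with multi-concern
# selection and a free-text fallback. Terms and specialties are invented
# examples, not the study's actual taxonomy.

PATIENT_TERMS = {
    "ankle": {"orthopedics", "podiatry", "sports medicine"},
    "knee": {"orthopedics", "sports medicine"},
    "pelvic pain": {"gynecology"},
    "irregular periods": {"gynecology"},
}

def find_specialties(concerns):
    """Union specialties across all selected concerns, so patients
    are never forced to pick just one. Terms the taxonomy doesn't
    recognize are returned for free-text search rather than dropped."""
    matched, unmatched = set(), []
    for concern in concerns:
        key = concern.strip().lower()
        if key in PATIENT_TERMS:
            matched |= PATIENT_TERMS[key]
        else:
            unmatched.append(concern)
    return matched, unmatched

specialties, fallback = find_specialties(["ankle", "pelvic pain", "dizzy spells"])
# "ankle" and "pelvic pain" resolve to specialties; "dizzy spells"
# falls through to free text instead of blocking the search.
```

The design point is the union plus the fallback: multiple concerns widen the result set rather than forcing a single path, and an unrecognized term degrades gracefully to free text instead of ending the flow.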

The strongest signal from this study wasn't any single finding. It was the consistency of the underlying pattern. Patients across both study groups, across age ranges, across different care needs, kept arriving at the same frustration: the system was asking them to meet it on its terms rather than the other way around. The research made that visible in a way that's hard to argue with, and gave the team a concrete foundation for building something more patient-centered.

Contact

Want to talk through the guided scheduling study?

The taxonomy gap, the multi-concern finding, or the pre-visit communication framing: happy to get into any of it.