Case study

Why Patients Call

AdventHealth’s telephony support team was seeing sustained call volume tied to patients struggling to access the AdventHealth app. I went directly into the Five9 call database, reviewed transcripts, and rebuilt the categorization system from scratch because the existing agent-applied tags were too inconsistent and too broad to surface what was actually happening.

Lifecycle stages: 5
Study type: Operational analysis + systems synthesis
Methods: Five9 call-summary review, transcript spot checks, taxonomy rebuild, and access lifecycle mapping
Decision area: Why support demand remained high and which access failures were actually driving those calls
Questions explored: What patients were really calling about, where identity and security failures were surfacing, and how those failures compounded until only human support could resolve them

tl;dr

  • “Can’t log in” turned out to be a symptom, not a root cause.
  • Re-categorizing the call database was necessary because the original call tagging system was too inconsistent to be actionable.
  • Access failures compounded over time: retries, resets, and new code requests often made the situation worse.
  • Security requirements were context-blind and created unnecessary friction for low-stakes tasks.
  • Support staff had become the system’s interpreters and repair crew because the product could not explain or resolve its own failures.

What We Learned

The Data Had a Tagging Problem Before It Had an Insight Problem

When I entered the Five9 database, the call records were tagged by customer service agents using a category system that was inconsistent, overlapping, and too broad to analyze meaningfully. Many calls that were fundamentally about the same issue were filed under different labels. Before any pattern analysis was possible, I re-coded the call records against the actual transcripts to create a consistent, accurate taxonomy.

That step was not part of the original ask, but without it the analysis would have produced noise, not findings. The first real decision in the work was to stop trusting the raw categories and start over.
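The re-coding step can be sketched as a small rule-based pass that trusts transcript evidence over the agent-applied tag. This is an illustrative sketch only; the tag names, phrases, and category labels here are hypothetical stand-ins, not the actual Five9 labels or the rules used in the study.

```python
# Hypothetical sketch: collapsing inconsistent agent tags into a
# normalized access-failure taxonomy. All labels are illustrative.

NORMALIZED = {
    "login issue": "access_blocked_cause_unclear",
    "password reset": "access_blocked_recovery_failed",
    "forgot password": "access_blocked_recovery_failed",
    "mfa": "access_blocked_verification",
    "verification code": "access_blocked_verification",
    "locked out": "access_blocked_security_lockout",
}

def recode(agent_tag: str, transcript: str) -> str:
    """Prefer transcript evidence; fall back to the agent tag last."""
    text = transcript.lower()
    for phrase, category in NORMALIZED.items():
        if phrase in text:
            return category
    # Only when the transcript is silent do we trust the agent's tag.
    return NORMALIZED.get(agent_tag.lower(), "access_blocked_cause_unclear")
```

The design choice matters: mapping from transcripts first is what lets differently-tagged calls about the same failure land in one category.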

Access outcomes
What the call data appeared to show

Normalized access outcomes made it easier to see that verification, recovery, and vague “can’t log in” complaints were all part of the same larger pattern.

  • Access blocked — cause unclear: 44.6%
  • Access blocked — recovery failed: 24.4%
  • Access blocked — verification / MFA: 21.4%
  • Access blocked — account not ready: 5.3%
  • Access blocked — security lockout: 4.3%

Roughly one in five calls explicitly referenced verification or MFA failures. The largest single category, “cause unclear,” masked a substantial portion of access failures that transcript spot-checking showed shared the same root causes.
Failure lifecycle
The failure had a predictable progression
1. Trigger: Verification, MFA, password reset, or a risk-based check begins.

2. Opaque failure: Codes do not arrive, contact methods are wrong, or records do not line up — without a clear explanation.

3. Patient pain: The experience collapses into “I can’t log in” or “my account is not working.”

4. Compounding failure: Patients retry, reset, and request more codes, often causing lockouts or tougher security checks.

5. Human repair: By this stage only an agent can interpret system state and restore access.

Each stage of the access failure lifecycle is a byproduct of the one before it. By the time a patient calls support, the original issue has usually been compounded by their own recovery attempts.
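The compounding dynamic above can be shown as a tiny simulation: when the real cause (a code that never arrives) is never surfaced, the patient's natural retries are what push them into lockout. The attempt limit, return values, and retry behavior here are illustrative assumptions, not AdventHealth's actual security rules.

```python
# Hypothetical sketch of the compounding-failure dynamic.
# MAX_ATTEMPTS and the state strings are assumptions for illustration.

MAX_ATTEMPTS = 3

def login_attempt(attempts_so_far: int, code_delivered: bool) -> str:
    if attempts_so_far >= MAX_ATTEMPTS:
        # Stage 5: only a human agent can now interpret and reset state.
        return "locked_out"
    if not code_delivered:
        # Stage 2: the real cause (delivery failure) is never surfaced,
        # so the patient sees only a generic error and retries.
        return "generic_error"
    return "success"

# A patient whose MFA codes never arrive burns through every attempt
# and ends in a lockout the system cannot explain.
state = None
for attempt in range(4):
    state = login_attempt(attempt, code_delivered=False)
```

The point of the sketch is that nothing the patient can do from inside the flow ever surfaces or fixes the delivery failure; every path terminates at human support.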

There Was One Problem, Not Many

Once the data was accurately categorized, the pattern was clear. Calls were being driven by patients hitting the same set of hidden dependencies at different points in their access journey, and receiving no useful feedback from the system when those dependencies failed.

The five most common hidden failure drivers were inaccessible email accounts, unreachable phone numbers or wrong SMS delivery targets, outdated or incorrect contact information, incomplete patient record linkage, and security cooldown timers triggered by repeated failed attempts. When any one of these broke down, the system responded with a generic error. Patients had no way to understand what was actually wrong or what would actually fix it.

That meant the apparent variety in the call reasons was misleading. These were different symptoms of the same access-and-identity system failing patients at different points in the journey.

Security Requirements Didn’t Match the Sensitivity of What Patients Needed

The same authentication requirements applied regardless of what a patient was actually trying to do. A patient checking an appointment reminder faced the same security barriers as a patient accessing detailed clinical records.

This mismatch between security rigor and data sensitivity created unnecessary friction for low-stakes interactions and increased the probability of failure for patients who were already less digitally confident.

Support Had Become the System’s Repair Crew

By the time a call reached a human agent, the situation was almost always beyond self-service recovery — retry limits exhausted, identity ambiguity introduced, and automated systems unable to proceed safely.

Agents were not helping because they had better product tools. They were helping because the product had no other way to explain system state, repair broken dependencies, or safely resolve identity ambiguity.

Outcome of the Research

Recommendations
  • Evaluate where multiple security “doors” stack in a single journey without meaningful added risk reduction.
  • Explore a tiered authentication model that matches friction to the sensitivity of the data being accessed.
  • Make access state visible so patients understand why they are blocked and what resolves it.
  • Reduce retries and resets that compound failure when the real issue is elsewhere.
  • Align Product, Design, and Security on one shared access failure lifecycle.
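The tiered-authentication recommendation can be sketched as a simple policy lookup that matches required factors to data sensitivity. The resource names, tier labels, and factor lists below are hypothetical assumptions for illustration, not a proposed spec.

```python
# Hypothetical sketch of a tiered authentication policy.
# Resources, tiers, and factors are illustrative assumptions.

SENSITIVITY_TIERS = {
    "appointment_reminder": "low",
    "billing_summary": "medium",
    "clinical_records": "high",
}

AUTH_REQUIREMENTS = {
    "low": ["password"],
    "medium": ["password", "otp"],
    "high": ["password", "otp", "identity_verification"],
}

def required_factors(resource: str) -> list[str]:
    # Unknown resources default to the strictest tier (fail closed).
    tier = SENSITIVITY_TIERS.get(resource, "high")
    return AUTH_REQUIREMENTS[tier]
```

Under a policy like this, a patient checking an appointment reminder clears one door, while full clinical records still get the full stack of checks.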

The dependency-level recommendations from this analysis fed into a follow-on prototype study, which tested whether a UI intervention could reduce call volume and found that the real issue was upstream of the interface.

Continue Exploring

Contact

Want to talk through the Five9 work?

The taxonomy rebuild, the access-failure lifecycle, or where Product, Design, and Security could intervene at each stage: happy to get into any of it.