Quality Assurance Protocol

Every call scored.
Every issue surfaced.
Every fix documented.

Our AI-powered quality system reviews every patient call — not the industry-standard 2–5%. Here's exactly what it flags, and what we do about it.


Your patients hear our voice. That means our call quality is your call quality.

When Brook handles enrollment, onboarding, and outreach for your practice, every patient interaction reflects on your care team. Our QA system monitors every call, and if issues arise, we have a protocol in place to address them that same day.

AI-Powered Scoring

Three quality scores for every call. In real time.

Every patient interaction is automatically analyzed by AI across three dimensions — how the rep handled the call, how the patient felt, and whether every required step was completed. Together they cover behavior, experience, and compliance.

Engagement Score

Did the rep handle the conversation well?

0–100

Measures rep behavior on the call — the structural dynamics that decide whether a patient feels heard. Behavior patterns are scored independently of what was actually said.

  • Talk vs. listen balance
  • Interruptions
  • Dead air and long silences
  • Speaking pace
  • Responsiveness to the patient
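
As a rough sketch of how behavior signals like these could roll up into a 0–100 number (the weights, bands, and metric names here are our own illustration, not Brook's actual model):

```python
# Illustrative only: combines transcript-derived behavior metrics into a
# 0-100 engagement score. All weights and thresholds are assumptions.

def engagement_score(rep_talk_seconds: float,
                     patient_talk_seconds: float,
                     interruptions: int,
                     dead_air_seconds: float,
                     words_per_minute: float) -> float:
    """Score call behavior on 0-100; higher means better dynamics."""
    score = 100.0

    # Penalize talk-time imbalance: assume roughly 50/50 is ideal.
    total = rep_talk_seconds + patient_talk_seconds
    rep_share = rep_talk_seconds / total if total else 0.5
    score -= abs(rep_share - 0.5) * 100  # up to 50 points

    # Penalize interruptions and long silences, capped so neither dominates.
    score -= min(interruptions * 5, 20)
    score -= min(dead_air_seconds, 20)

    # Penalize speaking pace outside an assumed comfortable 120-170 wpm band.
    if not 120 <= words_per_minute <= 170:
        score -= 10

    return max(0.0, round(score, 1))

# A balanced call scores high; a rep who talks 78% of the time,
# interrupts, and leaves dead air lands well below the 80 threshold.
print(engagement_score(180, 180, 0, 0, 150))
print(engagement_score(280.8, 79.2, 3, 10, 190))
```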

Sentiment Score

How did the patient feel on the call?

0–100

Measures the patient's emotional state through tone, language, and emotional cues across the full transcript. A sentiment score is not a direct grade on the rep — a patient can arrive frustrated before the rep even speaks.

  • Tone, language, and emotional cues
  • Sentiment shifts over the course of the call
  • Frustration, confusion, and trust signals
  • Read as patient experience, not rep performance
  • Live visibility so reps can adjust in the moment
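
The per-utterance scores come from a sentiment model, but the live flagging logic can be sketched simply: watch for sharp drops between consecutive exchanges (the threshold and function below are hypothetical, not Brook's implementation):

```python
# Hypothetical sketch: flag a sharp sentiment drop between consecutive
# patient utterances so a live dashboard can surface it mid-call.
# Per-utterance scores are assumed to come from an upstream model.

def sentiment_alerts(utterance_scores: list[float],
                     drop_threshold: float = 25.0) -> list[tuple[int, float]]:
    """Return (utterance index, drop size) wherever sentiment falls
    by more than drop_threshold in a single exchange."""
    alerts = []
    for i in range(1, len(utterance_scores)):
        drop = utterance_scores[i - 1] - utterance_scores[i]
        if drop > drop_threshold:
            alerts.append((i, drop))
    return alerts

# A 64 -> 28 swing in one exchange is flagged immediately:
print(sentiment_alerts([70, 64, 28, 35]))  # -> [(2, 36)]
```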

Automated Evaluation

Did the rep follow the script and process?

PASS/FAIL

Every call is checked against the required script and procedural steps. This is the compliance leg — rules that must be followed on every single call, with no room for interpretation.

  • Required script elements delivered
  • Key phrases said verbatim where required
  • HIPAA and required disclosures completed
  • Copay and consent steps documented
  • Binary pass/fail — no subjective grading
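
In spirit, the compliance check is a verbatim-phrase checklist against the transcript. A minimal sketch, assuming example phrases that are not Brook's real evaluation criteria:

```python
# Illustrative only: the required phrases and matching rules here are
# assumptions, not Brook's actual script requirements.

REQUIRED_PHRASES = [
    "let's check your medicare eligibility first",
    "this call may be recorded",
]

def automated_evaluation(transcript: str,
                         required_phrases=REQUIRED_PHRASES) -> dict:
    """Binary pass/fail: every required phrase must appear verbatim."""
    text = transcript.lower()
    missing = [p for p in required_phrases if p not in text]
    return {"result": "PASS" if not missing else "FAIL", "missing": missing}

print(automated_evaluation(
    "This call may be recorded. "
    "Let's check your Medicare eligibility first."
))  # -> {'result': 'PASS', 'missing': []}
```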

100%

of calls scored by AI

2–5%

industry standard for manual QA review

Source: AmplifAI, 2026

Most call centers manually review a handful of calls per rep per month. Our system scores every interaction automatically — so issues don't go unnoticed.

When Any Score Drops Below 80%

A six-step protocol that closes the loop. Every time.

Scoring calls is only useful if you act on what the scores reveal. Here's the escalation protocol that runs automatically when any call falls short.

  1. Same Day

    Automated alert fires

    The system posts an alert with rep name, score, call link, and failure tags — tone, script deviation, compliance — to the QA channel. No human has to notice the problem. The system catches it.

  2. Within 24 Hours

    Root cause classified

    Every issue is diagnosed — not just documented. Was this a rep delivery issue? A confusing script? A system problem like a bad list or number reputation? We don't coach a rep for a script problem.

  3. 24–48 Hours

    Documented action plan

    If it's a rep issue: targeted coaching with specific notes on what was wrong and what good looks like, plus increased QA monitoring on their next calls. If it's a script or system issue: the script gets updated or the queue gets adjusted. Every plan includes a measurable improvement indicator.

  4. 48–72 Hours

    Re-evaluate and verify

    QA scores and conversion rates are re-checked. Every case is marked resolved, monitoring, or escalated. No issue sits in an ambiguous state.

  5. Ongoing

    Escalation to leadership if needed

    If a rep has two or more scores below 80% in a single week, or shows no improvement after coaching, they're removed from the queue or retrained. We don't let underperformance persist on calls to your patients.

  6. Weekly

    QA summary shared with leadership

    Total alerts, top issue categories, repeat offenders, systemic patterns — and the most important section: "What We Changed This Week Based on QA." Scripts updated. Queues adjusted. Reps retrained. Documented proof that the system is learning.
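
The triggering rules above can be condensed into a few lines. The thresholds come straight from the protocol (alert below 80; leadership escalation after two sub-80 scores in a week); the function and field names are illustrative assumptions:

```python
# Sketch of the escalation triggers described in the protocol above.
# Thresholds are from the text; data shapes and names are assumptions.

def triage(call_score: float, rep_prior_weekly_scores: list[float]) -> str:
    """Return the protocol action for a newly scored call."""
    if call_score >= 80:
        return "ok"
    # Count this call plus any earlier sub-80 scores from the same week.
    low_this_week = sum(1 for s in rep_prior_weekly_scores if s < 80) + 1
    if low_this_week >= 2:
        return "escalate_to_leadership"  # remove from queue or retrain
    return "alert_qa_channel"           # same-day alert, root cause next

print(triage(92, []))        # -> ok
print(triage(74, [85, 90]))  # -> alert_qa_channel
print(triage(71, [76, 88]))  # -> escalate_to_leadership
```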

Diagnose Before Acting

We classify the root cause before we take action.

Every flagged call is categorized into one of three root causes. This prevents misdiagnosis — and ensures the fix actually addresses the problem.

A

Rep-Specific

Delivery issue, tone or clarity problem, missed steps in the call flow. Fix: targeted coaching, increased monitoring on next 10 calls.

B

Script / Messaging

Confusing explanation, poor wording, high "scam perception" from the patient. Fix: script updated, wording refined, tested on next cohort.

C

System / Operational

Wrong queue, bad list, number reputation issue. Fix: queue adjusted, campaign paused if needed, list source corrected.
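
The category-to-fix routing above is essentially a lookup table. A minimal sketch (the action strings paraphrase the fixes listed; the code itself is our illustration):

```python
# Illustrative mapping of the three root-cause categories to their fixes,
# paraphrasing the descriptions above. Function names are assumptions.

ROOT_CAUSE_ACTIONS = {
    "A": ["targeted coaching", "monitor next 10 calls"],
    "B": ["update script wording", "test on next cohort"],
    "C": ["adjust queue", "pause campaign if needed", "correct list source"],
}

def actions_for(root_cause: str) -> list[str]:
    """Look up the fix plan for a classified root cause (A, B, or C).
    Combined causes like 'A + B' get both plans."""
    codes = [c.strip() for c in root_cause.split("+")]
    plan = []
    for code in codes:
        plan.extend(ROOT_CAUSE_ACTIONS[code])
    return plan

print(actions_for("A + B"))
```

Combined causes matter: the dismissive-tone example below is classified A + B, so the rep gets coaching and the script gets a device-anxiety response.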

How We Think About Quality

Four operating principles behind every scored call.

  1. Visibility over comfort

    If it's not documented, it didn't happen. Every call, every score, every action is tracked and visible.

  2. Speed matters

    Detection happens same day. Action within 48 hours. A slow QA loop is almost as bad as no QA loop.

  3. Diagnose before acting

    We don't coach a rep for a script problem. Root cause classification prevents wasted effort and ensures the right fix.

  4. Close the loop

    Every issue is tracked, has a documented action, and must show measurable improvement. Open issues don't stay open.

Examples

Here's what we actually flag.

Anonymized reviews from our QA dashboard. Each one triggered coaching, a script update, or a queue adjustment — usually within 48 hours.

Script Drift · Automated Evaluation

Rep over-promised eligibility

Rep: "You'll definitely be approved for this — I'll get you set up right now."

Script requires: "Let's check your Medicare eligibility first."
What QA Caught
Compliance risk — implied guarantee before eligibility check.
Root Cause
A — Rep deviated to build rapport.
Action Taken
1:1 coaching on regulatory risk + next 10 calls monitored.

Sentiment Drop · Sentiment Score

"Is this one of those scam calls?"

Patient (2:45): "Wait — is this one of those scam calls?"

Sentiment score dropped 64 → 28 in a single exchange.
What QA Caught
Rep listed features too fast before grounding in the partner referral.
Root Cause
B — Script intro pacing; affected 4 other reps.
Action Taken
Script rewritten to lead with referral. Scam triggers down 80%.

Missed Disclosure · Automated Evaluation

Cost question deflected

Patient: "Will this cost me anything?"

Rep: "Let me get you enrolled and we'll figure that out later."

What QA Caught
Required cost disclosure skipped. Trust erosion risk.
Root Cause
A — Rep prioritized momentum over disclosure.
Action Taken
Rep called the patient back within 4 hours with the correct copay info before consent.

Low Engagement · Engagement Score

Rep talked 78% of the call

Engagement 38 / 100

6-min call. Patient gave one-word answers. Discovery questions skipped.

What QA Caught
Talk-time imbalance + skipped discovery flow.
Root Cause
A — Defaulted to pitch mode under time pressure.
Action Taken
Role-play coaching on discovery. Engagement rose to 67 avg.

Dismissive Tone · Sentiment + Engagement

Missed empathy cue

Patient: "I'm nervous about getting confused by the device."

Rep: "Oh it's super easy, don't worry about it."

Patient went quiet; sentiment dipped; no re-engagement.
What QA Caught
Dismissive tone + missed empathy cue on stated concern.
Root Cause
A + B — Rep reflex + script had no device-anxiety response.
Action Taken
Coaching on empathy cues. Script updated with device-anxiety response.

Want to see how this runs on your calls?

Review a real flagged call, walk through the QA dashboard, or talk through the escalation protocol with your Partner Success Manager.