Phase 5: Provider Portal¶
Detailed implementation plan for the provider-facing portal — the clinical feedback loop that closes the AI improvement cycle.
Why This Matters¶
Without provider feedback, the AI improves only from patient behavior signals (match acceptance, satisfaction surveys). With it, clinical experts correct the AI directly — wrong ICD codes get fixed, missed conditions get added, and those corrections flow back into better prompts. This is the difference between a system that guesses and one that learns from doctors.
```mermaid
graph LR
    A[Patient Uploads<br/>Medical Report] --> B[AI Extracts<br/>Clinical Data]
    B --> C[EHR Built &<br/>Matches Generated]
    C --> D[Records Forwarded<br/>to Provider]
    D --> E[Provider Reviews<br/>& Corrects EHR]
    E --> F[Corrections Feed<br/>Back to AI]
    F --> B
    style E fill:#FF7F50,color:#fff
    style F fill:#008B8B,color:#fff
```
What Already Exists (Phases 1-4)¶
| Component | Status | Location |
|---|---|---|
| Provider feedback endpoint | Built | POST /api/v1/cases/{id}/provider-feedback |
| Feedback records table | Built | feedback_records with correction lifecycle |
| Correction pattern detector | Built | feedback_service.get_correction_patterns() |
| Eval pipeline | Built | Nightly extraction accuracy eval |
| Weight optimizer | Built | Monthly matching weight adjustment |
| Provider model | Built | providers table (42 rows) |
| Doctor model | Built | doctors table (8 rows) with procedures |
What's missing: Provider authentication, provider-facing UI, EHR review workflow, and the correction-to-prompt automation pipeline.
Architecture¶
System Boundaries¶
```mermaid
graph TD
    subgraph "Patient App (app.curaway.ai)"
        PA[Patient Chat]
        PU[Document Upload]
        PM[Match Results]
    end
    subgraph "Provider Portal (providers.curaway.ai)"
        PL[Provider Login]
        PD[Dashboard]
        CR[Case Review]
        EHR[EHR Correction]
        RS[Response Actions]
    end
    subgraph "Backend (services.curaway.ai)"
        API[FastAPI API]
        FB[Feedback Service]
        EV[Eval Pipeline]
    end
    PA --> API
    PU --> API
    PM --> API
    PL --> API
    PD --> API
    CR --> API
    EHR --> FB
    RS --> API
    FB --> EV
    style PL fill:#008B8B,color:#fff
    style CR fill:#008B8B,color:#fff
    style EHR fill:#FF7F50,color:#fff
```
Deployment¶
| Component | Platform | Domain |
|---|---|---|
| Provider Portal Frontend | Vercel (separate project) | providers.curaway.ai |
| Backend API | Railway (existing) | services.curaway.ai |
| Auth | Clerk (existing, new org type) | — |
Part A: Provider Authentication & Authorization¶
A1. Clerk Organization Setup¶
Create a new Clerk organization type for providers. Each provider (hospital) is a Clerk Organization with members (doctors, admins, coordinators).
| Role | Permissions | Who |
|---|---|---|
| `provider_admin` | View all cases, manage doctors, respond to cases, view analytics | Hospital admin |
| `provider_doctor` | View assigned cases, review EHR, submit corrections, respond | Individual doctor |
| `provider_coordinator` | View cases, respond, schedule, but NOT review EHR clinically | Patient coordinator |
A2. Backend Auth Changes¶
```python
# New middleware: extract provider context from Clerk JWT
# app/middleware/provider_auth.py
async def get_provider_context(request: Request) -> ProviderContext:
    """Extract provider_id, doctor_id, role from Clerk JWT claims."""
    claims = verify_clerk_jwt(request.headers.get("Authorization"))
    org_id = claims.get("org_id")    # Clerk organization = provider
    role = claims.get("org_role")    # provider_admin, provider_doctor, provider_coordinator

    # Map Clerk org_id to our provider_id
    provider = await get_provider_by_clerk_org(db, org_id)
    return ProviderContext(
        provider_id=provider.id,
        provider_name=provider.name,
        doctor_id=claims.get("doctor_id"),  # custom claim for doctor users
        role=role,
        tenant_id=provider.tenant_id,
    )
```
A3. New API Endpoints (Provider-Scoped)¶
| Method | Path | Role | Description |
|---|---|---|---|
| GET | `/api/v1/provider-portal/cases` | All | List cases forwarded to this provider |
| GET | `/api/v1/provider-portal/cases/{id}` | All | Case detail with EHR, documents, patient summary |
| GET | `/api/v1/provider-portal/cases/{id}/ehr` | Doctor+ | Full EHR for clinical review |
| GET | `/api/v1/provider-portal/cases/{id}/documents` | All | View uploaded documents (presigned URLs) |
| POST | `/api/v1/provider-portal/cases/{id}/respond` | All | Acknowledge, schedule, decline |
| POST | `/api/v1/provider-portal/cases/{id}/ehr-review` | Doctor | Submit EHR corrections |
| GET | `/api/v1/provider-portal/dashboard` | Admin | Analytics: cases received, response rate, demographics |
| GET | `/api/v1/provider-portal/doctors` | Admin | List doctors at this provider |
| PATCH | `/api/v1/provider-portal/doctors/{id}` | Admin | Update doctor profile |
Effort: 8-10h (router + middleware + Clerk setup)
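The Role column above implies a hierarchy check in the router. A minimal sketch of that gate, in pure Python so it can sit inside a FastAPI dependency alongside `get_provider_context` — note that `ROLE_RANK` and `role_allows` are hypothetical names, not existing code:

```python
# Hypothetical role hierarchy: higher rank = broader access.
# (Assumption: admins can do everything doctors can; adjust if EHR
# review should be doctor-exclusive.)
ROLE_RANK = {"provider_coordinator": 1, "provider_doctor": 2, "provider_admin": 3}

def role_allows(actual: str, minimum: str) -> bool:
    """True if the caller's role meets or exceeds the endpoint's minimum."""
    return ROLE_RANK.get(actual, 0) >= ROLE_RANK[minimum]

# "Doctor+" endpoints accept doctors and admins, never coordinators.
assert role_allows("provider_admin", "provider_doctor")
assert not role_allows("provider_coordinator", "provider_doctor")
```

In the real router this would raise a 403 from a `Depends`-style dependency rather than returning a boolean.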
Part B: Provider Dashboard¶
B1. Dashboard View¶
The provider admin sees an overview of their incoming cases.
```
┌──────────────────────────────────────────────────────────┐
│  Apollo Hospitals Chennai — Provider Dashboard           │
│  ───────────────────────────────────────────────────────  │
│                                                          │
│  Cases This Month: 12   │  Response Rate: 91%            │
│  Avg Response Time: 4.2h │ Satisfaction: 4.6/5.0         │
│                                                          │
│  ┌────────────────────────────────────────────────────┐  │
│  │ Pending Cases (3)                                  │  │
│  │                                                    │  │
│  │ CRW-2026-00015 · TKR  · Aisha A.    · 2h ago       │  │
│  │ CRW-2026-00014 · CABG · Mohammed K. · 5h ago       │  │
│  │ CRW-2026-00013 · TKR  · Sarah L.    · 1d ago       │  │
│  └────────────────────────────────────────────────────┘  │
│                                                          │
│  ┌────────────────────────────────────────────────────┐  │
│  │ Responded (9)                                      │  │
│  │ Scheduled: 6 │ Acknowledged: 2 │ Declined: 1       │  │
│  └────────────────────────────────────────────────────┘  │
└──────────────────────────────────────────────────────────┘
```
B2. Data Requirements¶
Query from existing tables:
- `cases` WHERE status = 'forwarded' AND provider in selected_providers
- `events` WHERE event_type IN ('provider.acknowledged', 'provider.scheduled', 'provider.declined')
- `feedback_records` for correction activity
- `doctors` + `doctor_procedures` for staff listing
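Assuming `selected_providers` is a Postgres array column, the pending-cases query behind the dashboard might look like the following sketch. The table and column names follow the list above, but the exact schema is an assumption:

```python
# Hedged sketch: pending = forwarded to this provider, with no
# acknowledge/schedule/decline event recorded yet.
PENDING_CASES_SQL = """
SELECT c.id, c.case_number, c.procedure_code, c.created_at
FROM cases c
WHERE c.status = 'forwarded'
  AND :provider_id = ANY(c.selected_providers)
  AND NOT EXISTS (
      SELECT 1 FROM events e
      WHERE e.case_id = c.id
        AND e.event_type IN ('provider.acknowledged',
                             'provider.scheduled',
                             'provider.declined')
  )
ORDER BY c.created_at DESC
"""
```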
Effort: 6-8h (backend queries + frontend dashboard)
Part C: Case Review & EHR Correction¶
This is the core clinical feedback feature.
C1. Case Review Flow¶
```mermaid
stateDiagram-v2
    [*] --> Received: Case forwarded
    Received --> UnderReview: Provider opens case
    UnderReview --> EHRReviewed: Doctor reviews EHR
    EHRReviewed --> Corrected: Corrections submitted
    EHRReviewed --> Confirmed: No corrections needed
    Corrected --> Scheduled: Appointment booked
    Confirmed --> Scheduled: Appointment booked
    Scheduled --> [*]
    UnderReview --> Declined: Provider declines
    Declined --> [*]
```
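The state machine above can be enforced server-side with a simple transition table before any status write. A sketch, with state names mirroring the diagram (the snake_case identifiers are assumptions):

```python
# Legal case-status transitions, mirroring the state diagram.
TRANSITIONS = {
    "received": {"under_review"},
    "under_review": {"ehr_reviewed", "declined"},
    "ehr_reviewed": {"corrected", "confirmed"},
    "corrected": {"scheduled"},
    "confirmed": {"scheduled"},
}

def can_transition(current: str, target: str) -> bool:
    """Reject illegal status jumps before persisting a case update."""
    return target in TRANSITIONS.get(current, set())

assert can_transition("under_review", "declined")
assert not can_transition("received", "scheduled")
```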
C2. EHR Review Interface¶
The doctor sees the AI-generated EHR side-by-side with the uploaded documents.
```
┌─────────────────────────┬───────────────────────────────┐
│ AI-Generated EHR        │ Source Documents              │
│ ──────────────────────  │ ────────────────────────────  │
│                         │                               │
│ Conditions:             │ [PDF Viewer]                  │
│ ✅ M17.11 OA Right Knee │ blood_work_feb2024.pdf        │
│ ⚠️ K76.0 Fatty Liver    │                               │
│ ⚠️ R00.1 Bradycardia    │ Page 1 of 3                   │
│ ➕ Add condition...     │ ┌───────────────────────┐     │
│                         │ │                       │     │
│ Lab Values:             │ │    [PDF content]      │     │
│ Hemoglobin: 13.5 g/dL   │ │                       │     │
│ HbA1c: 5.8%    [Edit]   │ │                       │     │
│ eGFR: 88       [Edit]   │ └───────────────────────┘     │
│ Creatinine: 0.9 [Edit]  │                               │
│                         │ Uploaded: 2026-03-31          │
│ Comorbidities:          │ OCR Confidence: 94%           │
│ • Fatty liver (K76.0)   │                               │
│ • Bradycardia (R00.1)   │                               │
│ ➕ Add comorbidity...   │                               │
│                         │                               │
│ [Confirm Accurate]      │                               │
│ [Submit Corrections]    │                               │
└─────────────────────────┴───────────────────────────────┘
```
C3. Correction Actions¶
For each EHR field, the doctor can:
| Action | Example | What Happens |
|---|---|---|
| Confirm | ✅ Click checkmark next to condition | Marked as clinically confirmed |
| Edit | Change HbA1c from 5.8 to 6.2 | Creates correction record with old/new value |
| Add | Add "E11.9 Type 2 Diabetes" | Creates conditions_added correction |
| Remove | Remove false-positive condition | Creates conditions_removed correction |
| Flag | Flag observation as clinically significant | Adds clinical note for Curaway team |
C4. Correction Submission¶
```python
# What the provider submits
{
    "reviewer": "Dr. Rajesh Patel",
    "reviewer_doctor_id": "uuid",
    "review_time_minutes": 8,
    "conditions_confirmed": ["M17.11"],  # AI got these right
    "conditions_added": [
        {"icd10": "E11.9", "name": "Type 2 Diabetes Mellitus", "note": "HbA1c trending upward, 6.2 not 5.8"}
    ],
    "conditions_removed": [],
    "observations_corrected": [
        {"parameter": "HbA1c", "ai_value": 5.8, "correct_value": 6.2, "note": "Misread from report"}
    ],
    "overall_accuracy_rating": 4,  # 1-5 stars
    "clinical_notes": "Good extraction overall. Missed pre-diabetic indication.",
    "response_action": "scheduled",  # acknowledged | scheduled | declined
    "scheduled_date": "2026-04-15T10:00:00Z",
    "assigned_doctor_id": "uuid",
}
```
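Server-side validation of this payload can be sketched with plain dataclasses. The production code would more likely use Pydantic; field names below mirror the payload, but the class names and rules are assumptions:

```python
from dataclasses import dataclass, field

VALID_ACTIONS = {"acknowledged", "scheduled", "declined"}

@dataclass
class EHRReviewSubmission:
    """Hypothetical schema for POST .../ehr-review (sketch only)."""
    reviewer: str
    reviewer_doctor_id: str
    response_action: str
    review_time_minutes: int = 0
    conditions_confirmed: list = field(default_factory=list)
    conditions_added: list = field(default_factory=list)
    conditions_removed: list = field(default_factory=list)
    observations_corrected: list = field(default_factory=list)
    overall_accuracy_rating: int = 5
    clinical_notes: str = ""

    def __post_init__(self):
        if self.response_action not in VALID_ACTIONS:
            raise ValueError(f"invalid response_action: {self.response_action}")
        if not 1 <= self.overall_accuracy_rating <= 5:
            raise ValueError("overall_accuracy_rating must be 1-5")
```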
C5. Backend Processing¶
When a correction is submitted:
- Store outcome event via `outcome_tracker.record_provider_clinical_feedback()`
- Create feedback record via `feedback_service.create_feedback_record()` with:
    - `feedback_type = "extraction_accuracy"`
    - `ai_output` = the AI-generated EHR
    - `ground_truth` = the corrected EHR
    - `correction_type` = "condition_missed" / "observation_wrong" / "condition_false_positive"
- Update case status: `provider_reviewing` → `scheduled` or `declined`
- Notify patient via Resend email: "Your provider has reviewed your records and scheduled an appointment"
- Feed into eval pipeline — nightly extraction eval now has ground truth data
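The `ground_truth` value is derived by applying the doctor's corrections to a copy of the AI output, leaving the original extraction untouched (per the append-only rule in Security Considerations). A minimal sketch, assuming the C4 payload shape and that `conditions_removed` carries ICD-10 codes:

```python
import copy

def apply_corrections(ai_ehr: dict, submission: dict) -> dict:
    """Build the ground-truth EHR from the AI EHR plus a review submission.

    Sketch only: assumes ai_ehr has "conditions" (dicts with "icd10")
    and "observations" (dicts with "parameter" and "value").
    """
    truth = copy.deepcopy(ai_ehr)  # never mutate the original extraction
    removed = set(submission.get("conditions_removed", []))
    truth["conditions"] = [
        c for c in truth.get("conditions", []) if c["icd10"] not in removed
    ] + submission.get("conditions_added", [])
    fixes = {o["parameter"]: o["correct_value"]
             for o in submission.get("observations_corrected", [])}
    for obs in truth.get("observations", []):
        if obs["parameter"] in fixes:
            obs["value"] = fixes[obs["parameter"]]
    return truth
```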
Effort: 8-12h (backend + frontend)
Part D: Correction-to-Prompt Pipeline¶
This is the automated feedback loop that makes the AI smarter over time.
D1. Pattern Detection (Already Built)¶
```python
# Runs daily at 4am via QStash
patterns = await get_correction_patterns(db, tenant_id, min_occurrences=3)

# Example output:
# [
#     {"correction_type": "condition_missed", "occurrences": 7,
#      "details": [{"icd10": "E11.9", "context": "HbA1c 6.0-6.4 range"}]},
#     {"correction_type": "observation_wrong", "occurrences": 4,
#      "details": [{"parameter": "HbA1c", "typical_error": "off by 0.2-0.5"}]},
# ]
```
D2. Few-Shot Example Generation¶
When a pattern reaches threshold (3+ occurrences), auto-generate a new few-shot example:
```python
# app/services/prompt_improver.py
async def generate_correction_example(pattern: dict) -> dict:
    """Generate a few-shot example from a recurring correction pattern."""
    if pattern["correction_type"] == "condition_missed":
        # Build an example showing the correct extraction
        return {
            "input": "Patient lab values: HbA1c 6.2%, fasting glucose 118 mg/dL",
            "expected_output": {
                "conditions": [
                    {"name": "Pre-diabetes", "icd10": "R73.03", "confidence": 0.85},
                    {"name": "Type 2 Diabetes Mellitus", "icd10": "E11.9", "confidence": 0.70,
                     "note": "HbA1c >= 6.0 warrants diabetes screening"},
                ]
            },
            "explanation": "HbA1c in 6.0-6.4 range indicates pre-diabetic state. "
                           "Combined with elevated fasting glucose, Type 2 DM should be flagged.",
            "source": f"Provider corrections (n={pattern['occurrences']})",
        }
    elif pattern["correction_type"] == "observation_wrong":
        return {
            "input": "Lab report shows HbA1c: 6.2%",
            "expected_output": {"value": 6.2, "unit": "%"},
            "explanation": "Ensure exact value extraction — do not round or approximate.",
            "source": f"Provider corrections (n={pattern['occurrences']})",
        }
```
D3. Prompt Update Workflow¶
```mermaid
graph TD
    A[Pattern Detected<br/>3+ corrections] --> B[Generate<br/>Few-Shot Example]
    B --> C[Stage in Langfuse<br/>as Draft Prompt]
    C --> D{Admin Review}
    D -->|Approve| E[A/B Test:<br/>Old vs New Prompt]
    D -->|Reject| F[Archive]
    E --> G{New Prompt<br/>Better?}
    G -->|Yes| H[Promote to<br/>Production]
    G -->|No| I[Revert]
    H --> J[Mark Corrections<br/>as Applied]
```
Steps:
1. Pattern detected by `eval/feedback-patterns` QStash task
2. Example generated by `prompt_improver.generate_correction_example()`
3. Staged in Langfuse as a new prompt version (draft)
4. Admin reviews via internal dashboard or Langfuse UI
5. A/B test — Langfuse serves old prompt to 50% of requests, new to 50%
6. Compare via `eval/extraction-accuracy` — if new prompt has higher F1, promote
7. Mark applied — `feedback_service.mark_correction_applied()` on all contributing corrections
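One way to make the 50/50 split in step 5 deterministic is to hash a stable identifier, so a given case always hits the same prompt variant and its eval scores attribute cleanly to one version. This is a sketch of the idea, not the planned Langfuse label routing:

```python
import hashlib

def prompt_variant(case_id: str, split: float = 0.5) -> str:
    """Stable draft/production assignment for A/B testing a prompt.

    Hashing avoids per-request randomness: retries and re-runs of the
    same case always see the same prompt version.
    """
    digest = hashlib.sha256(case_id.encode()).digest()
    bucket = digest[0] / 255.0  # stable value in [0, 1]
    return "draft" if bucket < split else "production"
```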
D4. Langfuse Prompt Management Integration¶
```python
# Stage a new prompt version in Langfuse
from langfuse import Langfuse

langfuse = Langfuse()

# Add the correction-derived example to the prompt
current_prompt = langfuse.get_prompt("clinical_context_extraction")
new_examples = current_prompt.config.get("few_shot_examples", [])
new_examples.append(correction_example)

langfuse.create_prompt(
    name="clinical_context_extraction",
    prompt=current_prompt.prompt,
    config={
        **current_prompt.config,
        "few_shot_examples": new_examples,
        "version_note": f"Added {len(new_examples)} correction-derived examples",
    },
    labels=["draft"],  # Not yet production
)
```
Effort: 6-8h (prompt_improver service + Langfuse integration + admin review UI)
Part E: Provider Analytics Dashboard¶
E1. Metrics¶
| Metric | Source | Query |
|---|---|---|
| Cases received (month) | `events` WHERE event_type = 'provider.notified' | COUNT by month |
| Response rate | `events` WHERE event_type LIKE 'provider.%' | responded / notified |
| Avg response time | provider.responded - provider.notified timestamps | AVG in hours |
| EHR accuracy rating | `feedback_records` WHERE feedback_type = 'extraction_accuracy' | AVG quality_score |
| Top procedures | `cases` joined with `provider_procedures` | COUNT by procedure_code |
| Patient demographics | `patients` joined with forwarded cases | country, age, gender distribution |
| Doctor workload | `cases` assigned to each doctor | COUNT by doctor_id |
E2. Provider-Specific Insights¶
```
┌─────────────────────────────────────────────────┐
│ AI Accuracy for Your Cases                      │
│ ──────────────────────────────────────────────  │
│                                                 │
│ Extraction Accuracy: 87% (↑3% from last month)  │
│ Conditions correctly identified: 94%            │
│ Lab values correctly extracted: 91%             │
│ Most common corrections:                        │
│   1. HbA1c value misread (4 cases)              │
│   2. Pre-diabetes not flagged (3 cases)         │
│                                                 │
│ Your corrections are improving the AI!          │
│ 12 corrections applied to prompts this month    │
└─────────────────────────────────────────────────┘
```
Effort: 6-8h (backend aggregation queries + frontend dashboard)
Implementation Plan¶
Sprint 1: Auth + Case Listing (Week 1)¶
| Task | Effort | Deliverable |
|---|---|---|
| Clerk provider org setup | 2h | Organization type, roles, invite flow |
| Provider auth middleware | 3h | JWT validation, provider context extraction |
| Provider-scoped case listing | 3h | GET /provider-portal/cases with filtering |
| Case detail endpoint | 2h | GET /provider-portal/cases/{id} with EHR + docs |
| Sprint 1 total | 10h | Provider can log in and see forwarded cases |
Sprint 2: EHR Review + Corrections (Week 2)¶
| Task | Effort | Deliverable |
|---|---|---|
| EHR review UI (split view: EHR + PDF) | 6h | Side-by-side review interface |
| Correction form (confirm/edit/add/remove) | 4h | Per-field correction actions |
| POST /provider-portal/cases/{id}/ehr-review | 3h | Backend processing + feedback record creation |
| Response actions (acknowledge/schedule/decline) | 2h | POST /provider-portal/cases/{id}/respond |
| Patient notification on provider response | 1h | Resend email trigger |
| Sprint 2 total | 16h | Full review + correction workflow |
Sprint 3: Correction Pipeline + Dashboard (Week 3)¶
| Task | Effort | Deliverable |
|---|---|---|
| prompt_improver service | 4h | Generate few-shot examples from patterns |
| Langfuse prompt staging | 3h | Stage draft prompts, A/B test setup |
| Admin approval UI | 3h | Review staged prompts, approve/reject |
| Provider analytics dashboard | 6h | Cases, response rate, AI accuracy, workload |
| Provider-specific insights | 2h | Accuracy trends, correction impact |
| Sprint 3 total | 18h | Automated learning loop + provider analytics |
Sprint 4: Polish + Testing (Week 4)¶
| Task | Effort | Deliverable |
|---|---|---|
| E2E tests for provider flow | 4h | Login → review → correct → verify feedback loop |
| RBAC enforcement tests | 2h | Doctor can review, coordinator cannot |
| Load testing (many corrections) | 2h | Ensure eval pipeline handles volume |
| Documentation | 2h | Update living docs, add ADR for provider portal |
| Security audit | 2h | Verify no patient PII leaks to wrong provider |
| Sprint 4 total | 12h | Production-ready |
Total Effort¶
| Sprint | Scope | Effort |
|---|---|---|
| Sprint 1 | Auth + Case Listing | 10h |
| Sprint 2 | EHR Review + Corrections | 16h |
| Sprint 3 | Correction Pipeline + Dashboard | 18h |
| Sprint 4 | Polish + Testing | 12h |
| Total | All sprints | 56h |
In calendar time¶
| Working model | Duration |
|---|---|
| Solo (SD + Claude Code, 4h/day) | 3.5 weeks |
| Solo (SD + Claude Code, 8h/day) | 1.5 weeks |
| 2-person team | 1 week |
| 3-person Hyderabad team | 4-5 days |
Dependencies¶
| Dependency | Required For | Status |
|---|---|---|
| Provider partnership signed | Sprint 1 (real Clerk org) | Pending business |
| Clerk Organizations plan | Sprint 1 (multi-org) | Free tier supports it |
| Provider email addresses | Sprint 1 (invitations) | Need from partners |
| CORS for providers.curaway.ai | Sprint 1 (API access) | Add to CORS list |
| Vercel project for portal | Sprint 2 (frontend deploy) | Create when ready |
Security Considerations¶
- Provider isolation — Provider can ONLY see cases forwarded to them, never other providers' cases
- Patient PII — Provider sees patient name + medical data ONLY after patient consents to forwarding
- Doctor assignment — EHR corrections linked to specific doctor for accountability
- Audit trail — Every provider action logged in events table (who viewed what, when)
- Correction integrity — Original AI extraction preserved; corrections stored separately (append-only)
- GDPR — If patient revokes consent, provider access to case is revoked immediately
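Provider isolation is ultimately a WHERE clause on `provider_id`, but a defense-in-depth filter at the serialization boundary is cheap insurance against a mis-scoped query. A sketch; `scope_to_provider` is a hypothetical helper, not existing code:

```python
def scope_to_provider(rows: list[dict], ctx_provider_id: str) -> list[dict]:
    """Drop any row not belonging to the caller's provider.

    Last-resort safety net: the primary control is the scoped SQL query,
    but this guarantees a bug upstream never leaks another provider's
    cases into a response.
    """
    return [r for r in rows if r.get("provider_id") == ctx_provider_id]
```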
Feature Flags¶
| Flag | Default | Description |
|---|---|---|
| `provider_portal_enabled` | OFF | Master switch for provider portal |
| `provider_ehr_review` | OFF | Allow doctors to submit EHR corrections |
| `provider_auto_prompt_staging` | OFF | Auto-stage prompt updates from correction patterns |
| `provider_analytics_dashboard` | OFF | Show analytics to provider admins |
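A minimal flag reader consistent with the table above. This is a sketch under the assumption that flags arrive as a string-keyed mapping (env vars or a settings row); the actual flag store is undecided:

```python
def flag(env: dict, name: str, default: bool = False) -> bool:
    """Boolean feature-flag lookup; unset flags fall back to the default."""
    return str(env.get(name, default)).lower() in ("1", "true", "on")

def ehr_review_enabled(env: dict) -> bool:
    """Dependent flag: EHR review also requires the master switch."""
    return flag(env, "provider_portal_enabled") and flag(env, "provider_ehr_review")

# All flags default OFF, matching the table.
assert flag({}, "provider_portal_enabled") is False
```

Checking the master switch inside dependent flags keeps a single kill switch for the whole portal.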