
January 6, 2026
Utah Just Made It Legal for AI to Prescribe Medicine
The first state to allow AI to issue routine medication refills. Here's the behavioral data that made it inevitable.
Utah and Doctronic have launched the first state-approved program in the country that allows an AI system to legally participate in medical decision-making for prescription renewals.
This didn't happen in a vacuum. For the past two years, something else was happening while health systems debated AI strategy, formed committees, and commissioned reports.
40 million Americans started using ChatGPT for healthcare. Daily.
The Behavior That Made It Inevitable
Three in five Americans say the current healthcare system is broken. Strong majorities say hospital costs (87%), poor healthcare access (77%), and nursing shortages (75%) are all serious problems. Americans give the system a C+ on access and a D+ on costs.
They weren't waiting for permission. They were using AI to get information when they first felt unwell, to prepare for visits with clinicians, to comprehend patient instructions, and to deal with the administrative aftermath of billing, claims, and denials.
The Stories Behind the Data
Ayrin Santoso is a tax professional in San Francisco. When her mother in Indonesia suffered sudden vision loss, her family attributed it to fatigue. From 8,000 miles away, Santoso entered symptoms, prior advice, and context into ChatGPT.
She received a clear warning: her mother's condition could signal a hypertensive crisis and possible stroke.
Based on this information, her mother immediately measured her blood pressure, which confirmed Santoso's fears, and went to the hospital that day. During hospitalization, ChatGPT helped translate prevention guidance into actionable steps: home monitoring routines, lifestyle adjustments. All confirmed by a doctor in Indonesia.
Santoso's mother has since recovered 95% of her vision in the affected eye.
What People Actually Do
They decode insurance policies at midnight. They prepare questions before appointments so they don't forget what to ask. They translate what their doctor said into language they can understand.
They navigate prior authorizations. They compare providers. They figure out what "covered at 80% after deductible" actually means for their family.
Rich Kaplan has a rare autoimmune clotting disease. When his insurance denied a clinician-recommended therapy, he used ChatGPT to find studies, trials, and case reports. He produced a cited literature review that supported an appeal and won approval through arbitration.
Now ChatGPT helps him with daily management between appointments: summarizing visit notes, extracting lab trends, generating question lists for specialists. When he shared his medication list and planned over-the-counter additions, it flagged kidney risks and potential interactions, helping him avoid preventable complications.
People aren't waiting for permission. They're going around the systems that were supposed to help them.
In rural communities where the nearest specialist is hours away, AI becomes the only thing that responds.
Family physician Dr. Margie Albers uses AI to serve everyone in her community. An AI scribe drafts visit notes within her clinical workflow, reducing manual data entry, medical coding, and billing work. That time savings lets her focus on patients who often travel hours for appointments and arrive with serious problems.
The problem isn't technology.
The problem is design.
The Mismatch
Most healthcare software was designed for institutions. Built around billing cycles, compliance requirements, and administrative workflows. Not around human stress and confusion.
Portals are hard to use. Information is siloed. Communication channels are slow. The experience assumes people have time, patience, and expertise they don't have.
Patients responded the only way they could: by going around these systems entirely.
This is not a failure of user training. It's a signal about what healthcare products should have been doing all along.
Healthcare products were built for the institution's workflow. Not for the patient's moment of need.
What This Means
The question for anyone building in healthcare is no longer whether to incorporate AI. The question is how to do it well.
How to maintain trust while reducing friction. How to support clinical judgment without replacing it. How to meet people in their moments of confusion and stress with useful, calm assistance.
We believe:
1. AI should be invisible. The best AI doesn't announce itself. It makes things work better.
2. AI should reduce cognitive load. It should absorb complexity, not add to it. Calm over clever.
3. AI should support judgment, not override it. Clinical expertise is earned. AI should augment it.
4. AI should work under real constraints. Stress, regulation, unreliable connectivity. Graceful failure matters.
What We Build
Healthcare Navigation
Systems that help patients find care and move through complex journeys with less friction.
Patient & Caregiver Tools
Applications that support self-management, preparation, and ongoing engagement.
Provider Support
Platforms that reduce administrative burden and help clinicians focus on care.
Education & Simulation
Adaptive learning experiences for clinical professionals.
Are your products designed for how people actually behave?
Or how you wish they would?
Data and case studies from AI as a Healthcare Ally: How Americans are navigating the system with ChatGPT, OpenAI, January 2026. The stories of Ayrin Santoso, Rich Kaplan, and Dr. Margie Albers are documented in the original report.
The integration of AI into healthcare is not a future event. The organizations that build thoughtfully for this moment will define how healthcare works for the next decade.
