Measuring progress in steady splits: Agentic AI for time to care
September 23, 2025 | Utkarsh Tripathi
I’m standing at the starting line of another 10K, breath visible in the crisp 5 a.m. air. My shoes are laced tight, my heart steady but eager. The miles ahead aren’t about speed or competition, but about the journey. Each step is a commitment to keep going, to adjust my stride over Pittsburgh’s uneven trails, to breathe through the ache, and to find joy in the rhythm of progress. I cue the playlist, check my smartwatch, settle into pace, and stay mindful of the path, knowing every moment builds toward something meaningful.
Healthcare feels like this journey — not a race, but a deeply human endeavor driven by curiosity, compassion, and craft to nurture, heal, and improve lives. It’s a landscape of complex needs, from patient care to operational demands, where every step matters and every decision has weight. This is where technology becomes a steadfast partner, helping navigate winding paths with autonomy and accuracy — bringing clinicians, administrators and patients together to pursue better care.
Artificial Intelligence (AI) isn’t new — the term was coined roughly 70 years ago by John McCarthy — but its capabilities and impact have accelerated dramatically. It’s also important to understand that AI shouldn’t be treated as an inscrutable black box; we apply transparency, monitoring and human oversight so it extends judgment rather than obscuring it. At the end of the day, it’s math all the way down: gradients, optimizers, and a massive number of matrix multiplications power modern AI. But math doesn’t replace meaning. AI augments expertise: it never replaces the human presence at the bedside, the ethical judgment in ambiguous moments, or the trust built in conversation. Technology is meant to serve the clinical craft, not supplant it.
Traditional or classical/statistical machine learning primarily classifies and predicts. It supports decisions, flags risk and summarizes information for clinicians and administrators to act on. Modern AI goes further; it plans multi-step workflows and takes bounded actions across electronic health records (EHRs), scheduling, messaging and monitoring systems under clear guardrails. Put simply, traditional AI informs, whereas modern AI, which includes generative AI (genAI) and agentic AI, coordinates and acts on our behalf.
Under the hood, models learn by tuning parameters that map inputs to outputs, optimizing them so predictions grow more accurate over time. Scale up the parameters and training data, apply more sophisticated architectures, and the system becomes capable of modeling complex patterns. For large language models (and genAI more broadly), the pretraining objective is to predict the next token given context, which is what enables generative behavior. But the jump from genAI to agents is not straightforward. Moving from genAI to reliable agents requires tool integrations, planning, identity and access controls, error handling, and safety layers: achievable, but not trivial.
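To make "tuning parameters so predictions get more accurate" concrete, here is a deliberately tiny sketch, nothing like a real LLM: a two-parameter model fit by gradient descent on toy data. The data, learning rate and step count are illustrative assumptions, but the loop is the same idea that, at massive scale, trains modern models.

```python
# Toy illustration of learning as parameter tuning (not a real LLM).
# We fit y = w*x + b to data generated by y = 2x + 1, nudging the
# parameters downhill on the mean squared error at every step.
def train(steps=200, lr=0.05):
    data = [(x, 2 * x + 1) for x in range(-3, 4)]  # (input, target) pairs
    w, b = 0.0, 0.0  # parameters start uninformed
    for _ in range(steps):
        # gradients of mean squared error with respect to w and b
        gw = sum(2 * (w * x + b - y) * x for x, y in data) / len(data)
        gb = sum(2 * (w * x + b - y) for x, y in data) / len(data)
        w -= lr * gw  # the "optimizer step": follow the gradient downhill
        b -= lr * gb
    return w, b

w, b = train()
print(round(w, 2), round(b, 2))  # converges toward 2 and 1
```

Swap the two scalars for billions of parameters, the squared error for a next-token prediction loss, and the hand-derived gradients for automatic differentiation, and you have the recipe behind genAI.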
Foundation models weren’t trained on an organization’s private systems or procedures, so they need tools (secure APIs, knowledge bases and integrations) plus policies and feedback loops to perceive, plan and act safely in the real world.
As roles evolve with these capabilities, the next frontier is connected multi-agent care systems in which domain-specific agents collaborate under governance and work side by side with healthcare professionals. Medical practitioners focus on judgment and relationship-centered tasks while agents handle repeatable steps like eligibility checks, order tracking, documentation drafts and follow-ups. Administrators, revenue cycle teams and operations leaders oversee throughput, quality and cost. IT/clinical informatics teams manage integrations, guardrails and monitoring. Instead of siloed point solutions, collaborating agents — intake, eligibility, triage, clinician assistant, referral, revenue cycle and follow-up — hand off context and close loops such as results follow-up, referrals, prior authorization and care gaps with auditable trails. With clear scopes, agents act within bounded autonomy, while humans retain authority over ethics-laden or ambiguous decisions. The result is orchestration that shortens time to care, reduces administrative load, and reliably turns intention into outcomes, while keeping accountability where it belongs.
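The handoff-with-guardrails pattern can be sketched in a few lines. Everything here is hypothetical — the agent names, context fields and routing rules are illustrative, not a real product API — but it shows the two properties the paragraph above emphasizes: every handoff leaves an audit entry, and ambiguous cases escalate to a human rather than being auto-decided.

```python
# Hypothetical sketch of agents handing off shared context under guardrails.
def intake_agent(ctx, audit):
    ctx["patient_verified"] = True
    audit.append("intake: demographics captured")
    return ctx

def eligibility_agent(ctx, audit):
    ctx["eligible"] = ctx.get("plan") == "in-network"
    audit.append("eligibility: " + ("approved" if ctx["eligible"] else "flagged"))
    return ctx

def triage_agent(ctx, audit):
    # Bounded autonomy: ambiguous cases always route to a clinician.
    if ctx.get("symptoms_unclear"):
        ctx["route"] = "human_review"
        audit.append("triage: escalated to clinician")
    else:
        ctx["route"] = "scheduling"
        audit.append("triage: routed to scheduling")
    return ctx

def run_pipeline(ctx):
    audit = []  # auditable trail of every handoff
    for agent in (intake_agent, eligibility_agent, triage_agent):
        ctx = agent(ctx, audit)
    return ctx, audit

ctx, audit = run_pipeline({"plan": "in-network", "symptoms_unclear": True})
print(ctx["route"])  # human_review
```

A production system would add identity and access controls, retries, and real EHR integrations at each step; the skeleton of context passing plus an append-only audit log stays the same.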
Like a long run, improvement in healthcare is measured in steady splits and strong handoffs, not sprints. The goal isn’t replacing people; it’s returning time to care, reinforcing good judgment, and ensuring every step counts. The finish line keeps advancing, and with humans and agents aligned, that’s precisely the plan.
Stay tuned to Inside Angle for more AI thoughts and insights.
Utkarsh Tripathi is a senior AI and machine learning engineer at Solventum.