Building a Discharge Summary You Don't Have to Wait For

How we built Particle's AI discharge summary, and what it actually does.

Most products in transitions of care are still organized around the hospital's discharge summary as the unit of truth. That model is flawed. Roughly half of discharges never produce one, the ones that do often arrive after the highest-risk window has already closed, and even when a summary lands on time, the relevant context is buried in hundreds of pages of consult notes, progress notes, orders, and labs.

The teams holding the post-discharge window are care coordinators, transitional care management nurses, and population health managers. They're deciding who to call first, who to escalate, and who is stable enough to leave alone. To do that, they need a clear picture of what happened during the stay, and they need it fast.

This isn't really a technology problem. It's a workflow and incentive problem. The team that produced the documentation isn't the team that needs it next, and there's nothing in the discharge process that obligates the inpatient side to optimize for the people downstream. Solving it from the receiving end means stopping the wait and assembling the picture independently.

What most of the market does, and what we did instead

Most vendors in this space stop at document retrieval. They surface the ADT event, look for the discharge document, and attach it if they find one. That's useful, but it inherits the underlying problem. When the document is late, missing, or thin, the care team has nothing better to work with than they did before.

We made a different call. The moment we detect a discharge, an AI model assembles a structured clinical summary from everything available on that visit: encounter records, diagnoses, procedures, the reconciled medication list, lab trends, vitals, imaging impressions, consult notes, and social history. If the hospital's discharge document exists, it's one input into that synthesis. If it doesn't, the output is the same.

The hard part of the AI work isn't generating the summary. It's reading across a length of stay that can produce dozens of clinical sections from different clinicians (progress notes, consult notes, OR reports, H&P write-ups, specialist assessments), often with overlapping or contradictory details, and reconciling them into one consistent view. A medication change might appear in three different sections, worded three different ways. The model resolves that into a single structured list, with each claim traced back to its source section along with the context that supports it.

The output is organized around what a care team is actually trying to figure out in the first few days after discharge:

  • Why was the patient admitted, and what happened during the stay?
  • What procedures were performed?
  • What medications were started, stopped, or continued, including dose changes when documented?
  • Are there specialist follow-ups, labs, or home health visits needed, and when?

Why a care team can trust it

The model isn't asked to generate new clinical insight or make a judgment call. It's organizing what's already in the record into a form a person can verify, and the citations make that verification one click away.

We also have clinical reviewers evaluate summaries against their source records, looking for the things automated checks miss: a medication that got dropped, a follow-up that was misattributed, a finding that was understated. What they find shapes how we structure the model's input on the next iteration. The goal is a continuous loop, with each review cycle feeding improvements into the next.

What this changes

The version of this product that retrieves and attaches the hospital's document still leaves the care team waiting on someone else's process. The version we built doesn't. The summary is there inside the window that matters, structured the same way every time, regardless of what the hospital sent or when.

That's the difference we set out to make.