June 2, 2017 | Gretchen Mills
There is a lot of talk about “big data” and its impact on health care. So let’s dig deeper and ask:
“How can shared clinical data be used to create administrative efficiencies and improve quality?”
There is a common understanding that improving quality will lead to improved outcomes and lower costs, a goal on which providers and payers can both agree. One challenge in achieving this goal is how to share more detailed clinical data to support quality improvement, whether for peer comparisons of clinical best practices or for administrative activities like automated approvals of surgeries that meet clinical medical necessity criteria. This kind of detailed data is buried in unstructured text in documents such as surgical notes and radiology reports.
Historically, payers have had access to large claim data sets but not to the clinical details found in medical record documentation. Payers and providers devote significant clinician time, largely nurse and physician hours, to preparing and reviewing the detailed clinical information that supports medical necessity determinations for payer prior authorizations. Now picture an automated process in which the data is “mined,” meaning pulled from medical records and sent to the payer for an automated review of the required elements. Requests that meet criteria would be presented for fast-track approval; those that do not would be routed to a more detailed physician review.
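The routing logic described above can be sketched in a few lines. This is a minimal illustration only: the extracted fields and the medical-necessity criteria below are hypothetical, not any payer's actual rules.

```python
# Hypothetical elements "mined" from the medical record for a
# knee-replacement prior-authorization request.
extracted = {
    "documented_conservative_therapy": True,   # e.g., 12 weeks of physical therapy
    "imaging_confirms_osteoarthritis": True,
    "pain_score": 8,                           # 0-10 scale
}

def meets_criteria(elements):
    """Check the mined elements against illustrative medical-necessity criteria."""
    return (
        elements.get("documented_conservative_therapy", False)
        and elements.get("imaging_confirms_osteoarthritis", False)
        and elements.get("pain_score", 0) >= 7
    )

def route_request(elements):
    """Fast-track requests that meet all criteria; send the rest to physician review."""
    if meets_criteria(elements):
        return "fast-track approval"
    return "physician review"

print(route_request(extracted))  # → fast-track approval
```

In a real system the hard part is the first step, reliably extracting those elements from unstructured notes, which is where NLP comes in.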
To advance clinical quality improvement, companies are using natural language processing (NLP) to mine unstructured medical record data with clinical algorithms that uncover insights, helping providers and payers identify patients/members who could benefit from care management interventions. These algorithms can combine clinical data with social and demographic information to identify patients who may need more support post-discharge (e.g., a patient/member lives alone and exhibits signs of early dementia).
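A rule of the kind just described, combining an NLP-derived clinical finding with a social factor, might look like the following sketch. The member fields and the specific rule are illustrative assumptions, not an actual care-management algorithm.

```python
def needs_post_discharge_support(member):
    """Flag a member whose combined clinical and social risk factors
    suggest a care-management intervention (illustrative rule only)."""
    clinical_risk = "early dementia" in member["nlp_findings"]
    social_risk = member["lives_alone"]
    return clinical_risk and social_risk

# Hypothetical members: NLP findings would come from mining chart notes.
members = [
    {"id": "M001", "lives_alone": True,  "nlp_findings": ["early dementia"]},
    {"id": "M002", "lives_alone": False, "nlp_findings": ["early dementia"]},
]

flagged = [m["id"] for m in members if needs_post_discharge_support(m)]
print(flagged)  # → ['M001']
```

The value of NLP here is populating fields like `nlp_findings` automatically from free-text documentation, so that rules like this can run across an entire member population.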
This is just the tip of the iceberg, however. Getting even more granular with patient-specific data, we can use artificial intelligence to text-mine medical documentation for entire populations. In lay terms, this means applying algorithms based on pattern recognition from huge data sets to truly transform the way health care is delivered. Unstructured data can be text or voice (e.g., dictated notes that are translated to text) or patterns in radiologic images or lab results. One example of the application of NLP to population health is an Israeli firm that worked with Maccabi Health Service to mine 80,000 patient records and identify individuals at risk for colon cancer. At-risk individuals were then referred for colon screening, which resulted in positive findings of early, treatable colon cancer.
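In its simplest form, population-level text mining is a scan of free-text notes for risk-associated patterns, with matches referred for screening. The sketch below is a deliberately simplified stand-in: the actual Maccabi work described above used far richer NLP and predictive modeling, and the regex patterns here are assumptions for illustration.

```python
import re

# Illustrative patterns associated with colon-cancer risk (assumptions only).
RISK_PATTERNS = [
    re.compile(r"family history of colo(n|rectal) cancer", re.IGNORECASE),
    re.compile(r"iron[- ]deficiency anemia", re.IGNORECASE),
    re.compile(r"rectal bleeding", re.IGNORECASE),
]

def at_risk(note_text):
    """Return True if any risk pattern appears in the free-text note."""
    return any(p.search(note_text) for p in RISK_PATTERNS)

# Hypothetical notes keyed by patient ID.
notes = {
    "P1": "58 y/o male, family history of colon cancer, no current symptoms.",
    "P2": "Routine visit; seasonal allergies.",
}

referrals = [pid for pid, text in notes.items() if at_risk(text)]
print(referrals)  # → ['P1']
```

Even this toy version shows the shape of the idea: the signal is already in the record, and mining it at population scale turns documentation into a screening tool.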
Gretchen Mills is manager of market strategy for populations and payment solutions at 3M Health Information Systems.