As the CTO/CIO at a company that specializes in providing our clients with clinical intelligence, I am surprised at how often I come across oversimplifications of the problems of interoperability.
I’m often asked questions like: “Doesn’t interoperability simply mean physicians and other providers will be able to look up all of a patient’s records across the longitudinal span and pull them into their EMR?”
Well, yes, kind of. But there are obstacles.
Take a patient, Kyle, who presents with terrible pain secondary to his prostate cancer, and who has a multitude of medical records spread around the community as well as several visits to specialist cancer centers. His is a complex case, and as such a simple filter through a list of his records probably isn’t going to cut it. Let’s think about why.
First, duplication is a major issue. It is very common to see reports that are copies of other reports, some of which may be embedded inside other reports. When no source document can be found, it can be very difficult to determine which of these reports to use and which to discard. Is the report preliminary or final? Was it retracted? These are difficult questions that can only be answered by opening the report and reading the content.
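Even the mechanical part of this problem, spotting exact copies and reports embedded inside other reports, takes some care. A minimal sketch (assuming plain-text reports; real systems would also compare metadata such as accession numbers and report status) might look like this:

```python
def normalize(text: str) -> str:
    # Collapse whitespace and case so trivial formatting differences
    # don't mask an exact duplicate.
    return " ".join(text.lower().split())

def dedup_reports(reports: list[str]) -> list[str]:
    """Drop exact duplicates and reports wholly embedded in another report."""
    normalized = [normalize(r) for r in reports]
    keep = []
    for i, r in enumerate(normalized):
        duplicate = False
        for j, other in enumerate(normalized):
            if i == j:
                continue
            if r == other and j < i:
                # An identical copy appears earlier in the list.
                duplicate = True
            elif r != other and r in other:
                # This report is embedded inside a longer report.
                duplicate = True
        if not duplicate:
            keep.append(reports[i])
    return keep
```

Note that this only catches verbatim copies; the harder judgment calls (preliminary versus final, retracted versus current) still require reading the content.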
Think of the sheer volume of data that we are asking physicians to review, much of which is not relevant. Can you imagine the amount of time that will be spent inside a prior-record screen trying to work out what to include and what to exclude? If we’re looking longitudinally over the life of a complex patient, it really isn’t hard to imagine that they’ve had hundreds of records consisting of encounters, procedures, labs, genetic testing, and so on. Even with the best filtering in the world, any such system is going to be extremely difficult to use.
We must also remember that there is a legal aspect to the selection of records. Hospitals have been sued for not taking the time to review the entire record set, and receiving records they have not explicitly requested puts them in a difficult position. It’s at times like these that going through some kind of clearinghouse can be useful, as it gives you a way to control the data that you will see.
This situation will be exacerbated in the world of medical imaging, where data sets are large and storage, while cheap, is often in short supply. It’s better in these situations to have a formally defined protocol for the types of images that the institution is willing to receive, rather than to blindly accept every single study, relevant or not, and then have to put it through a radiology workflow.
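Such a protocol can be as simple as a declared whitelist applied at intake. The modality/body-part pairs and field names below are purely illustrative, not drawn from any real DICOM routing product:

```python
# Hypothetical acceptance policy: for Kyle's case an institution might
# declare that only prostate MR and pelvic CT studies are routed in.
ACCEPTED = {
    ("MR", "PROSTATE"),
    ("CT", "PELVIS"),
}

def accept_study(modality: str, body_part: str) -> bool:
    """Apply the institution's declared protocol to an incoming study."""
    return (modality.upper(), body_part.upper()) in ACCEPTED
```

The point is not the two-line check itself but that the policy is explicit and reviewable, rather than a blanket "accept everything and sort it out in the reading room."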
My belief is that the answer here is clinical summarization, a topic that can be broken into two separate categories.
“Extractive summaries” that pull sections of data from within records and present a kind of consolidated roadmap that improves the reviewer’s experience. They do not alter the content of the record in any way, but rather seek to draw attention to the more important parts of the record and reduce cognitive overload.
“Synthetic summaries” that take the data within the record and attempt to boil it down into completely new content through techniques such as natural language generation. These are still in their infancy, but show great promise. They must be used with care in our litigious society.
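The extractive approach can be illustrated with simple word-frequency sentence scoring. This is a toy sketch; production clinical summarizers layer on section detection, negation handling, and ontology-driven term weighting:

```python
import re
from collections import Counter

def extractive_summary(text: str, k: int = 2) -> str:
    """Select the k sentences whose words are most frequent overall.

    The record text is left unaltered: we only choose which of its
    sentences to surface, preserving their original order.
    """
    sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]
    freq = Counter(re.findall(r"[a-z']+", text.lower()))

    def score(sentence: str) -> int:
        return sum(freq[w] for w in re.findall(r"[a-z']+", sentence.lower()))

    top = sorted(sentences, key=score, reverse=True)[:k]
    return " ".join(s for s in sentences if s in top)
```

Because nothing is rewritten, an extractive summary sidesteps much of the medico-legal risk that makes synthetic summaries delicate: every sentence shown is verbatim from the source record.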
Whichever technique is used, it is clear that until this form of summarization is in place, interoperability can only increase the amount of work that the intake office must perform to onboard a new patient.
Long live data! Let’s not drown in it though.