
EHR Interoperability Again and Again


It's not political or market selfishness that impedes health data interoperability; it's the primitive nature of the technology itself.

The SearchHealthIT headline reads: "Health data interoperability woes could be solved fast if we wanted." Here's an excerpt:

" Health data interoperability problems that plague the U.S. healthcare system are technically simple to fix, said mHealth Summit speaker Anna McCollister-Slipp, a diabetic patient using multiple medical devices and taking many prescriptions to regulate her health. Tired of non-interoperable device data slowing her down, she became a health IT entrepreneur and co-founded Galileo Analytics ... Political forces in Washington and in the marketplace that protect the non-interoperability of data are holding back the free flow of health data between systems, McCollister-Slipp said."

I'm sorry to disappoint, but it's not political or market selfishness that impedes interoperability; it's the primitive technology itself. This is actually a good thing, because if it were driven by political or anti-competitive considerations, there would be no hope at all.

To understand the issue one must look inside the devices and their computer chips, in this case (apparently) a glucose meter. The computer's memory is arranged in groups of 8 bits (binary digits) called bytes. Each byte can contain one of 256 different patterns at a time, each representing a number from 0 to 255. Each pattern can also be assigned a character equivalent (97 = 'a'). This is the American Standard Code for Information Interchange (ASCII), a standard first published in 1963. So, as you can see, in one sense interoperability was worked out long before most of today's programmers were born.
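As a minimal sketch (mine, not the article's) of that pattern-to-character mapping, in Java:

    public class AsciiDemo {
        public static void main(String[] args) {
            int code = 97;               // one byte pattern, the number 97
            char letter = (char) code;   // the same pattern read as a character
            System.out.println(code + " -> " + letter);   // prints: 97 -> a
        }
    }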

Two hundred and fifty-six codes aren't enough to represent all of the characters used in foreign languages, so the Unicode standard was born. It uses a 16-bit code unit (two 8-bit bytes) for the 65,536 characters that were assigned codes first, and a second 16-bit unit when needed for the characters beyond those. Read as numbers, small bit patterns can't represent very large values: 32 bits can only represent about 4 billion distinct values, but 64 bits can accommodate 2^64 (over 18 quintillion, or 1.8×10^19).
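Those ceilings can be seen directly; this small Java sketch (my illustration, not from the article) prints the largest value a 32-bit and a 64-bit pattern can hold when read as unsigned numbers:

    public class RangeDemo {
        public static void main(String[] args) {
            // All 32 bits set, read as an unsigned number: 4,294,967,295 (about 4 billion)
            System.out.println(Integer.toUnsignedLong(-1));
            // All 64 bits set, read as an unsigned number: 18,446,744,073,709,551,615 (2^64 - 1)
            System.out.println(Long.toUnsignedString(-1L));
        }
    }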

Programming languages give developers lots of options for storing numbers of different magnitudes in different numbers of bytes, but no options for attaching a meaning or definition to a number as it travels through programs and then out to the Internet. So, your glucose meter might register 450 and my peak-flow meter might also register 450. You know what yours means and I know what mine means, but the 450s don't. If they both arrive at our doctor's office they are worse than useless unless accompanied by metadata: what test, what units of measure, which patient, the date and time of the test, and so on. This is a lot for a programmer to deal with. Since it's not possible to just send the data, one must develop an entire infrastructure to support sending data, and that still leaves unanswered questions: send to whom, for what purpose, and in what format?
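To make the problem concrete, here is a small Java sketch (mine; the readings and units are assumed for illustration). Both values reduce to the same bare number the moment they leave the program:

    public class BareNumbers {
        public static void main(String[] args) {
            int glucose = 450;    // mg/dL, but nothing in the value says so
            int peakFlow = 450;   // L/min, but nothing in the value says so either
            send(glucose);
            send(peakFlow);
        }
        static void send(int value) {
            // The receiver sees only "450" -- which test, which units, which patient, when?
            System.out.println(value);
        }
    }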

Let's imagine a different processor chip that stored information as Unicode characters representing the data, and a different programming language that understood this.

Today's programmer writes something like "int glucose = 450;" which declares an integer variable called glucose and stores the value 450.

In our new language the programmer writes "item glucose = 450;" but item declares a variable of type "information item." When the value is set, the computer consults a dictionary of type definitions and retrieves the one for glucose, which says that the item is also to include the device ID, the date and time, the units of measure, and the type name (glucose) or a reference to the source of the definition. What gets stored is: <ii devid="436a6" tm="20140117T1130-08" unit="mg/dl">450</ii>.
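No such chip or language exists today, but the idea can be approximated in an ordinary language. The Java sketch below is my illustration only: the class name, the type attribute, and the hard-coded dictionary are assumptions, not part of any existing standard. It wraps the value with the qualifiers described above and serializes it into roughly the stored form shown:

    import java.util.Map;

    // A rough stand-in for the "information item" type: the value plus its metadata.
    public class InformationItem {
        // Hypothetical dictionary of type definitions: type name -> unit of measure
        private static final Map<String, String> DICTIONARY =
            Map.of("glucose", "mg/dl", "peakFlow", "l/min");

        private final String typeName;
        private final String deviceId;
        private final String timestamp;
        private final String unit;
        private final int value;

        public InformationItem(String typeName, String deviceId, String timestamp, int value) {
            this.typeName = typeName;
            this.deviceId = deviceId;
            this.timestamp = timestamp;
            this.unit = DICTIONARY.get(typeName);   // consult the dictionary of type definitions
            this.value = value;
        }

        // Produce the stored form described in the article
        public String toStoredForm() {
            return String.format("<ii type=\"%s\" devid=\"%s\" tm=\"%s\" unit=\"%s\">%d</ii>",
                    typeName, deviceId, timestamp, unit, value);
        }

        public static void main(String[] args) {
            InformationItem glucose =
                new InformationItem("glucose", "436a6", "20140117T1130-08", 450);
            System.out.println(glucose.toStoredForm());
            // <ii type="glucose" devid="436a6" tm="20140117T1130-08" unit="mg/dl">450</ii>
        }
    }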

In the world of this computer and language, all data have this form, although some items have more qualifiers and some have internal complexity. Blood pressure, for example, may have separate values for systolic, diastolic, and mean pressure and an indication of the method (cuff or machine).
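A composite reading such as blood pressure could be sketched the same way; the record below is my assumed illustration of one possible shape, not an existing standard:

    // A sketch of an item with internal structure: several values plus a method qualifier.
    public record BloodPressureItem(String deviceId, String timestamp,
                                    int systolic, int diastolic, int mean, String method) {
        public String toStoredForm() {
            return String.format(
                "<ii type=\"bloodPressure\" devid=\"%s\" tm=\"%s\" unit=\"mmHg\" method=\"%s\">"
                + "<sys>%d</sys><dia>%d</dia><mean>%d</mean></ii>",
                deviceId, timestamp, method, systolic, diastolic, mean);
        }
    }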

Since the device is registered to you, when the data arrive at your doctor's office, both your identity and the meaning of the 450 can be determined unambiguously.

The Web Ontology Language (OWL), a family of knowledge representation languages, is a standard developed by the World Wide Web Consortium (W3C) that exploits some of these ideas but in a slightly different way.

Given the right technology, and with little extra effort, the interoperability bogeyman would evaporate. Instead of numbers (data) we would have information. When we sent it somewhere it would mean something, which is, after all, the point of interoperability.

 
