Healthcare Integration Roundup – April 22, 2011

We’re starting a new feature on the Caristix blog: a quick weekly roundup of noteworthy news, articles, and comments on healthcare integration and healthcare IT.

This week we have interoperability in Stage 2 and 3, an interview on medical device integration, and thoughts on refills from EMRs.

Enjoy!

The cloud outage and its impact on EMRs via @EntegrationBlog
Yesterday, a data center belonging to cloud computing leader Amazon went down, taking dozens of web-based applications with it. Read about the 3 questions a medical practice should be asking if it’s in the market for an EMR.

Intertwined between the lines on cmio.net
Interoperability will be big in Stage 2 and Stage 3 requirements for Meaningful Use. But will the standards be ready?

Lab Interoperability Cooperative via @jhalamka
Announcing a new collaborative network for connecting hospital labs to public health agencies.

HIStalk interviews Carlos Nunez MD, Chief Medical Officer, CareFusion via @histalk
Read about the CareFusion vision for tying medical devices and IT to create actionable information at the point of care.

Not Fully Baked via @motorcycle_guy
The CDA Consolidation Project had three months to develop tools and produce an implementation guide. Read standards guru Keith Boone’s take on the project.

How an EMR can be shockingly inconvenient for prescription refills via @kevinmd
Thirteen (count ’em) steps to a prescription refill. A combination of poor GUI design and lack of integration?

Why Do HL7 Interfaces Take So Long to Write?

HL7 Interfacing Advice from the Indiana Health Information Exchange

The largest health information exchange (HIE) in the US is located in Indiana. It connects 80 hospitals and serves 10 million patients as well as 19,000 physicians. IHIE members participated in a recent meeting of the Central Indiana Beacon Community and posted their slide decks here. The presentation that resonated with us was Mapping Interfaces by Dr. Mike Barnes, Dan Vreeman, and Amanda Smiley (PDF).

Their killer quote: “Interface programmers are the only barrier between a successful interface implementation and your organization appearing on the front page of your local paper.”

That’s part of the reason HL7 interfaces take so long to write. The authors also say that it’s not just about the writing, the coding, or the configuration. It’s about understanding the interface: scoping, gap analysis, and sometimes, getting the buy-in to change source systems. Their deck included a useful 9-step overview of the HL7 interface development process. Here it is, along with a few comments from us:

1. You have to get examples (+/- specs).
Get as many examples as you can. Cover as many HL7 messaging and workflow scenarios as needed. And get those interface specs if you can. But keep in mind they might be out-of-date.

2. You have to study them (in detail).
Study everything in detail. Make sure you have a system to track your findings. Have a repeatable gap analysis process to cover the examples and logs you might get. Just because one system uses the PID segment as its primary source of patient identifiers doesn’t mean another system won’t rely on the PV1 segment. For a deeper dive, we’ve got a white paper on HL7 integration and gap analysis.
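To make that kind of gap analysis a bit more concrete, here’s a rough Python sketch (our own illustration, not IHIE tooling or a Caristix product) that scans a few sample messages and tallies which fields each feed actually populates, so mismatches like the identifier example above jump out:

    from collections import Counter

    def field_usage(messages):
        """Tally which fields are populated across sample HL7 v2.x messages
        (pipe-delimited, one segment per carriage return or newline)."""
        usage = Counter()
        for msg in messages:
            for segment in msg.replace("\r", "\n").split("\n"):
                if not segment.strip():
                    continue
                fields = segment.split("|")
                # Note: MSH numbering is off by one in this naive split,
                # since MSH-1 is the field separator itself.
                for i, value in enumerate(fields[1:], start=1):
                    if value.strip():
                        usage[f"{fields[0]}-{i}"] += 1
        return usage

    # Two hypothetical ADT feeds: SYS_A carries an identifier in PID-3,
    # SYS_B drops one into PID-2 instead.
    samples = [
        "MSH|^~\\&|SYS_A|HOSP|||20110418||ADT^A01|001|P|2.3\rPID|1||MRN001||DOE^JOHN",
        "MSH|^~\\&|SYS_B|HOSP|||20110418||ADT^A01|002|P|2.3\rPID|1|ALT001|||DOE^JANE",
    ]
    for field, count in sorted(field_usage(samples).items()):
        print(field, count)

Run against real log extracts from two systems, a tally like this makes field-usage gaps visible before you start mapping.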

3. You discover problems, or have questions.
Have a process for dealing with them. It’s not always easy because interfaces, HL7 messages, and system workflows are so disparate. What you learn with one provider’s EMR won’t port over to another provider’s system.

4. Request fixes or answers to questions.
It helps to document your problems and, later, the fixes that resolve them.

5. Time passes (days to weeks).
You bet it does. That’s why it helps to document it all.

6. You forgot the details of the interface (Rpt #2).
There are those pesky details again. Documentation (and the ability to share) can really help.

7. Fix problem or resolve questions.
This is ongoing. Just when you think you’re done with an interface after go-live, it’ll come back to haunt you once one of the sending systems is tweaked. You’ll need troubleshooting tools and processes to account for those changes.

8. Repeat steps 3-7 until interface is ready to write.

9. Write interface.
By now, writing the interface is a piece of cake. Especially if you’ve got solid integration technology that makes the actual writing easy.

The key takeaway… You can have the best interfacing engine in the world, the HIE infrastructure with the most bells and whistles, and the coolest SOA and ESBs. But a good interface starts with an analyst’s ability to dig into the details and document her findings.

Read the rest of the Mapping Interfaces PDF deck on the Indiana Health Information Exchange site.

Protecting Patient Data in HL7 Logs

Information Week ran an article this week on protecting patient data. The article wasn’t on one of the usual suspects — a HIPAA violation or a breach in a production system. Instead, this was notable because we’re finally seeing one of the hidden dangers in healthcare IT coming to light: unsecured patient data sitting in development and test systems. Our industry needs to start addressing this issue.

Information Week cited a survey finding that 51% of organizations don’t protect patient data used in software development and testing. Yet the per-victim cost of a data breach in healthcare is $294, 44% higher than in other industries. Read more on the Information Week site.

It’s a given: vendors and providers have to use real-world data to test applications, systems, HL7 interfaces, and connectivity. There’s no getting around that. Without test data — accurate, realistic test data — you don’t want to go ahead with the product launch, system go-live, or integration engine migration. There’d be too much at stake. Without a reasonable volume of real-world test data, you end up testing for too-good-to-be-true workflows and patients. The result is way too many bugs, enhancement requests, delayed projects, and (rightfully) irate clinicians down the road.

So oftentimes, the solution for robust testing is to copy production data into the test system. There are times when providers end up sharing production data — for instance, HL7 message logs — with vendors. Under certain circumstances, these approaches can be fraught with security issues. So vendors and providers need to ensure they’re working within regulatory frameworks when they use production data for development and testing.

But instead of clamping down and setting up a governance structure that says, “Never, ever extract production data,” how about looking for ways to do it safely?

One way is to de-identify production data before porting it over to the test system: remove the information that can identify patients while leaving real-world workflows intact. We’ve written about de-identifying HL7 data here, and provided a few de-identification definitions here.
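To show the general idea, here’s a bare-bones sketch (our own illustration, not our upcoming tool, and it covers only a handful of fields) that blanks a few identifying PID fields while leaving the event type, timestamps, and segment structure (the workflow itself) untouched:

    # Common HL7 v2 PID field positions for a few identifiers:
    # PID-5 patient name, PID-11 address, PID-13 home phone, PID-19 SSN.
    MASKED_PID_FIELDS = {5: "REMOVED^REMOVED", 11: "", 13: "", 19: ""}

    def deidentify(message):
        """Blank a few identifying PID fields; leave everything else alone
        so the message still reflects the original workflow."""
        out = []
        for segment in message.split("\r"):
            fields = segment.split("|")
            if fields and fields[0] == "PID":
                for index, replacement in MASKED_PID_FIELDS.items():
                    if index < len(fields):
                        fields[index] = replacement
            out.append("|".join(fields))
        return "\r".join(out)

    msg = ("MSH|^~\\&|ADT|HOSP|||20110422||ADT^A01|42|P|2.3\r"
           "PID|1||MRN001||DOE^JOHN||19600101|M|||1 MAIN ST^^BOSTON||555-1234"
           "|||||123-45-6789")
    print(deidentify(msg).replace("\r", "\n"))

A real pass would need to cover all 18 HIPAA identifier categories and, ideally, substitute plausible values rather than blanks so the data stays realistic.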

Because of this issue, we’re working on an HL7 data de-identification tool — a data protection tool, if you will. If you’d like to learn when the software beta opens, please sign up here (don’t forget to check the beta notification box).

Comments

Any comments or insights on protecting patient data in test or development systems? We’d love to hear from you in the comments below, on Twitter, or by email (it’s at the top of this post).

Guest Post on Healthcare IT Guy

Jean-Luc Morin, VP R&D at Caristix, has a guest post on Shahid Shah’s blog, Healthcare IT Guy. Shahid’s top-ranked blog covers healthcare IT, EMR, EHR, PHR, medical content, and document management.

Jean-Luc’s article covers 6 questions hospital CIOs and IT directors should be asking vendors about interface documentation.

Here’s an excerpt:

At first glance, documentation doesn’t seem like a CIO-level concern. In a typical implementation cycle, the team gets a few specs from a vendor. Someone signs off on an interface configuration document. Then you implement and go live. The role of documentation in the larger scheme of things? Seems like a project management checkbox at best.

But it isn’t.

Your vendor’s interface documentation practices can have a major impact on the speed and success of implementation and on the maintainability of your new system after go-live.

Go read the rest here…

Will HL7 V3 Adoption Take Off in 2011? 5 Points and 1 Caveat

A few weeks ago, I was working on an HL7 v3 project with an outside partner and the discussion turned to market adoption. We came to the conclusion that it’s not exactly taking off — at least, not as quickly as you might expect. Apart from meaningful-use initiatives around CDA in the US and the big push by Canada Health Infoway, I don’t really see much traction in North America. I’m going to come right out and say this: from a vendor perspective, the incentives to embrace the new standard are just not there in 2011.

My thoughts were kicked off by a post from HL7 guru René Spronk. René wrote that the focus to date has been on modeling and that implementation-related material is missing. René also listed 5 improvements that would help implementers adopt HL7 v3. He raises some great points. Even though the post is almost a year old, the list still holds… In a large volunteer organization like HL7, there are no quick changes.

So what exactly is limiting HL7 v3 adoption? To build on René’s list, here are a few more points to consider.

1. Cultural Hurdle: HL7 V3 Implementation Needs a Change in Mindset

HL7 v3 is a completely different standard than HL7 v2.x. Because HL7 v3 isn’t backward-compatible, when you migrate, you need to change the way you look at interfaces. In the HL7 v2.x world, many analysts treat HL7 messages as strings of data elements. They see their key task as follows: look for the data, find it in the “right string”, and map the string to the right data element in the new system under deployment. What about the semantic gaps, you may ask? The answer in this world: we’ll worry about them during system validation and patch in a workaround. What’s more, in this world, message schemas are manually specified in Microsoft Word documents much of the time.

So from a provider perspective, HL7 v3 is going to demand a change in mindset. The message schema is programmatically specified and modified (if needed). If you’re a big Word fan, you’ll probably have a hard time finding your schema in a familiar format. Most likely, you’ll be encountering a new set of tools for data element mapping. This is a big shift from business-as-usual in the HL7 v2.x world.

2. Steep HL7 V3 Learning Curve

Interface analysts in the HL7 v2.x world are a mixed bunch. Some come from the clinical side of the business, others from the development side. A surprising number are non-technical (which is terrific — we need a range of skill sets in our industry). This works fine for now, since the HL7 v2.x format is quite straightforward. At a minimum, as long as you’re able to count pipes and basically grasp the data being exchanged, you’re good to go. You can start quickly with minimal training. With some help from basic open-source or home-grown tools, you don’t really need deep technical knowledge.
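If “count pipes” sounds like an exaggeration, it isn’t. Here’s a deliberately naive sketch (an illustration, not a recommendation) that pulls a patient name out of a raw HL7 v2.x message with nothing but string splits:

    raw = ("MSH|^~\\&|LAB|HOSP|||20110422||ADT^A01|7|P|2.3\r"
           "PID|1||MRN001||DOE^JOHN^Q")

    for segment in raw.split("\r"):
        fields = segment.split("|")
        if fields[0] == "PID":
            # Count your way to field 5 (patient name); components are ^-separated.
            family, given = fields[5].split("^")[:2]
            print(given, family)   # -> JOHN DOE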

But migrate to HL7 v3 and the game changes. Again, you need a new mindset. The technical skills are different and the way you build or configure interfaces is different as well. Just take a look at this HL7 v3 primer. It’s perfect if you want to understand where the standard comes from. But it’s far too complex if all you need is to learn how to configure an interface. So there’s a learning curve, and it’s not just for analysts. Implementers need to create new material to support analyst learning. And that won’t happen overnight, especially if the market is slow to materialize.

3. No Clear Benefits for Most Healthcare Systems

Healthcare systems have been using v2.x interfaces for decades. Sure, the standard and related processes are issue-laden, and the typical interface deployment process is bumpy at best. HL7 v2.x is far from perfect. It’s probably more expensive and labor-intensive to run an HL7 v2.x environment than a future-state HL7 v3 environment would be.

But most organizations have learned to live with these faults. For most organizations, HL7 v2.x works. It feels safe. It’s become standard business practice.

So HL7 v3 comes along. It’s unknown. And the unknown feels risky, right? Very few healthcare systems are going to jump in with both feet as long as the ROI remains elusive. From what I’m seeing in the field, organizations are not rushing to spend money to fix what amounts to a non-broken process.

The Caveat: CDA and Document Exchange

Despite what I said about the unknown, there is a pretty encouraging caveat to all this. And that is the push for CDA templates and the need for document/data exchange. This is new, and driven by meaningful use in the US, so organizations have to act. HL7 v3 seems to be the answer here, and I predict greater adoption related to data exchange.

4. HL7 V2.x Compatible Interfaces Still Needed

Obviously, supporting HL7 v3 doesn’t mean you’re going to be done with HL7 v2.x. HL7 v2.x interfaces are going to stick around, well into any foreseeable HL7 v3 future. Some will start by integrating v3 components, but vendor and hospital infrastructures will remain HL7 v2.x-centric over the course of migration. Even if the upgrade benefits were clear, the process could take over 20 years. Meanwhile, related HIS technology will keep rolling along and evolving, and vendors will get stuck with even more to support.

5. Missing Tools

As we speak, the HL7 v3 toolset is thin on the ground, compared to the rich pickings in the v2.x world. We need a more robust toolset in order to get to the productivity levels and skill sets that HIT vendors and healthcare providers expect. Tools will emerge as adoption grows. But for now, the missing toolset is yet another risk for early adopters to manage.

What’s Next for HL7 V3?

Now that I’ve said my piece, don’t get me wrong.

HL7 v3 is an elegant step forward in healthcare system interoperability. But the design quality of the standard isn’t a compelling migration driver. We’ve been using HL7 v2.x for so long that we’re used to its weaknesses. For us to shift, we’re going to need stronger, clearer market drivers. Change — even good change — is risky within organizations. We shouldn’t underestimate our own reactions to risk.

Comments

Readers, what are your thoughts? Does HL7 v3 adoption touch a nerve? Let’s hear it in the comments.

Happy Holidays from Caristix

Wishing you a happy, healthy, and joyous holiday season and New Year!

Caristix 2010 Holiday Card

De-identifying Patient Data, Part 2

Definitions

We’re continuing with our series of posts on patient data de-identification. This week, we’re reviewing a set of definitions of common terms. This list will become the glossary for upcoming posts on HL7 de-identification and protecting sensitive healthcare data. We’re looking for feedback on this list. Feel free to add your nuances and/or related terms in the comments…

De-Identification or Anonymization

An umbrella term for removing or masking protected information. In a more specific sense, the de-identification process removes identifiers from a data set so that it’s no longer possible to relate information back to individuals. In the context of healthcare information, de-identification occurs when all identifiers (IDs, names, addresses, phone numbers, etc – see our previous HL7 de-identification post for a complete list) are removed from the information set. This way, patient identity is protected while most of the data remains available for sharing with other people/organizations, statistical analysis, or related uses.

HL7 data anonymization

Pseudonymization

A subset of anonymization. This process replaces identifying data elements with new identifiers, so that the data now describes a completely new, artificial subject. After the substitution, it is no longer possible to associate the initial subject with the data set. In the context of healthcare information, we can “pseudonymize” patient information by replacing patient-identifying data with completely unrelated data. The result is a new patient profile. The data continues to look complete and the data semantics (the meaning of the data) are preserved, while patient information remains protected.

HL7 data pseudonymization

Re-Identification

This process restores the initial information to a pseudonymized data set. To re-identify data, you would need to use a series of reverse mapping structures constructed as the data is pseudonymized. There are a few use cases for re-identification. One example would be to send the pseudonymized data to an external system for processing. Once the processed information is returned, it would be re-identified and pushed to the right patient file.
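As a sketch of what those reverse mapping structures could look like (our own illustration, with made-up names), the component that generates pseudonyms can record both the forward and reverse mappings, so data coming back from the external system can be matched to the right patient:

    import uuid

    class Pseudonymizer:
        """Replace patient identifiers with generated stand-ins and keep a
        reverse map so the data can be re-identified later if needed."""

        def __init__(self):
            self._forward = {}   # real identifier -> pseudonym
            self._reverse = {}   # pseudonym -> real identifier

        def pseudonymize(self, identifier):
            if identifier not in self._forward:
                pseudo = uuid.uuid4().hex[:8].upper()
                self._forward[identifier] = pseudo
                self._reverse[pseudo] = identifier
            return self._forward[identifier]

        def reidentify(self, pseudo):
            return self._reverse[pseudo]

    p = Pseudonymizer()
    fake = p.pseudonymize("MRN001")          # stable stand-in for this patient
    assert p.pseudonymize("MRN001") == fake  # same patient, same pseudonym
    assert p.reidentify(fake) == "MRN001"    # reverse map restores the original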

HL7 data security and privacy

Identifiers

Identifiers are data elements that can directly identify individuals. Examples of identifiers include but are not limited to name, email address, telephone number, home address, social security number, medical card number (see previous post for a complete list of HIPAA identifiers). In some cases, more than one identifying variable is needed to identify an individual uniquely. For example, the name “John Smith” appears multiple times in the White Pages, so you need to combine the name with a telephone number to identify the right John Smith.

Quasi-identifiers

These are data elements that do not directly identify an individual, but that provide enough information to significantly narrow the search for a specific individual. Some quasi-identifiers have been studied extensively. These include gender, date of birth, and zip/postal code. Quasi-identifiers are highly dependent on the type of data set. For example, gender will not be a meaningful quasi-identifier if all of the individuals are female. Another interesting thing about quasi-identifiers: they are categorical in nature, with a finite set of discrete values. In other words, gender, birth dates over a period of less than 150 years, and address are finite. This makes searches simple. Individuals are relatively easy to pinpoint using quasi-identifiers.

Non-identifiers

These data elements may contain personal information on individuals, but they aren’t helpful for reconstructing the initial information. For example, an indicator on whether an individual has pollen allergies would most likely be a non-identifying data element. The incidence of pollen allergy is so high in the population that it would not be a good discriminator among individuals. Again, non-identifier data elements are dependent on data sets. In a different context, this data element might enable you to identify individuals.

What other data de-identification terms should we define?

De-identifying Patient Data, Part 1

In healthcare IT, no matter where you work, you’re faced with protecting patient data. Many countries have regulatory frameworks to address patient privacy and the use of health information. In the US, HIPAA regulates the use of PHI (protected health information). In Canada, the law is called PIPEDA (Personal Information Protection and Electronic Documents Act). PIPEDA regulates the use of consumer data in a number of industries, not just healthcare. Plus a few Canadian provinces have their own privacy legislation in place.

Regardless, data breaches cost healthcare organizations a staggering $6 billion annually, in the US alone.

So how do you protect patient data? Let’s home in on one data protection technique: de-identification. Data de-identification is essentially a way to mask or replace personally identifiable information (PII) and protected health information (PHI). On occasion, HL7 analysts need to share or redistribute HL7 production data. One use case is the need to port realistic data to a test system or staging area.

So what do you need to know in order to de-identify HL7 log data?

  1. To begin with, you’ll need to list the sensitive data identifiers you’re dealing with. The Department of Health and Human Services (HHS) provides a HIPAA Privacy Rule booklet (PDF) that highlights the 18 HIPAA identifiers. Each identifier is a category of data you need to protect. The list goes way beyond names, addresses, social security numbers, and health plan numbers. You’ll need to pay attention to device identifiers and even IP addresses. Ensure that your de-identification technique covers all 18 identifiers.
  2. To be safe, use techniques that don’t permit re-identification.
  3. Make sure you map identifiers to HL7 fields and segments. This will vary from one system to the next. You’ll want to have the ability to trace which message components will be impacted by changes before you hit that OK button (or the equivalent) on your de-identification tool. (A rough mapping sketch follows this list.)
  4. Ensure the data remains useful. One of the issues with traditional randomization techniques is that scrambled data may not be plausible. Overall meaning in the message flow should be preserved. You don’t want to be able to identify patient John Smith, but you do want to make sure he isn’t discharged before he’s admitted — so the structure and sequence of the patient’s record should remain intact.
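Picking up on point 3, here’s a rough and deliberately partial sketch (our own illustration, with hypothetical category labels) of an identifier-to-field map, using common HL7 v2.x field positions. The exact locations vary from system to system, which is why the map needs to be reviewed for every interface.

    # A partial map from a few of the 18 HIPAA identifier categories to the
    # HL7 v2.x fields where they commonly appear. Positions vary by system.
    HIPAA_TO_HL7 = {
        "Name":                   ["PID-5", "NK1-2"],
        "Address":                ["PID-11", "NK1-4"],
        "Dates (birth, admit)":   ["PID-7", "PV1-44"],
        "Telephone number":       ["PID-13", "PID-14"],
        "Social Security number": ["PID-19"],
        "Medical record number":  ["PID-3"],
    }

    def fields_to_scrub(categories):
        """Return the HL7 fields to de-identify for the chosen identifier categories."""
        fields = set()
        for category in categories:
            fields.update(HIPAA_TO_HL7.get(category, []))
        return sorted(fields)

    print(fields_to_scrub(["Name", "Dates (birth, admit)"]))
    # -> ['NK1-2', 'PID-5', 'PID-7', 'PV1-44']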

Further Reading on Protecting Patient Data

Your Comments

We’ve just touched the tip of the de-identification iceberg here. Are there other issues we should be keeping an eye out for? Let everyone know in the comments.

What if HL7 interface specifications were easy to document?

Some organizations call them HL7 conformance profiles, others call them HL7 interface specifications. They’re all talking about a description of the data format used for exchange between systems within a care facility.

The terminology might not be consistent, but the challenge is the same: documentation. How do you document conformance profiles so that the description stays up to date and trustworthy?

3 HL7 Documentation Issues

1. Creation is Time Consuming

Documenting an HL7 specification requires a fair amount of effort. Applications exchanging data through the HL7 standard usually exchange several data elements. Each data element has its own definition and plays a role within your organization’s workflows. For instance, in HL7 v2.6, the PID (patient ID) segment contains 39 different fields; the PV1 (patient visit) segment contains 52. Each field contains several potential components and sub-components. Obviously, an interface analyst might not need to document them all. But analysts still need to invest hours, even days, of work digging in there. Before coding an interface, they need a thorough understanding of how each data element interacts within each data flow, and they need to document it all.

2. Multiple Customers, Multiple Needs

Another fact that makes HL7 specification documentation challenging: different stakeholders might need the documentation for different purposes. For instance, the interface group would need it for daily internal work. A third-party vendor might need it to configure a new system installation. The vendor would probably need partial documentation limited to just a few data elements. In turn, you might want to limit sharing to just a few specific details per element. You could be looking at maintaining several documents containing slightly different information. That brings us to the next point…

3. HL7 Documentation Is Hard to Maintain

Once you create the documentation, the cold hard fact is… you need to keep it up to date if it’s going to retain any value. Workflows change over time, and the way your systems collaborate changes as well. If anything, outdated and misleading documentation is worse than no documentation at all. But manually updating a pile of specifications with multiple flavors is a lot of work.

HL7 Documentation Made Easier

Now, what can we do to avoid all this work and improve the overall quality of HL7 interface specifications? One solution is to centralize all conformance profile documentation in a single place – say, Excel. You then generate all your documents from a single source. That way, you update once, and pull reports as you need them. Excel’s filtering capabilities can be pretty useful here.
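As a small illustration of the single-source idea (the file names and columns here are hypothetical), you could keep one master list of data elements, tagged by audience, and generate each stakeholder’s flavor from it. CSV is shown for simplicity, and it round-trips to Excel easily:

    import csv

    # Hypothetical single-source profile: one row per data element, with an
    # "audience" column saying who needs to see it (internal, vendor, or both).
    MASTER = "conformance_profile.csv"   # columns: field, description, audience

    def generate_doc(audience, outfile):
        """Pull a stakeholder-specific spec out of the single master file."""
        with open(MASTER, newline="") as src, open(outfile, "w", newline="") as dst:
            reader = csv.DictReader(src)
            writer = csv.DictWriter(dst, fieldnames=["field", "description"])
            writer.writeheader()
            for row in reader:
                if row["audience"] in (audience, "both"):
                    writer.writerow({"field": row["field"],
                                     "description": row["description"]})

    # One update to conformance_profile.csv, then regenerate every flavor:
    generate_doc("vendor", "vendor_spec.csv")
    generate_doc("internal", "internal_spec.csv")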

Obviously, Excel wouldn’t give you sophisticated formatting or document versioning. But this could be a first step towards better management of your organization’s valuable knowledge assets. The next steps would involve leveraging this new asset to improve processes and become more productive… but that’s another (big) discussion for another day 😉

15 Soundbites from the Canada-US eHealth Summit

I was in Philadelphia this week for the 2nd Annual Canada-US eHealth Summit (PDF). Now, this wasn’t a big conference like HIMSS. But it was a great opportunity to break open our silos and share knowledge and stories across the border. Plus, we got to hear from a handful of respected HIT leaders, including John Glaser, CEO of Siemens HIT, and David Levine, head of a major Canadian regional health agency.

Here are 15 highlights from the presentations.

  1. Ripped from the headlines: 1 out of 7 Medicare patients who are hospitalized face a major adverse event (involving risk of death). Another 1 out of 7 face a moderate adverse drug event (NY Times, 11-16-2010).
  2. Government is going for the “triple aim”: better health for the population, lower costs through improvement and better use of technologies, and better care for individuals.
  3. Contemporary quality improvement = rapid development cycle and rapid implementation.
  4. There is a tremendous need for data collection (PPACA: Patient Protection and Affordable Care Act).
  5. Hospital-acquired infections are still a major concern.
  6. Canadian healthcare market = $200 billion. About 2% goes to IT investments; stakeholders are looking to get that up to 4%.
  7. IT benefits in terms of savings and quality are now a given (little skepticism left among stakeholders). We’re now in implementation mode.
  8. For a status update of where HIT stands in Canada, check out the Canadian Medical Association website.
  9. The major driver for transforming healthcare is the payment model.
  10. “Quality will now determine your payment.” For this to happen, we need analytics capabilities.
  11. Prediction: decision support tools will grow significantly.
  12. Mobility could be helpful (iPhones, iPads, the new BlackBerry PlayBook…) but does not really work for clinicians: care happens in a physical space.
  13. Telemedicine in Ontario is advanced (the network they have is pretty impressive).
  14. Savings are extremely significant (travel and hospital visits).
  15. Wireless solutions bring lots of challenges: connections aren’t reliable enough, and most hospitals in Pennsylvania are going back to wired solutions.

The organizers mentioned that the presentations will be posted on the Pennsylvania eHealth Initiative website.