Introducing an Interfacing Management Maturity Model

It’s clear that the needs around interfacing and integration are exploding. To meet those needs, Caristix is introducing an interfacing management maturity model. If you’re reading this, you know that there is a lot of implementation expertise available from analysts, developers, and consultants. But at the organization level, capabilities vary. Many organizations are seeking benchmarks to see where and how they can grow and adapt to meet their needs. That’s where our model can help.

This interfacing management maturity model is not about the engine or the standards. This model zeroes in on organizational capabilities. We look at an organization’s ability to use interfacing to meet its needs around data exchange, interfacing, and interoperability. The key question: how does a team scalably support its organization’s need to share or exchange data while controlling costs? As organizations mature, they aim to act quickly to meet strategically driven integration needs, support ongoing operations, and support initiatives such as Meaningful Use.

Below are descriptions of what teams do at each stage. Over the following weeks, we’ll be filling out this model with further points.

Getting Started with Interfacing and Integration

Before you hop on the interfacing management maturity curve, there is a learning curve to get through.

The learning curve with integration is steep. Many smaller vendor organizations that haven’t yet had to deal with interfacing or data exchange start here. CIOs, integration architects, and VPs of R&D make calls at this stage that will decide the future of their organizations or businesses: critical architecture decisions. The key activity at this stage is learning and training. And the key deliverable is fundamental: integration architecture.

So it pays to get educated. It pays to get training and bring on a consultant with healthcare integration expertise to help with the decision-making.
 

Manual Stage

Once you’re past the Learning Stage, you start to get your feet wet with building interfaces.

Building and coding the first few will seem intuitive. If the team is working with a modern interface engine, it should be.

At this stage, you might see little or no need to gather requirements or do any scoping. Some analysts reach out for sample messages from either a system vendor or the hospital team. They make do with just a dozen or so — one or two of each message type needed for the project.

Once the interface is built, the developer connects the systems. And surprise: the interface doesn’t work. Messages aren’t transmitted, or they populate unexpected fields. So the analyst and the developer spend time validating and fixing defects. It’s nearly impossible to predict end dates and timelines, and the team (or client) doesn’t have a firm grasp on the effort required to get to a production-grade interface.

Message Stage

Once you’re past the Manual Stage, you’ve learned about the power of scoping and requirements gathering.

At this stage, you’re building out interfaces and interfacing capabilities based on message analysis. To gather interface requirements, the team works at the message level, slicing and dicing sample messages to uncover exceptions to a spec or standard. Sample questions: are there Z-segments? Are there exceptionally long field lengths?
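
To make that message-level slicing concrete, here’s a minimal sketch in Python. It’s purely illustrative (not a Caristix tool): the sample message, segment names, and the 60-character length threshold are made-up assumptions. It flags custom Z-segments and overlong field values in a raw HL7 v2 message.

    # Minimal sketch: scan a raw HL7 v2 message for two common exceptions
    # to a spec -- custom Z-segments and unusually long field values.
    # Assumes pipe-delimited segments, one per line; the 60-character
    # threshold is an arbitrary example, not part of any standard.
    MAX_EXPECTED_FIELD_LENGTH = 60  # hypothetical threshold

    def scan_message(raw_message: str) -> dict:
        """Return the Z-segments and overlong fields found in one message."""
        findings = {"z_segments": set(), "long_fields": []}
        for segment in raw_message.strip().splitlines():
            if not segment:
                continue
            fields = segment.split("|")
            segment_id = fields[0]
            if segment_id.startswith("Z"):
                findings["z_segments"].add(segment_id)
            for position, value in enumerate(fields[1:], start=1):
                if len(value) > MAX_EXPECTED_FIELD_LENGTH:
                    findings["long_fields"].append((segment_id, position, len(value)))
        return findings

    # Made-up ADT message with a custom ZPI segment and an 80-character field
    sample = (
        "MSH|^~\\&|APP|FAC|APP2|FAC2|202401011200||ADT^A01|0001|P|2.5\r\n"
        "PID|1||123456^^^HOSP^MR||DOE^JOHN||19700101|M\r\n"
        "ZPI|1|" + "X" * 80
    )
    print(scan_message(sample))
    # {'z_segments': {'ZPI'}, 'long_fields': [('ZPI', 2, 80)]}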

Requirements gathering is based on getting to good enough – say, about 80% of the way.

Unfortunately, 80% isn’t good enough. Requirements gathering during the Message Stage follows an unexpected (and unwelcome) twist on the 80-20 rule, turning it into the 80-20-80 rule.

80-20-80 goes like this:
•    Analysts uncover 80% of interfacing requirements by slicing and dicing at the message level.
•    Developers uncover the remaining 20% during interface coding.
•    But that hidden 20% accounts for 80% of the actual coding work.

The project slows down because of rework and a need for extensive validation. Validating the interface takes an unpredictable number of versions and iterations, so the team can’t realistically predict effort or project duration. If time isn’t a factor and low-volume interfacing is acceptable, the team can stay in the Message Stage indefinitely. Likewise, if you’re maintaining stable interfaces and not building new ones or updating source and destination systems, remaining at the Message Stage may work for the organization.

The System Stage

After the Message Stage comes the System Stage. This is a conceptual leap.

For most teams, it’s a mental-model leap to go from messages and message validation to system analysis and working directly with specs or profiles. Many analysts and their managers are used to thinking in terms of message analysis.

But the ones who make the leap to profiles are seeing gains in productivity.

For instance, some profile builders (such as the one we built into Workgroup) enable analysts to run through a large volume of messages — in the 100,000 to 1,000,000 range. This enables analysts to capture requirements more completely and cover more of the use cases contained in the data. As the interface is developed, it needs much less rework than what you find in the Message Stage. As a result, testing and validation are smoother. The bottlenecks encountered in message-based validation disappear in the System Stage. Projects are more predictable, with fewer iterations and less project effort. Project managers can confidently hit target dates with allocated resources.
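
As a rough illustration of the difference this makes, the sketch below aggregates field usage across a large batch of messages instead of a dozen samples. It’s a hypothetical stand-in written for this post (not how the Workgroup profile builder works internally); it simply shows the kind of requirements data that falls out when analysis runs over the whole message set.

    # Hypothetical sketch: aggregate field usage across a large set of raw
    # HL7 v2 messages so requirements reflect the data that actually flows,
    # not a handful of samples. Same simplified pipe-splitting as above.
    from collections import defaultdict

    def build_usage_profile(messages):
        """Count how often each segment/field is populated and its max length."""
        profile = defaultdict(lambda: {"count": 0, "max_length": 0})
        for raw in messages:
            for segment in raw.strip().splitlines():
                if not segment:
                    continue
                fields = segment.split("|")
                segment_id = fields[0]
                for position, value in enumerate(fields[1:], start=1):
                    if value:  # only count populated fields
                        stats = profile[f"{segment_id}-{position}"]
                        stats["count"] += 1
                        stats["max_length"] = max(stats["max_length"], len(value))
        return dict(profile)

    # Stream 100,000+ messages from an export or engine log; the resulting
    # table shows which fields (and lengths) the interface must really handle.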

 

Your Feedback

We’ll be adding to the model over the coming weeks. We’ll address the process deliverables, metrics, and organizational needs and impact. Are there other topics you’d like to see? Let us know in the comments.

Download the HL7 Survival Guide
