In the last two chapters, we covered some of the requirements-related artifacts you need. Now it’s time for testing, which you conduct at different phases in the interface lifecycle: during configuration and development; during the formal validation phase; and during maintenance.
“Why test?” you ask. When you start to develop and iterate on your interface, you run tests to avoid introducing new problems – you check and test your code to make sure you’re not injecting errors. This is true both during interface development or configuration and during maintenance. This testing helps you determine whether the interface makes sense and meets your requirements.
Once you’re satisfied with the interface, you move to validation testing. This is when you determine whether the interface will work with and meet the requirements of your clinical workflow. Specifically, you test performance, extreme data cases, and how well the interface handles large message volumes. By figuring this out before go-live, you save a lot of implementation headaches and reduce the time clinicians must spend helping you validate the interface after go-live.
What to Look for in a Test Tool
So that’s when and why you should test. But how do you handle this efficiently? The key is to automate your tests. While you need to spend time during the development/configuration phase setting up the tests, during the validation phase, you can take advantage of automation and save a lot of time. In fact, some interface engines include built-in test tools. Regardless of the source of your test software, make sure you can do the following:
- Connect to web services or a database – for example, calling a web service, then checking the database after a message is sent
- Validate inbound and outbound messages
- Validate acknowledgments, both ACKs (accepts) and NAKs (rejections)
- Generate values and test messages from a profile or specification, and generate a large volume of data/messages if you’re conducting volume testing
- Repeat test plans/scenarios, and create reports
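To make the first two capabilities concrete, here is a minimal Python sketch of MLLP framing (the TCP wrapping most HL7 v2 interfaces use on the wire) and of checking the acknowledgment code a receiving system returns. The sample messages and function names are illustrative only, not taken from any particular engine or test tool.

```python
# Sketch: frame an HL7 v2 message with MLLP and check the returned ACK.
# MLLP wraps a message in <VT> (0x0b) ... <FS><CR> (0x1c 0x0d).

START, END = b"\x0b", b"\x1c\x0d"

def mllp_frame(message: str) -> bytes:
    """Wrap an HL7 message in MLLP framing bytes for sending over TCP."""
    return START + message.encode("utf-8") + END

def ack_code(raw_ack: bytes) -> str:
    """Strip MLLP framing and return the MSA-1 acknowledgment code (AA/AE/AR)."""
    text = raw_ack.strip(b"\x0b\x1c\x0d").decode("utf-8")
    for segment in text.split("\r"):
        fields = segment.split("|")
        if fields[0] == "MSA":
            return fields[1]
    raise ValueError("No MSA segment in acknowledgment")

# An acknowledgment as it might come back from a receiving system:
sample_ack = mllp_frame(
    "MSH|^~\\&|RCV|HOSP|SND|LAB|20240101||ACK|123|P|2.3\rMSA|AA|123"
)
print(ack_code(sample_ack))  # AA = application accept
```

A real automated test would send the framed message over a socket, read the reply, and assert on `ack_code` – the logic above is the part worth unit testing on its own.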
What to Test
So what test scenarios should you use? You need to test both normal use cases and edge cases.
That said, before you can conduct any testing, you must understand what to expect of your workflows. This should include common workflows – such as a patient being transferred – involving the use of the products that will be interfaced. For example, in many hospitals, emergency department and in-patient ADTs are two separate systems. A new patient that comes through the emergency department would be registered in the ED’s ADT first. And if she is transferred to Med/Surg, you would need to populate the main ADT, either through an interface or manually by re-entering the data.
Or if you’re creating an interface to move patient charge data from a surgical information system to a billing system, you would need test scenarios in which:
- Patient demographics and patient ID are incomplete.
- Billing item information is incomplete.
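The two scenarios above can be automated with a simple completeness check. Here is a Python sketch that flags missing demographics or billing fields in a DFT^P03 charge message. The field positions (PID-3 for patient ID, PID-5 for name, FT1-7 for the transaction code) follow common HL7 v2 usage, but your own interface specification is the real authority; the sample message and helper names are made up for illustration.

```python
# Sketch: flag incomplete demographics or billing data in a DFT^P03
# charge message before it reaches the billing system.

def field(segment: str, index: int) -> str:
    """Return the value of field `index` (1-based, HL7 style) or ''."""
    parts = segment.split("|")
    return parts[index] if index < len(parts) else ""

def check_charge_message(message: str) -> list[str]:
    errors = []
    segments = {s.split("|")[0]: s for s in message.split("\r")}
    pid = segments.get("PID", "")
    ft1 = segments.get("FT1", "")
    if not field(pid, 3):
        errors.append("PID-3 patient ID missing")
    if not field(pid, 5):
        errors.append("PID-5 patient name missing")
    if not field(ft1, 7):
        errors.append("FT1-7 transaction code missing")
    return errors

msg = ("MSH|^~\\&|OR|HOSP|BILL|HOSP|20240101||DFT^P03|42|P|2.3\r"
       "PID|1||12345||Smith^John\r"
       "FT1|1||||20240101|CG|OR-CHG-1")
print(check_charge_message(msg))  # [] -> complete message
```

Running the same check against messages with blanked-out fields gives you repeatable pass/fail results for both incomplete-data scenarios.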
With that understanding in place, you can test to make sure the interface engine behaves as expected for standard – as well as unexpected – workflows. When it comes to edge cases, you’ll need to consider more possibilities. For example, if your interface engine does not accept a certain range or type of data, you’ll need to send such data to it – e.g., a date of birth of 1850 or entered in reverse – and see if the interface triggers an error.
During testing, you’re testing the data format and confirming that you’re not introducing errors. When you code an interface, your specification will be based at least in part on sample messages. By definition, you know that these messages work. So don’t use only these sample messages in your tests. Let’s say your test patient in your sample messages is called John Smith – with four characters in the first name. You test your interface using these sample messages, and everything works. But three months from now, your hospital admits a patient named Benjamin O’Donnell, only no one tested for eight characters in the first name and an apostrophe in the last name. The interface doesn’t like it, and you have a support call (and a none-too-happy clinician) to handle.
By automating your testing, you will feel freer to test at any time and you’ll be more confident about making changes because you’ll know you can easily test each time you change the interface as you’re coding.
Some vendors provide validation guides full of test scenarios. Use them. But check through them first – your workflows may differ.
Make sure that your tests cover your interoperability requirements, and include the following:
1. Workflow. Confirm the interface engine handles your standard workflows as expected.
2. Edge cases: unexpected values. If you’re testing birth dates, include 1899 as well as 2017. Include dates with the month and day reversed. Try different abbreviations for the days of the week. Check all caps on names. Check accented names. Check hyphenated last names, and those with an apostrophe.
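The date edge cases in point 2 lend themselves to a small automated check. Below is a Python sketch that catches an implausibly old birth year and a month/day reversal (which produces an invalid month). The plausible-year cutoff is an assumption for illustration; set it to match your own validation rules.

```python
# Sketch: edge-case checks for birth dates in HL7 YYYYMMDD format.
from datetime import datetime

def check_birth_date(value: str) -> list[str]:
    problems = []
    try:
        dob = datetime.strptime(value, "%Y%m%d")
    except ValueError:
        # e.g. "20241305" -- a day/month swap yields month 13
        return [f"{value!r} is not a valid YYYYMMDD date (month/day reversed?)"]
    if dob.year < 1900:  # assumed cutoff; adjust to your rules
        problems.append(f"{value!r}: implausibly old birth year")
    if dob > datetime.now():
        problems.append(f"{value!r}: birth date in the future")
    return problems

# A few of the edge cases listed above:
for candidate in ["18991231", "20171225", "20241305"]:
    print(candidate, check_birth_date(candidate))
```

The same pattern – feed deliberately bad values in, assert that the interface flags them – applies to the name cases (all caps, accents, hyphens, apostrophes) as well.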
3. Performance, load, and network testing. Though interface developers don’t normally test network infrastructure, you may want to do this during the validation phase to see how workflows and data are impacting overall infrastructure performance. A high-volume interface may need more load testing than a low-volume interface, depending on your interface engine and connectivity infrastructure.
4. Individual systems. You should test each system on its own, kind of analogous to unit testing in software development. For instance, in addition to making sure the surgical and billing systems handle workflow end to end, make sure they work separately.
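The unit-testing analogy in point 4 can be taken literally for any transformation logic your interface contains. Here is a sketch with a hypothetical mapping function (surgical charge record to billing line, invented for this example) exercised on its own, with no live system or engine attached.

```python
# Sketch: "individual systems" testing, unit-test style, on a
# hypothetical surgical-charge-to-billing-line transform.

def map_charge(charge: dict) -> dict:
    """Hypothetical transform from a surgical-system charge record
    to a billing-system line item."""
    return {
        "patient_id": charge["mrn"],
        "code": charge["procedure_code"],
        "amount": round(charge["minutes"] * charge["rate_per_minute"], 2),
    }

# Unit-style check, runnable without any interface engine:
line = map_charge({"mrn": "12345", "procedure_code": "OR1",
                   "minutes": 90, "rate_per_minute": 31.5})
assert line == {"patient_id": "12345", "code": "OR1", "amount": 2835.0}
print("mapping unit test passed")
```

Once the transform passes in isolation, a failure in the end-to-end test points at connectivity or configuration rather than the mapping itself.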
Create a Test System
Once you’ve developed a test plan and test scenarios, you need to configure your interface in a test system. It’s important that you do this in a test system, not a production system. It’s easy to think it can’t hurt to test in a live system, but here are three reasons why that’s a big mistake:
- If you forget to cancel or delete all test transactions once you’re through with testing, you’ll end up with faulty transactions in your production system.
- You run the risk of impacting ePHI or HIPAA-protected health data.
- You don’t want phantom data turning up in a CMS audit. Your clinical systems contain data that constitute a legal record.
So what’s the right way to go about it? Set up your test system using the same configuration as your production system, including the same rights and versions (it’s OK if IP addresses are different). Make sure you upload enough patient data, and that your tests cover your requirements (we can’t say that often enough).
As part of the testing process, you’ll want to run reports. The reports should document the following:
- Number of times the test was run, as well as test duration – if you’re sending messages, this helps you understand performance.
- Test results, including positive validations and failures.
- The messages that were used; note the data source (SQL queries pulling from a database, an HL7® message feed, a batch file).
- Summary of test scenarios that were run.
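The four report items above boil down to a simple aggregation over test-run records. This Python sketch shows one way to collect them; the result-record shape is invented for illustration, since real test tools will have their own schema.

```python
# Sketch: summarize test-run results into the report fields listed above.
from collections import Counter
from statistics import mean

results = [
    {"scenario": "ADT transfer", "passed": True, "duration_s": 0.8},
    {"scenario": "DFT incomplete ID", "passed": False, "duration_s": 0.5},
    {"scenario": "ADT transfer", "passed": True, "duration_s": 0.7},
]

def summarize(results: list[dict]) -> dict:
    outcome = Counter("pass" if r["passed"] else "fail" for r in results)
    return {
        "runs": len(results),                       # number of times run
        "avg_duration_s": round(mean(r["duration_s"] for r in results), 2),
        "passed": outcome["pass"],                  # positive validations
        "failed": outcome["fail"],                  # failures
        "scenarios": sorted({r["scenario"] for r in results}),
    }

print(summarize(results))
```

Extending each record with the message used and its data source (SQL query, HL7® feed, batch file) covers the remaining report items.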
Message Player for Basic Listening and Routing
When conducting development testing during the interface configuration phase, you need a basic listener/receiver tool as you are writing your interface. This allows you to play/test messages without implementing your interface engine in a production system. In fact, some interface engines come with a built-in player for testing. If you don’t have one, you can use Caristix Message Player (it’s free) to send or receive messages. Read about how we use Message Player here.
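To show what the core of such a listener does (this is not Message Player’s implementation – just a sketch of the reply logic any minimal receiver needs), here is a Python function that takes a received MLLP-framed message and builds the ACK to send back, swapping the sending and receiving applications from the MSH header and echoing the message control ID.

```python
# Sketch: the reply logic of a basic HL7 listener. Socket plumbing is
# omitted; a real receiver would read frames from TCP and write this back.
from datetime import datetime

def build_ack(raw: bytes) -> bytes:
    """Build an AA acknowledgment for a received MLLP-framed HL7 message."""
    msg = raw.strip(b"\x0b\x1c\x0d").decode("utf-8")
    msh = msg.split("\r")[0].split("|")
    sending_app, sending_fac = msh[2], msh[3]
    receiving_app, receiving_fac = msh[4], msh[5]
    control_id = msh[9]
    ts = datetime.now().strftime("%Y%m%d%H%M%S")
    ack = (
        f"MSH|^~\\&|{receiving_app}|{receiving_fac}|"
        f"{sending_app}|{sending_fac}|{ts}||ACK|{control_id}|P|2.3\r"
        f"MSA|AA|{control_id}"
    )
    return b"\x0b" + ack.encode("utf-8") + b"\x1c\x0d"

inbound = (b"\x0bMSH|^~\\&|ADT|HOSP|LAB|HOSP|20240101||ADT^A01|MSG001|P|2.3\r"
           b"PID|1||12345\x1c\x0d")
print(build_ack(inbound))
```

Always acknowledging with AA is a simplification that suits development testing, where you mostly want the sending side to keep flowing while you inspect what arrives.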
Download Message Player (free): https://hl7-offers.caristix.com/download-message-player/
Why You Need These Artifacts
Test scenarios and reports make it possible for you to iterate more accurately and verify functionality immediately as you develop and test your interface. This not only saves you time, it helps ensure a better interface at go-live. Plus, it enables traceability so you can more easily troubleshoot and determine who is responsible for addressing any issues as you work with vendors and HIE partners.
Your Feedback Welcome
We’ll be publishing chapters from the HL7® Survival Guide over the upcoming weeks and months. See a topic that needs more detail? Have a different perspective on interfacing and interoperability? Tell us in the comments!
Read More in the HL7® Survival Guide
Chapter 1: How to Integrate and Exchange Healthcare Data
Chapter 2: Pros and Cons of Interfacing Capabilities
Chapter 3: The Heart of the Matter: Data Formats, Workflows, and Meaning
Chapter 4: How to Work with Vendors and Developing Your EHR Strategy
Chapter 5: Vendors, Consultants, and HL7® Interface Specifications
Chapter 6: Interfacing Artifacts: HL7® Conformance Profiles or Interface Specifications
Chapter 7: Interfacing Artifacts: Gap Analysis
Chapter 8: Interfacing Artifacts: Test Scenarios and Test Systems
Chapter 9: Interfacing Artifacts: Message Samples and Test Messages
Chapter 10: Process and Workflow
Chapter 11: Maintenance, Troubleshooting, and Monitoring
Chapter 12: Definitions
Chapter 13: Contributors and Resources