Workgroup

Introducing Caristix™ Workgroup

Caristix Workgroup is designed to help interface analysts and engineers manage the entire interfacing lifecycle. Workgroup provides the following features and functionality:

Scoping and Configuration

HL7 Messaging

Testing

Collaboration

System Requirements

  • Operating system: Windows 7 or Windows 10, 32- or 64-bit editions
  • Memory: 4 GB of RAM
  • .NET Framework 4.7.2

Table of Contents

Getting Started

Install and Register Caristix Workgroup Software

  • Install Caristix Workgroup by double-clicking the installation file (.msi file) you received.
  • Launch the software, and fill out the Email, First Name, Last Name, and Organization fields in the registration form.
  • Click the Activate button.
  • If you have a trial version, you will need to purchase an annual license to continue using Workgroup after the trial period ends.
  • After the registration step is completed, you will need to access a Library.

Managing Your Files

Workgroup File Management

You can add documents (Word, Excel, PDF documents, etc.) to the Library in one of the following ways:

Import document

  • Navigate to the Document Library
  • Right-click the folder you want to add the document to
    Note: You can also create a new folder by right-clicking the parent folder and selecting New –> Folder
  • Click Import Document…
  • Select the document(s)

Documents will be uploaded to the library and made available from Workgroup.

Drag document to the library

  • Just like drag-and-drop in Windows Explorer, open a folder on your machine and open the Document Library in Workgroup.
  • Select the document(s) or folder(s) and drag them to the destination folder in Workgroup.

Documents and folders will be uploaded to the library and made available from Workgroup.

Add a shortcut to a local document

  • Navigate to the Document Library
  • Right-click the folder you want to add a shortcut to
    Note: You can also create a new folder by right-clicking the parent folder and selecting New –> Folder
  • Click New –> Shortcut
  • Select the document(s)

Document(s) will not be uploaded to the server and will only be available from your computer. Other users of the same library will see the shortcut, but won’t be able to open it. This acts like a normal shortcut in Windows.

Actions

There are actions that can be performed on the library via the Main Menu’s Action section, the right-click contextual menu (right-click a node or blank space), and the Gear icon beside the search bar.

When there is no document highlighted, the available actions are:

  • Sort By [Date, Name, Type]: select a criterion to sort in ascending order. Select it again to sort in descending order.
  • New > Folder: creates a new root folder.
  • Refresh Library: refresh the entire library.

When a document is selected, the available actions depend on the document type. Common actions are:

  • Open: Open the selected document with the default editor (Profile Explorer, Scenario Explorer, etc).
  • Open With: Open the selected document with any application that can handle the document type.
  • Send To:
    • Desktop: Send a copy of the document to your computer’s desktop.
    • Mail Recipient (file): Send a copy of the document as an email attachment.
  • Copy: Copy the document.
  • Cut: Cut the document.
  • Paste: Paste a previously copied document.
  • Rename: Rename the selected document.
  • Delete: Delete the selected document.
  • Refresh Library: Refresh the entire library.

Profiles: Scoping and Updating Interface Specifications

The foundation of Caristix software is profiles. Profiles are another word for interface specifications, specs, or conformance profiles. They are a way to capture the data formats and code sets you need for exchanging information between systems. Profiles provide a list of message types (or trigger events), segments, fields, components, sub-components, data types, and data tables that are specific to a system. The profiles you develop with Caristix software can be used to:

  • scope and document the systems in an interface: that’s the core function of a profile
  • validate an interface: use a profile to execute tests
  • update documentation
  • query your messages that are based on that profile
  • de-identify messages that are based on that profile

How to Build a Profile or Specification

You can either build a spec manually by reading sample HL7 messages over the course of a few days, or you can use Caristix software to automatically build one for you, using the reverse-engineering functionality in our software. Learn about the tasks related to building, scoping, and updating specifications as follows:

Creating a Profile

The Role of Profiles

In Caristix software, profiles serve as interface documentation. The Library is a repository for all interface specifications: HL7 reference specifications (which come built into Caristix Workgroup software), product specifications, and specifications for the customized mapping and configuration that must occur for working interfaces as well as any other type of documentation file.

There are several ways to create a profile or specification:

Create Profile Based on HL7 Reference

Copy a Reference Profile

This method is useful when you have a large volume of message types and trigger events to document, based on a specific HL7 version. If your specification is more limited, consider building a profile from individual message elements.

  1. From the Documents screen, right-click the HL7 reference standard appropriate to the new profile you want to create.
  2. Select Copy.
  3. If needed, create a new folder: right-click and select New –> Folder.
  4. On the destination folder, right-click and select Paste.
  5. A new profile labeled Copy of HL7 v2.x.cxp appears. This profile includes all messages, segments, fields, and data types from the HL7 version you selected.
  6. Rename the profile.

[Screenshot: Copy Profile]

You will need to edit the profile to reflect the specification. Go to Editing a Profile to learn more.

Create Profile from Message Elements

A Profile from Trigger Events, Segments, and Fields

You can also build a profile from individual message elements. This method is useful when the specification you are building is limited to a small subset of an HL7 version and when customization is extensive.

  1. Navigate to the Documents pane, and right-click on a folder.
  2. Select New > Profile > Blank Profile.
  3. A new profile is created. Rename the profile.
  4. Double-click on the Profile to enter the Profile Explorer.
  5. Add trigger events and segments to build out the profile. There are two ways to do this, as follows.

 

Add an Event from an Existing Profile

You can add a trigger event or message type from one of the HL7 references or from a previously built profile.

  1. In the Documents pane, double-click on the profile you want to build out.

  2. In the Profile Explorer, right-click on the first node.

  3. Select Import, Trigger Event... In the Import from a Profile window, select the source Profile you want to import events from. A new window, Import selection, opens.

[Screenshot: Import Selection - Trigger Event]

 
Select the message types you would like to create. Note that you can expand the tree view to select individual events. In the Import mode section, you can select the type of import you want to perform.
 
  • Import only missing definitions
    Why choose this option: You only want to import elements that don’t already exist in your profile.
    Action: Imports definitions that are not present in the current profile, plus all referenced elements.
    Example: Your profile doesn’t have an ADT_A01 trigger event you’d like to add from HL7 v2.6.
  • Replace all definitions
    Why choose this option: You need to replace all existing definitions with the imported definitions.
    Action: Replaces existing elements with imported elements, overwriting current definitions. The segment definition will change to the imported definition.
    Example: Your profile has an ADT_A08 definition that you would like to replace with the one from v2.6.
  • Blend definitions
    Why choose this option: You need to import a definition from another profile, but also need to keep all definitions from both profiles.
    Action: Imports all selected and referenced definitions and duplicates all elements that are different.
    Example: Your profile has a custom ADT_AZZ definition from one source system. A second source system uses a different definition. You need to code an interface for both definitions.

 

Add a New, Undefined Event

You can add an event or message without segments, fields, associated data types, or tables. These elements must be defined later. Use this method when the event to be specified has not been formally defined in the HL7 standard.

  1. In the Documents pane, double-click on the profile you want to edit. Right-click on the first node and select Add, Trigger Event. A new trigger event is added.

    [Screenshot: Add Trigger Event]

  2. Rename the trigger event and add a description.

Once you have added trigger events, you can edit segments, fields, and data types within your profile. See Editing a Profile for more information.

Create a Profile from Messages

Reverse Engineering

The Reverse Engineering tool enables you to create a profile from an HL7 log  (or HL7 message file). A profile (also known as a specification or message definition) documents the message structure and content, including the use of Z-segments and custom data types.

Setup: Choose Log Files

To open the Reverse-Engineering tool, click PROFILE v2, New, With Reverse-Engineering Wizard... The tool opens to Choose Log Files.

  • Select messages from files
    1. Click the Add button to load one or more HL7 logs.
    2. Optional: Check Use Large File mode when loading files above 10 MB in size. (This option is selected automatically if the file size reaches 25 MB.)
  • Alternative: Select messages from a database or Connector
    1. Click on the Database tab and select a Data Source. Click on Sources… to configure a Data Source.

Then click Next to go to the next step.

[Screenshot: Reverse Engineering - Log Selection]

Setup: Choose a Reference Profile

To begin building a profile based on the messages you just loaded, the software needs an established profile to compare against. Select a profile that most closely matches your messages, then click Next. (Note: the software picks up on the HL7 version specified in your messages, but you are free to choose another reference).

[Screenshot: Reverse Engineering - Reference Profile Selection]

Setup: Filter Messages

The messages now load. (If loading is slow, you can click the Cancel button in the Loading dialog box; only the messages loaded thus far will appear.)

If there are files, events, segments, or other data elements you don’t require for the profile, filter them out in this step (read Filter an HL7 Log to learn more), then click Next to go to the next step. To reverse-engineer all messages without filtering, simply click Next.

[Screenshot: Reverse Engineering - Filter Messages]

Setup: Sending and Receiving Application Filters

This step is optional. The software will detect all sending and receiving applications present in the messages. If only one combination is detected, this step is skipped.

You have two options here. You can either generate a single profile combining all applications represented in the message file, or create separate profiles for each sending and receiving application combination. The second option lets you choose specific combinations; it will also run the next five steps consecutively for each selected combination.

[Screenshot: Reverse Engineering - Sending and Receiving Application Filters]

Step 1: Initialize New Profile

The software sets up the reference profile and messages you selected. Once the processing is complete, simply click Next to continue, as specified on-screen.

Step 2: Options

Choose between Basic and Advanced field analysis.

Basic Field Analysis

This choice lets you analyze fields and data values and assign known data types. If the software finds data values and fields that do not match known data types, a new data type will be assigned. You can manually edit the data types later, when the reverse-engineered profile appears in the Library.

Select Basic Field Analysis if:

  • you are not sure that data types are important to your analysis.

  • you want to speed up your analysis and focus on identifying details in other message elements such as events and segments.

Advanced Field Analysis

This choice lets you fully analyze fields and data values. Data values and fields that do not match expected data types will be flagged. You will have the opportunity to either create custom data types to handle non-HL7-compliant data, or assign an existing data type.

Select Advanced Field Analysis if:

  • you need complete data type analysis for your interfacing project

  • you are comfortable creating new data types for further analysis

Data and Field Options

This section allows you to set more specific options for data and field analysis.

Once you make your selection in Step 2, click Next.

Step 3: Analyze Messages and Segments

The software reads through the messages and segments to begin building the profile. When processing is complete, click Next to continue, as specified on-screen.

Step 4: Analyze Fields and Data Types

This step creates the field structure in your profile, assigns data values to user tables, and associates data types to fields and values.

Step 4: Analyze Fields and Data Types – Basic Mode

If you selected Basic Field Analysis in Step 2, Basic Mode appears in Step 4. Workgroup processes the fields and data types automatically. When the processing is finished, click Next.

Step 4: Analyze Fields and Data Types – Advanced Mode

If you selected Advanced Field Analysis in Step 2, Advanced Mode appears in Step 4. Workgroup analyzes each segment for data values and fields that do not match expected data types. In other words, the software automatically performs a conformance check. When non-compliant elements are flagged, the software automatically suggests a data type and field structure. You can accept the suggestion, assign another data type, or create a new data type to handle the non-compliant values and fields.

Description

  • Add a description or notes if needed.

Data Type

  • Select a data type from the drop-down list.
  • Or click the New link to create a new one. If you click New, the Data Type Creation dialog box opens.
    [Screenshot: Reverse Engineering - Data Type Creation]
  • Select a data type from the drop-down list, then click the items in the Data Type Details pane to edit as appropriate.

Edit as needed to reflect the maximum field length.

Usage

Specify usage.

Flagged Data Values

This tab provides a list of the data values that were flagged as non-compliant, as well as how many times they were found in the messages.

When processing is complete, click Next to continue.

Step 5: Collect Message Flows (Optional)

This step collects and analyzes the message flows in your logs (if you selected this option in Step 2). These message flows are stored in the profile and available for future use, for example to generate test messages.

Step 6: Save New Profile

This is the final step in the Reverse-Engineering wizard. Specify a folder to save the profile to, or browse your computer to save it locally. Name the profile and provide a description if needed. Click Save to close the Reverse-Engineering wizard and go to the Documents pane. (If multiple Sending and Receiving Applications were selected, the wizard will start a new analysis at Step 1.)

Filter an HL7 Log

Remove Unneeded Data Values, Trigger Events, and Segments

When the reverse engineering wizard is run, you have the option of filtering out unneeded data values, trigger events, and segments. These data elements may not be needed for the profile you are creating, despite their presence in the HL7 message log.

Trigger Events and Segments

  1. Click the Trigger Events or Segments tab, then use the check-boxes to select or unselect specific elements.
  2. Selected message elements automatically appear in the Messages area.

Data Filters

Data filters let you set up queries to find messages containing specific data. Queries can filter on specific message building blocks: segments, fields, components, and subcomponents.

Filter Operators

  • is: Includes messages that contain this exact data
  • is not: Excludes messages that contain this data
  • =, <, >, <=, >=: Filters on numeric values
  • like: Covers messages that include this data somewhere in the element (e.g. 42 in 4342, 3421, 4286)
  • present: Looks for the presence of a message element (such as a segment, field, etc.)
  • empty: Looks for unpopulated message elements (such as a segment, field, etc.)
  • in: Filters on multiple data values in a message element rather than a single value
  • regex syntax: .NET regular expression syntax, a more powerful alternative to wildcard expressions
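To make the operator semantics concrete, the following JavaScript sketch approximates each operator as a predicate on a single field value. This is illustrative only: the wizard evaluates filters internally, and its regex operator uses .NET syntax rather than JavaScript's.

```javascript
// Rough equivalents of the filter operators, applied to one field value.
// Function and parameter names are illustrative, not Caristix API.
function applyOperator(op, fieldValue, target) {
  switch (op) {
    case 'is':      return fieldValue === target;             // exact match
    case 'is not':  return fieldValue !== target;             // exclusion
    case 'like':    return fieldValue.indexOf(target) !== -1; // "42" matches "4342"
    case 'present': return fieldValue !== undefined && fieldValue !== '';
    case 'empty':   return fieldValue === undefined || fieldValue === '';
    case 'in':      return target.indexOf(fieldValue) !== -1; // target is a list of values
    case 'regex':   return new RegExp(target).test(fieldValue);
    default:        throw new Error('Unknown operator: ' + op);
  }
}
```

For example, applyOperator('like', '4342', '42') matches, while applyOperator('is', '4342', '42') does not.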

 

Building Filters

  1. In the Messages area, look for the field containing the data you want to filter on. It could be a patient name, a date, a location, or another string. Right-click within the field. A menu appears.
  2. Click Add Data Filter. The filter is automatically created within the Data Filters Area, and the data is highlighted within the Messages area.

 

Data Sorting

The data sorting functionality lets you set up sort queries on data values.

  1. In the Messages area, look for the field you want to sort. Right-click within the field. A menu appears.
  2. Click Add Sort. The sort query is automatically created within the Sort area. Change the data order under the Order column, and change the query order using the up and down arrows.

 

Manage Search and Filter Rules File

You can use an existing Search and Filter Rules file or save newly created rules throughout the Reverse Engineering filtering step. To do so, right-click anywhere in the Data Filters, Sorts or Data Distributions section.

Import a Profile

To use a profile created in another installation of the application, you will need to import the file.

  1. Right-click on a Folder and select Import Document…
  2. Browse to the profile file you want to import.
  3. Click Open. The file loads.
  4. The profile is added to the folder.

Editing a Profile

Editing Tasks

After creating a profile, you will need to edit it. There are three main editing tasks: editing existing message elements, adding new elements, and deleting elements you no longer need.

Add Segment

There are two ways to add segments, depending on your needs. You can either add a segment defined in the profile you’re working on, or add one from a different profile.

Start here:

  1. In the Documents screen, double-click on the profile you want to modify. The Profile Explorer appears.
  2. Right-click on the first node, select Segments…
  3. Click Add Segment. Choose either New… or From Profile… as explained below.

Option: Add Segment –> New

To create a new Segment definition, click Add Segment, New…. A new Segment definition appears at the bottom of the list.

You can also create a copy of an existing Segment definition by right-clicking on the source definition, selecting Copy, then right-clicking again and selecting Paste. A new Segment definition appears at the bottom of the list.

Option: Add Segment –> From Profile

  • To add a segment based on a different profile, click From Profile….
  • In the Import from a Profile window, select the Profile to import segments from.
  • The Import Selection dialog box appears. Select the segments you wish to add and the import mode to perform. The next section explains the difference.

[Screenshot: Import Selection - Segments]

 

  • Import only missing definitions
    Why choose this option: You only want to import elements that don’t already exist in your profile.
    Action: Imports definitions that are not present in the current profile, plus all referenced elements.
    Example: Your profile doesn’t have a PID segment you’d like to add from HL7 v2.6.
  • Replace all definitions
    Why choose this option: You need to replace all existing definitions with the imported definitions.
    Action: Replaces existing elements with imported elements, overwriting current definitions. The segment definition will change to the imported definition.
    Example: Your profile has an XPN definition that you would like to replace with the one from v2.6.
  • Blend definitions
    Why choose this option: You need to import a definition from another profile, but also need to keep all definitions from both profiles.
    Action: Imports all selected and referenced definitions and duplicates all elements that are different.
    Example: Your profile has a custom ZOD definition from one source system. A second source system uses a different definition. You need to code an interface for both definitions.

Add Segment Groups

    • In the Documents screen, double-click on the profile you want to modify then expand the tree view to open the trigger event you need to change.
    • Right-click on it and select Add, Segment Group.
    • Rename the group and edit the description and other attributes.


    [Screenshot: Add Segment Group]

 

Add Data Types

This is useful when you need to add a new data type for a Z-segment or a custom field.

  1. In the Documents screen, double-click on the profile you want to modify. The Profile Explorer appears.
  2. Right-click on the first node and select Data Types…
  3. Mouse over any data type, and right-click. A menu appears.
  4. Select Add Data Type. Choose either New… or From Profile… as explained below.

Option: Add Data Type –> New

  • To create a new data type for your profile, click New… .
  • A new data type appears at the bottom of the Data Type Library. In the Configuration pane, rename the data type, add a description, and edit attributes (maximum characters and basic type).

[Screenshot: Data Type Configuration]

  • You can also create a copy of an existing data type definition by right-clicking on the source definition, selecting Copy and then right-clicking again and selecting Paste. A new data type definition appears at the bottom of the list.

Option: Add Data Type –> From Profile…

  • To add a data type based on a different profile, click From Profile….
  • In the Import from a Profile window, select the Profile to import data types from.
  • The Import Selection dialog box appears. Select the data types you wish to add and an import mode. See table below for import mode choices.

[Screenshot: Import Selection - Data Types]

 

  • Import only missing definitions
    Why choose this option: You only want to import elements that don’t already exist in your profile.
    Action: Imports definitions that are not present in the current profile, plus all referenced elements.
    Example: Your profile doesn’t have a TS (time-stamp) data type you’d like to add from HL7 v2.6.
  • Replace all definitions
    Why choose this option: You need to replace all existing definitions with the imported definitions.
    Action: Replaces existing elements with imported elements, overwriting current definitions. The definition will change to the imported definition.
    Example: Your profile has an HD definition that you would like to replace with the one from v2.6.
  • Blend definitions
    Why choose this option: You need to import a definition from another profile, but also keep all definitions from both profiles.
    Action: Imports all selected and referenced definitions and duplicates all elements that are different.
    Example: Your profile has a custom TS definition from one source system. A second source system uses a different definition. You need to code an interface for both definitions.

Add Tables

This is useful when you need to add a new table for a Z-segment.

  1. In the Documents screen, double-click on the profile you want to modify. The Profile Explorer appears.
  2. Right-click on the first node and select Data Tables… The Table Editor appears.
  3. Mouse over any table section, right-click, select Add New Table, and then choose the section where you want to add the table.

    [Screenshot: Add Table]
  4. Edit the ID, Name and entries.

[Screenshot: Edit Table]

Edit Segments and Fields

Edit segments and fields, so you capture the data elements pertinent to your specification. Due to the nature of the HL7 standard (HL7 is object-oriented), any changes made are global changes and affect the entire profile.

There are two ways to access segments and fields:

1. Edit within Each Message

Click the “+” sign to expand a message, then edit the segment.

[Screenshot: Edit Segment within a message]

2. Edit from the Segment Library

Right-click a message, and select Segment... A separate window displays the Segment Library. Expand the segment you wish to edit by clicking the plus sign.

[Screenshot: Edit Segment from the Segment Library]

To edit each field or individual component, click on the title. Under the Configuration tab, make the changes to each field attribute.

Edit Data Types

  • In the Documents screen, double-click on a profile, right-click on the first node and select Data Types… The Data Type Library appears.
  • Click a data type to expand. In the Configuration pane, edit attributes.

[Screenshot: Edit Data Type]

Edit Tables

    • In the Documents screen, double-click a profile, right-click on the first node and select Data Tables… The Table Editor appears.
    • Click a table to expand. In the Configuration pane, edit attributes.

    [Screenshot: Edit Table]

Delete Trigger Events

This is useful when you want to reduce the profile to relevant trigger events.

  1. In the Documents screen, double-click on the profile you want to edit. Expand the tree view by clicking the plus (+) sign next to the profile.
  2. Use the Delete key on your keyboard to delete the events you don’t need in the profile. Note: you can keep hitting the Delete key multiple times to remove a batch of trigger events.

Field Validation

From the Validations tab, you can configure a set of rules that validate that message content (data) conforms.

Configure

Example – MSH.7 – Date/Time of Message

In the following example, the rule validates MSH.7 and raises a conformance gap if the value does not conform to the format “yyyy-mm-dd hh:MM:ss”.

[Screenshot: Field Validation - MSH.7]
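The same check can be sketched as a JavaScript rule. This is illustrative only: in the engine the field value comes from the validation context and errors are reported through callback(), so both are passed in as parameters here to keep the sketch runnable on its own.

```javascript
// Validates that a date/time string matches "yyyy-mm-dd hh:MM:ss".
// The function and parameter names are illustrative, not Caristix API.
function validateDateTime(value, callback) {
  var pattern = /^\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}$/;
  if (!pattern.test(value)) {
    callback('MSH.7 does not conform to format yyyy-mm-dd hh:MM:ss');
  }
}
```

For example, "2024-01-01 12:00:00" passes, while a raw HL7 timestamp such as "20240101120000" is flagged.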

Operators

Operators let you define validation rules ranging from simple to complex.

Operators List

  • is: Valid if the element contains this data
  • is not: Valid if the element does not contain this data
  • =: Valid with an exact match to this data (this is like putting quotation marks around a search engine query)
  • <: Less than. Covers validating on numeric values.
  • <=: Less than or equal to. Covers validating on numeric values.
  • >: Greater than. Covers validating on numeric values.
  • >=: Greater than or equal to. Covers validating on numeric values.
  • like: Valid if the element includes this data somewhere.
  • present: Looks for the presence of a particular message building block (such as a field, component, or sub-component)
  • empty: Looks for an unpopulated message building block (such as a field, component, or sub-component)
  • in: Builds a filter on multiple data values in a message element rather than just one value.
  • in table: Valid if the data is in a specific table of the Profile.
  • matching regex: Uses .NET regular expression syntax to build validations. Intended for advanced users with programming backgrounds.

Learn more about regular expressions here:

This is also a good utility to help you create complex regular expressions:

JavaScript Validation

HL7 Message JavaScript Engine API

The JavaScript engine allows you to create custom validation rules, which will be used during the conformance validation of your HL7 messages.

You can add custom JavaScript validation rules at the profile, trigger-event, segment, and data-type levels. The JavaScript rules will be evaluated during HL7 message validation, depending on the element of the message being validated.

Profile: Validation rules added at the Profile level will be evaluated first and only once per message.

Trigger-Event: Validation rules added at the Trigger-Event level will be evaluated only once per message and will only be evaluated for matching messages. The MSH.9 – Message Type is used to match messages and trigger-events.

Segment: Validation rules added at the Segment level will be evaluated for each instance of the segment in a message.

Data-Type: Validation rules added at the Data-Type level will be evaluated for each instance of the data-type in a message.

By using the callback() method, you can notify the message validator when an error has occurred. You can provide callback() with an error message as a string, or with a ValidationError object.
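For example, a minimal profile-level rule might report a plain-string error. Since callback() only exists inside the engine, this sketch substitutes a stub that records errors, and the sample value stands in for data the engine would supply:

```javascript
// Stub standing in for the engine's callback(); it simply records errors so
// the rule can be exercised outside the engine.
var errors = [];
function callback(error) { errors.push(error); }

// A hypothetical profile-level rule: flag an empty message control ID.
// In the engine, this value would be read from the message context.
var messageControlId = '';
if (messageControlId === '') {
  callback('MSH.10 - Message Control ID is empty');
}
```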

HL7 Message Validation Context

During HL7 message validation, the JavaScript engine context is updated, allowing you to access the current element being validated.  The context has the following properties you can refer to:

  • profile: Allows you to fetch data from the profile. See the Profile object definition.
  • message: Allows you to access the message being validated and any of its properties or methods. See the Message object definition.
  • segment: Allows you to access the current segment being validated and any of its properties or methods. See the Segment object definition.
  • field: Allows you to access the current field being validated and any of its properties or methods. See the Field object definition.
  • component: Allows you to access the current component being validated and any of its properties or methods. See the Component object definition.
  • subComponent: Allows you to access the current sub-component being validated. See the SubComponent object definition.
  • dataType: Allows you to access the current data-type instance being validated. The dataType can be any Field, Component, or SubComponent.
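As a rough illustration of how a segment-level rule might use this context: the segment object below is faked, and its property names are assumptions for the sketch, not the documented API.

```javascript
// Faked stand-in for the engine's segment context, shaped as a name plus an
// array of field values (property names are illustrative only).
var segment = { name: 'PID', fields: ['1', '', '123456^^^HOSP^MR', '', 'DOE^JOHN'] };

// Stub callback() that records reported errors.
var errors = [];
function callback(error) { errors.push(error); }

// Hypothetical rule: PID.5 - Patient Name must be populated.
if (segment.name === 'PID' && !segment.fields[4]) {
  callback('PID.5 - Patient Name is empty');
}
```

Here PID.5 is populated ("DOE^JOHN"), so the rule reports nothing.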

 

ValidationError

ValidationError allows you to return a customized validation error in the callback method. The ValidationError object exposes the following properties and methods:

Constructors

ValidationError()

Returns a new, empty ValidationError.

var validationError = new ValidationError();
callback(validationError);
// Returns a new ValidationError object in the callback method.

Properties

summary: string

A summary of the error.

var validationError = new ValidationError();
validationError.summary = 'Invalid Medical Number';
// The validation error's summary should be 'Invalid Medical Number'

description: string

A detailed description of the error.

var validationError = new ValidationError();
validationError.description = 'PID.3 does not contain a valid MR - Medical Number for the patient';
// The validation error's description should be 'PID.3 does not contain a valid MR - Medical Number for
// the patient'

Methods

toString(): string

Returns the JSON string value of the ValidationError.

var validationError = new ValidationError();
validationError.description = 'PID.3 does not contain a valid MR - Medical Number for the patient';
var validationErrorString = validationError.toString();
// validationErrorString should be '{ "description":"PID.3 does not contain a valid MR - Medical Number for the patient"}'
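Putting the pieces together, a complete custom rule might validate a field and report a structured error. The ValidationError stub, the stub callback(), and the sample value below are stand-ins so the sketch runs outside the engine, where the real constructor and callback are provided for you:

```javascript
// Stand-ins for what the engine provides: a ValidationError constructor and
// a callback() that collects reported errors.
function ValidationError() { this.summary = ''; this.description = ''; }
var errors = [];
function callback(error) { errors.push(error); }

// A hypothetical rule: the medical record number must be numeric.
var identifier = '12A456'; // in the engine: the field value from the context
if (!/^\d+$/.test(identifier)) {
  var validationError = new ValidationError();
  validationError.summary = 'Invalid Medical Number';
  validationError.description = 'PID.3 does not contain a valid MR - Medical Number for the patient';
  callback(validationError);
}
```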

Synchronizing a Profile

When you publish a profile report to Word, you may need to edit descriptions in Word then save those edits to the corresponding profile. This is done using the Synchronize function.

  1. In Microsoft Word, edit the report. You can edit text descriptions and information in tables. Do not edit headings or titles.
  2. Save the document in Word.
  3. Close the document.
  4. Navigate to Caristix Workgroup. In the Documents view, click PROFILE, Synchronize… .
  5. Select the Word document you just edited and click Open. This saves the changes you made to tables and description fields in the Word document back to the original profile. (Note that the .docx document will not be opened in Microsoft Word in order to synchronize it.)

 

The synchronization feature uses internal Word document markups so it can relate any change to the right profile section.  When updating the document, make sure the document structure is preserved. It is suggested that you experiment with this functionality before starting document updates on a large scale.  For instance:

  • New sections (new trigger events, segments, or other elements) will not contain the required internal markups, so they will not be synchronized back to the library.
  • When adding new elements to a table, make sure you add a new row to the table (not just a new line).
  • Copying sections using copy/paste may duplicate internal markups.

Using Extra Content

What is Extra Content?

Extra Content enables you to build profiles that include more than the official HL7 content.

Basic profiles, without Extra Content, enable you to define message-related structure and content through trigger events, segments, fields, tables, etc. In turn, each of those elements is described through attributes such as Sequence, Name, Optionality, etc. The software includes a standard set of attributes describing profiles and profile entities; Extra Content lets you add new elements and new attributes.

For instance, you may want to add a change history table to a profile, in order to track changes over time. Or you might want to add an extra column to store source descriptions for code set values. Both of these can be added using Extra Content. This content will be displayed as part of the profile, exactly the same way standard HL7-related elements and attributes are displayed.

What is an Extra Content Template?

An Extra Content Template is a set of extra elements and attributes that you bundle together.

The Extra Content template itself doesn’t contain any data. Instead it defines the containers (or placeholders) for your data. An Extra Content Template represents the structure of the content you add to a profile. You can set up a Template and use it across one or more profiles. Once a profile is associated with an Extra Content Template, you can enrich the profile definition by populating the Extra Content areas.
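As a rough sketch (these object shapes are illustrative, not the Workgroup file format), the separation between a template and the data that fills it can be pictured like this:

```javascript
// Conceptual sketch: an Extra Content Template defines containers
// (structure only); each profile supplies its own data for them.
const changeHistoryTemplate = {
  name: 'Change History',            // grid section shown in the profile
  type: 'Grid',
  columns: [
    { name: 'Date',        type: 'Date' },
    { name: 'Author',      type: 'String' },
    { name: 'Description', type: 'String' }
  ]
};

// The template holds no data. A profile associated with it supplies rows:
const profileExtraContent = {
  'Change History': [
    { Date: '2021-03-01', Author: 'J. Doe', Description: 'Added ZPI segment' }
  ]
};

// One template, many profiles: each profile keeps its own rows while
// sharing the structure defined once in the template.
const rowCount = profileExtraContent[changeHistoryTemplate.name].length;
```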

How Does an Extra Content Template Work?

Please refer to the following sections for more information:

Manage Extra Content Templates

Manage Extra Content Templates through the Extra Content Library. To access the Extra Content Library:

  • Go to the Documents pane
  • Select PROFILE, Manage Extra Content Template…

From the Extra Content Library (Manage Extra Content Templates window), you can:

Create Extra Content Template

To create a new Extra Content Template, open the “Manage Extra Content Template” window.

  • Click New
  • Name the template
  • Click OK

Build your templates by adding Extra Content to profile sections as follows:

Extra Content for Profile

Add text, images, and grids to the Profile description area.

Profile Section

  • SEQ: Set the section’s order in the profile’s definition tab.
  • GROUP: You can regroup multiple sections in a tab-page control. Type or select a name that already exists to assign the section to a group.
  • DISPLAY ON REPORTS: If checked, the section will be part of the Word Document.
  • SECTION NAME: The name of the section.
  • TYPE: The type of the section. See types below.

Section Types

Add a New Text Area

  • In the upper-right tab panel, select Profile Section.
  • Click Add… A new row is added.
  • Modify the sorting sequence if needed.
  • Modify the grouping identifier if needed. Sections with the same grouping identifier will be presented together with each section being a tab.
  • Indicate if the section should appear in Word Document Report.
  • Give it a name by clicking in the name cell and typing in a short phrase name. This name is going to be displayed in profile description area as the name of this new text area.
  • Select Text as type.
  • Click OK.

Once you go back to the profile, you can enter text in the Profile description area.

Add a New Image

  • In the upper-right tab panel, select Profile Section.
  • Click Add… A new row is added.
  • Modify the sorting sequence if needed.
  • Modify the grouping identifier if needed. Sections with the same grouping identifier will be presented together, each section being a tab.
  • Indicate if the section should appear in the Word Document Report.
  • Give the new image a name by clicking in the name cell and typing in a short phrase name. This name is going to be displayed in profile description area as the name of this new image area.
  • Select Image as type.
  • Click OK.

Once you go back to the profile, you can add an image. To do so, click the Browse… button and select the image you want to include.

Add a New Grid

  • In the upper-right tab panel, select Profile Section.
  • Click Add… A new row is added.
  • Modify the sorting sequence if needed.
  • Indicate if the section should appear in the Word Document Report.
  • Give the new grid a name by clicking in the name cell and typing in a short phrase name. This name is going to be displayed in profile description area as the name of this new grid.
  • Select Grid as type.
  • A new table Column appears below.  This table is used to configure the grid columns.
  • Click Add… to add a column to the grid.
  • Modify the sorting sequence if needed.
  • Modify the grouping identifier if needed. Sections with the same grouping identifier will be presented together with each section being a tab.
  • Give the new column a name by clicking in the name cell and typing in a short phrase name.
  • Select the new column type.
    • String: column contains regular text
    • Checkbox: column contains checkboxes (check as needed in the profile description area)
    • Date:  column contains dates.  Includes a date picker.
    • List:  column contains picklist of values. These are the only valid values for this column.
    • Table:  Similar to List column.  However, the list of valid values comes from a table defined within the profile.
  • For some column types, you need to provide additional information such as a list of valid values or a profile table name.  Provide them as requested.
  • Repeat as needed for each column in the new grid.
  • Click OK.

Once you go back to the profile, you can add data to your new grid.  To do so, click the Add… button to create new grid rows.
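The column types above constrain what each grid cell may contain. The following sketch illustrates the idea; the validateCell helper and the column definitions are hypothetical, not part of the Workgroup API:

```javascript
// Sketch of how the documented grid column types constrain cell values.
function validateCell(column, value) {
  switch (column.type) {
    case 'String':   return typeof value === 'string';
    case 'Checkbox': return typeof value === 'boolean';
    case 'Date':     return !isNaN(Date.parse(value));      // date picker values
    case 'List':     return column.values.indexOf(value) !== -1;       // fixed picklist
    case 'Table':    return column.tableValues.indexOf(value) !== -1;  // values from a profile table
    default:         return false;
  }
}

// Hypothetical List column with its only valid values:
const statusColumn = { name: 'Status', type: 'List', values: ['Draft', 'Reviewed', 'Approved'] };
const okDraft  = validateCell(statusColumn, 'Draft');    // valid picklist value
const badValue = validateCell(statusColumn, 'Pending');  // not in the picklist
```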

Extra Content for Profile Elements

You can add Extra Content embedded next to the HL7-defined profile elements. This is a quick way to display needed profile data such as additional descriptions, items to validate, business and mapping rules, etc.

Add a New Text Column

  • In the upper-right tab panel, select Field (or Segment, Segment Group, or Table, depending on the element you want to modify). You’ll see the attributes/columns that are already present. Grayed-out columns are standard and cannot be modified.
  • Click Add… A new row is added
  • Modify the sorting sequence if needed
  • Indicate if the column should appear in the Word Document Report
  • Give it a name by clicking in the column name cell and typing in a short phrase name. This name is going to be displayed as the column header.
  •  Select the column type: String
  • Click OK

You are now ready to populate the new column with text:

  • Expand the selected profile
  • Select a segment so fields are listed (choose a different profile element as needed)
  • The new column is now added to the field grid.  You can enter text.

 

Add a New List Column or Picklist

List columns are useful when you’re able to define valid values for the column — in other words, a picklist.

  • In the upper-right tab panel, select Table (or Segment, Segment Group, or Field, depending on the element you want to modify). You’ll see the attributes/columns that are already present. Grayed-out columns are standard and cannot be modified.
  • Click Add… A new row is added
  • Modify the sorting sequence if needed
  • Indicate if the column should appear in the Word Document export
  • Give it a name by clicking in the column name cell and typing in a short phrase name. This name is going to be displayed as the column header.
  • Select the column type: List
  • A new table Values appears below.  Populate this table with your picklist values as follows:
    • Click Add… A new value row is added
    • Give the new value a label.
    • Enter the value in the VALUE cell
    • Repeat as needed
  • Click OK

Next, populate the profile:

  • Expand the selected profile in the Library
  • Select a field with an associated table

The new column is now added to the table content.  You can pick values from the picklist to assign values to the cell.

Delete Extra Content Template

To delete an Extra Content Template, open the “Manage Extra Content Template” window.

  • Select the template to be deleted
  • Click Delete
  • A warning message appears; click OK
  • Click OK again.

Note: Extra Content Templates are linked to the data within profiles. If you delete an Extra Content Template, all associated data within your profiles will be deleted as well.

Modify Extra Content Template

You can modify templates at any time so you can continue to enrich your profiles, as follows:

Note: If you delete an Extra Content Template element, this component will be deleted in every profile associated with this template.  Learn more about deleting Extra Content Templates.

Rename Extra Content Template

To rename an Extra Content Template, open the “Manage Extra Content Template” window.

  • Select the template to be renamed
  • Click Rename…
  • A new dialog box appears, enter the new name
  • Click OK

Copy Extra Content Template

To copy/duplicate an Extra Content Template, open the “Manage Extra Content Template” window.

  • Select the template to be copied
  • Click Copy…
  • A new dialog box appears, enter the name of the new template
  • Click OK

Copying an Extra Content Template can be quite useful when you want to modify an existing template without impacting all associated profiles.  Create a new but similar template, and then migrate profiles to the new template one by one.

Copying is also a way to “backup” a template before modifying it.

Assign Extra Content To Profile

Link an Extra Content Template to a profile as follows:

  • In the Document Library tree, select the profile
  • Right-click on it and select Assign Extra Content Template…
  • Select the Extra Content Template you want to assign to this profile
  • Click OK

You can now add Extra Content to your profile based on the newly assigned template.

Unlink Extra Content Template From Profile

Unlink an Extra Content Template from a profile as follows:

  • In the Documents screen, select the profile
  • Right-click on it and select Assign Extra Content Template…
  • Select None if you want to remove any extra content
  • Click OK

Importing Profile Containing Extra Content

Workgroup automatically manages Extra Content Templates when you import profiles. If the template is not already available, it will be imported along with the profile.

Extra Content and Gap Analysis

Extra Content can be included in the Gap Analysis process.

Ensure that both profiles are using the same Extra Content Template. Extra content will automatically appear in the list of attributes available for Gap Analysis. Learn about Gap Analysis attributes.

Profile Reports

Generate profile reports of an interface specification:

  1. Under the Documents screen, right-click a profile and select Export Profile, To Word Document… A new screen opens.

    Workgroup_Export_WordDocument

  2. Select the trigger events, segments, data types, and tables you want to include in your report. Click Apply.
  3. Then browse to the destination to save the .docx document and enter a File name. Click Save.
  4. Microsoft Word opens, and you’ll be asked to update fields; click Yes. The report is displayed.
  5. Scroll to navigate within the report.
  6. Click the hyperlinks within the document to jump to specific sections of the report for segments, data types, and tables.
  7. You can open the document directly using Microsoft Word 2007 or later.

Note: You can also sync your profile. This feature allows a user to update the Word document directly and synchronize the profile library with the updated document content.

Edit Selected Element’s Attributes

The Attributes tab describes an element’s attributes.

RESTRICTED VALUES: Optional. Restrictions are used to define acceptable values for XML attributes.


Types and Schematrons

From the Actions menu, you’ll have access to:

XML Type Editor

Overview

Complex types describe the permitted content of an element, including its element and text children and its attributes. A complex type definition consists of a set of attribute uses and a content model. The types of content model include element-only content, in which no text may appear (other than whitespace, or text enclosed by a child element); simple content, in which text is allowed but child elements are not; empty content, in which neither text nor child elements are allowed; and mixed content, which permits both elements and text to appear. A complex type can be derived from another complex type by restriction (disallowing some elements, attributes, or values that the base type permits) or by extension (allowing additional attributes and elements to appear).

XML Type Editor in Workgroup works as follows:

XML Type Editor Overview

Create a New Type

  1. Right-click in the Types tab, then select Create New Type
  2. In the Create New Type dialog, provide:
    1. Name: Specify the name of the new type
    2. Based on: Check if you want your new type to inherit from another type.
      1. Extension: Allows additional attributes and elements to appear.
      2. Restriction: Disallows some elements, attributes, or values that the base type permits.
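For instance, a type derived by extension might look like this in XML Schema (the type and element names are illustrative):

```xml
<xs:complexType name="PersonType">
  <xs:sequence>
    <xs:element name="Name" type="xs:string"/>
  </xs:sequence>
</xs:complexType>

<!-- Derived by extension: PatientType keeps Name and adds an element and an attribute -->
<xs:complexType name="PatientType">
  <xs:complexContent>
    <xs:extension base="PersonType">
      <xs:sequence>
        <xs:element name="MedicalNumber" type="xs:string"/>
      </xs:sequence>
      <xs:attribute name="Status" type="xs:string"/>
    </xs:extension>
  </xs:complexContent>
</xs:complexType>
```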

 

Edit an Existing Type

The Types tab describes the structure of a type. You can add the following elements to the structure of a type.

Element: A complex element is an XML element that contains other elements and/or attributes.
Element Group: The group element is used to define a group of elements to be used in complex type definitions.
Sequence: The sequence element specifies that the child elements must appear in a sequence. Each child element can occur from 0 to any number of times.
Choice: The choice element allows only one of the elements contained in the declaration to be present within the containing element.
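A content model combining these structural elements might look like this (names are illustrative):

```xml
<!-- A sequence whose second particle is a choice: Name first,
     then exactly one of Phone or Email -->
<xs:complexType name="ContactType">
  <xs:sequence>
    <xs:element name="Name" type="xs:string"/>
    <xs:choice>
      <xs:element name="Phone" type="xs:string"/>
      <xs:element name="Email" type="xs:string"/>
    </xs:choice>
  </xs:sequence>
</xs:complexType>
```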

The Definition tab describes an element’s properties.

Name: Specifies a name for the element. This attribute is required if the parent element is the schema element.
Type: Optional. Specifies either the name of a built-in data type, or the name of a simpleType or complexType element.
Min Occurs: Optional. Specifies the minimum number of times this element can occur in the parent element. The value can be any number >= 0. Default value is 1. This attribute cannot be used if the parent element is the schema element.
Default: Optional. Specifies a default value for the element (can only be used if the element’s content is a simple type or text only).
Fixed: Optional. Specifies a fixed value for the element (can only be used if the element’s content is a simple type or text only).
Description: Optional. Describes the element in natural language.

The Attributes tab describes an element’s attributes.

SOURCE: Specifies the attribute’s owner.
ID: Specifies a unique ID for the attribute.
TYPE: Optional. Specifies a built-in data type or a simple type. The type attribute can only be present when the content does not contain a simpleType element.
USE: Optional. Specifies how the attribute is used. Can be one of the following values:

  • optional – the attribute is optional (this is the default)
  • prohibited – the attribute cannot be used
  • required – the attribute is required

DEFAULT: Optional. Specifies a default value for the attribute. Default and fixed attributes cannot both be present.
FIXED: Optional. Specifies a fixed value for the attribute. Default and fixed attributes cannot both be present.
DESCRIPTION: Optional. Describes the attribute in natural language.
RESTRICTED VALUES: Optional. Restrictions are used to define acceptable values for XML attributes.
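As an illustration, attribute declarations using USE, DEFAULT, and restricted values might look like this in XML Schema (names are illustrative):

```xml
<!-- Optional attribute with a default and a restricted value set -->
<xs:attribute name="Status" use="optional" default="Draft">
  <xs:simpleType>
    <xs:restriction base="xs:string">
      <xs:enumeration value="Draft"/>
      <xs:enumeration value="Approved"/>
    </xs:restriction>
  </xs:simpleType>
</xs:attribute>

<!-- Required attribute using a built-in data type -->
<xs:attribute name="Id" type="xs:ID" use="required"/>
```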

XML Schematron Editor

Overview

Schematron is a rule-based validation language for making assertions about the presence or absence of patterns in XML trees. It is a structural schema language expressed in XML using a small number of elements and XPath.

Schematron is capable of expressing constraints in ways that other XML schema languages like XML Schema and DTD cannot. For example, it can require that the content of an element be controlled by one of its siblings. Or it can request or require that the root element, regardless of what element that is, must have specific attributes. Schematron can also specify required relationships between multiple XML files.

Constraints and content rules may be associated with “plain-English” validation error messages, allowing translation of numeric Schematron error codes into meaningful user error messages.

XML Schematron Editor in Workgroup works as follows:

XML Schematron Editor Overview

An Introduction to Schematron*

The Schematron schema language differs from most other XML schema languages in that it is a rule-based language that uses path expressions instead of grammars. This means that instead of creating a grammar for an XML document, a Schematron schema makes assertions that are applied to a specific context within the document. If the assertion fails, a diagnostic message that is supplied by the author of the schema can be displayed.

One advantage of a rule-based approach is that, in many cases, a constraint written in plain English can easily be turned into Schematron rules. For example, a simple content model can be written like this: “The Person element in the XML instance document should have an attribute Title and contain the elements Name and Gender in that order. If the value of the Title attribute is ‘Mr’ the value of the Gender element must be ‘Male’.”

In this sentence the context in which the assertions should be applied is clearly stated as the Person element while there are four different assertions:

  • The context element (Person) should have an attribute Title
  • The context element should contain two child elements, Name and Gender
  • The child element Name should appear before the child element Gender
  • If attribute Title has the value ‘Mr’, the element Gender must have the value ‘Male’

In order to implement the path expressions used in the rules in Schematron, XPath is used with various extensions provided by XSLT.

It has already been mentioned that Schematron makes various assertions based on a specific context in a document. Both the assertions and the context make up two of the four layers in Schematron’s fixed four-layer hierarchy:

  1. phases (top-level)
  2. patterns
  3. rules (defines the context)
  4. assertions

Assertions

The bottom layer in the hierarchy is the assertions, which are used to specify the constraints that should be checked within a specific context of the XML instance document. In a Schematron schema, the typical element used to define assertions is assert. The assert element has a test attribute, which is an XSLT pattern. In the preceding example, there were four assertions made on the document in order to specify the content model, namely:

  • The context element (Person) should have an attribute Title
  • The context element should contain two child elements, Name and Gender
  • The child element Name should appear before the child element Gender
  • If attribute Title has the value ‘Mr’, the element Gender must have the value ‘Male’

Written using Schematron assertions this would be expressed as

Type     Test                                                      Text
Assert   @Title                                                    The element Person must have a Title attribute.
Assert   count(*) = 2 and count(Name) = 1 and count(Gender) = 1    The element Person should have the child elements Name and Gender.
Assert   *[1] = Name                                               The element Name must appear before element Gender.
Assert   (@Title = 'Mr' and Gender = 'Male') or @Title != 'Mr'     If the Title is "Mr" then the gender of the person must be "Male".

 

If you are familiar with XPath, these assertions are easy to understand, but even for people with limited experience using XPath they are rather straightforward. The first assertion simply tests for the occurrence of an attribute Title. The second assertion tests that the total number of children is equal to 2 and that there is one Name element and one Gender element. The third assertion tests that the first child element is Name, and the last assertion tests that if the person’s title is ‘Mr’, the gender of the person must be ‘Male’.

If the condition in the test attribute is not fulfilled, the content of the assertion element is displayed to the user. 

Each of these assertions has a condition that is evaluated, but the assertion does not define where in the XML instance document this condition should be checked. For example, the first assertion tests for the occurrence of the attribute Title, but it is not specified on which element in the XML instance document this assertion is applied. The next layer in the hierarchy, the rules, specifies the location of the contexts of assertions.

The Assert type element is used to tag positive assertions about a document.

The Report type is used to tag negative assertions about a document.

Rules

The rules in Schematron are declared by using the rule element, which has a context attribute. The value of the context attribute must match an XPath Expression that is used to select one or more nodes in the document. Like the name suggests, the context attribute is used to specify the context in the XML instance document where the assertions should be applied. In the previous example the context was specified to be the Person element, and a Schematron rule with the Person element as context would simply be

Id    Abstract    Context
      False       Person

 

Since the rules are used to group all assertions together that share the same context, the rules are designed so that the assertions are declared as children of the rule element. For the previous example, this means that the complete Schematron rule would be

<rule context="Person">
  <assert test="@Title">The element Person must have a Title attribute.</assert>
  <assert test="count(*) = 2 and count(Name) = 1 and count(Gender) = 1">The element Person should have the child elements Name and Gender.</assert>
  <assert test="*[1] = Name">The element Name must appear before element Gender.</assert>
  <assert test="(@Title = 'Mr' and Gender = 'Male') or @Title != 'Mr'">If the Title is "Mr" then the gender of the person must be "Male".</assert>
</rule>

 

This means that all the assertions in the rule will be tested on every Person element in the XML instance document. If the context is not all the Person elements, it is easy to change the XPath location path to define a more restricted context. The value Database/Person, for example, sets the context to be all the Person elements that have the element Database as its parent.


Patterns

The third layer in the Schematron hierarchy is the pattern, declared using the pattern element, which is used to group together different rules. The pattern element also has a name attribute that will be displayed in the output when the pattern is checked. For the preceding assertions, you could have two patterns: one for checking the structure and another for checking the co-occurrence constraint. Since patterns group different rules together, Schematron is designed so that rules are declared as children of the pattern element. This means that the previous example, using the two patterns, would look like

<pattern name="Check structure">
  <rule context="Person">
    <assert test="@Title">The element Person must have a Title attribute.</assert>
    <assert test="count(*) = 2 and count(Name) = 1 and count(Gender) = 1">The element Person should have the child elements Name and Gender.</assert>
    <assert test="*[1] = Name">The element Name must appear before element Gender.</assert>
  </rule>
</pattern>
<pattern name="Check co-occurrence constraints">
  <rule context="Person">
    <assert test="(@Title = 'Mr' and Gender = 'Male') or @Title != 'Mr'">If the Title is "Mr" then the gender of the person must be "Male".</assert>
  </rule>
</pattern>

 

The name of the pattern will always be displayed in the output, regardless of whether the assertions fail or succeed. If the assertion fails, the output will also contain the content of the assertion element. However, there is also additional information displayed together with the assertion text to help you locate the source of the failed assertion. For example, if the co-occurrence constraint above was violated by having Title=’Mr’ and Gender=’Female’ then the following diagnostic would be generated by Schematron:

From pattern "Check structure":
From pattern "Check co-occurrence constraints":
Assertion fails: "If the Title is "Mr" then the gender of the person must be "Male"."
at /Person[1]
<Person Title="Mr">...

 

The pattern names are always displayed, while the assertion text is only displayed when the assertion fails. The additional information starts with an XPath expression that shows the location of the context element in the instance document (in this case the first Person element) and then on a new line the start tag of the context element is displayed.

The assertion to test the co-occurrence constraint is not trivial, and in fact this rule could be written in a simpler way by using an XPath predicate when selecting the context. Instead of having the context set to all Person elements, the co-occurrence constraint can be simplified by only specifying the context to be all the Person elements that have the attribute Title=’Mr’. If the rule was specified using this technique, the co-occurrence constraint could be described like this

<rule context="Person[@Title='Mr']">
  <assert test="Gender = 'Male'">If the Title is "Mr" then the gender of the person must be "Male".</assert>
</rule>

 

By moving some of the logic from the assertion to the specification of the context, the complexity of the rule has been decreased. This technique is often very useful when writing Schematron schemas.

*[Reference: www.xml.com/pub/a/2003/11/12/schematron.html]

Gap Analysis

What is Gap Analysis?

Gap analysis is an HL7 interface scoping activity. When you build an HL7 interface, before jumping into the code, you need to understand what data you are going to play with. Most importantly, you need to understand the differences between source and destination systems at the messaging level. Before jumping into integration engine configuration, you need to know what to configure. Some of your questions are likely to include the following:

  • Are there any differences between the message structures in each system? If so, what are they?
  • Are there any mandatory data elements on one side that are optional on the other? If so, what are they?
  • Do both systems use the same code sets? Are they the same values? Which values do I need to map?
  • Is the data semantically consistent? In other words, does the meaning or significance of an element always match across both systems?

These are often challenging questions to answer.
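Conceptually, answering these questions amounts to comparing the same element's definition across both systems. The sketch below illustrates the idea in JavaScript; the findFieldGaps helper and the sample field definitions are hypothetical, not the Workgroup engine:

```javascript
// Conceptual sketch: compare one field's definition from two profiles
// and report the differences (the "gaps") between them.
function findFieldGaps(fieldA, fieldB) {
  const gaps = [];
  if (fieldA.optionality !== fieldB.optionality) {
    gaps.push('Optionality: ' + fieldA.optionality + ' vs ' + fieldB.optionality);
  }
  if (fieldA.length !== fieldB.length) {
    gaps.push('Length: ' + fieldA.length + ' vs ' + fieldB.length);
  }
  // Code set values present in A but missing in B must be mapped.
  const missing = fieldA.codeSet.filter(v => fieldB.codeSet.indexOf(v) === -1);
  if (missing.length) gaps.push('Code set values to map: ' + missing.join(', '));
  return gaps;
}

// Hypothetical PID.8 (sex) definitions from a source and a destination system:
const gaps = findFieldGaps(
  { optionality: 'Required', length: 1, codeSet: ['M', 'F', 'U'] },
  { optionality: 'Optional', length: 1, codeSet: ['M', 'F'] }
);
```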

The Gap Analysis functionality in Workgroup helps you identify these differences in a matter of seconds. Gap Analysis enables the following:

Find Gaps

Gap Analysis Setup

Determine gaps between 2 profiles or between a profile and a set of messages.

Compare 2 Profiles

To list the differences (including differences in data structure and data content) between two profiles, follow this procedure:

  • Open Workgroup and navigate to the Documents screen.
  • In the menu bar, go to GAP ANALYSIS, then click on New… .
  • In the Gap Analysis window, select the profiles you want to compare.
  • Select which Gap Analysis Filter you want to use for comparison.
  • Click Next.

Both profiles are loaded and you are taken to the Gap Analysis Workbench. You can now refine your comparison criteria. By default, data elements are not selected. Select the data elements (data structure and/or code sets) you want to compare. See Refine Gap Analysis Criteria for more information.

Compare a Profile with HL7 Messages

To list the differences (including differences in data structure and data content) between a profile and a set of HL7 messages (potentially a few thousand), follow this procedure:

  • Open Workgroup and navigate to the Documents screen.
  • In the menu bar, go to GAP ANALYSIS, then click on New… .
  • In the Gap Analysis window:
    • Select the profile you want to compare in the References section.
    • Click the Compared HL7 logs tab.
    • In the Compared HL7 logs tab, click the Add… button to add message files.
    • Select one or more files and click Open.
    • Optional: Check Use Large File mode when loading files above 10 MB in size. (This option is selected automatically if the file size reaches 25 MB. It also deactivates the Sort, Replace, and Edit Message features.)
  • Select which Gap Analysis Filter you want to use for comparison.
  • Click Next.

The profile and the HL7 messages are loaded and messages are analyzed. Depending on the number of messages you provided, the message analysis might take several minutes. A progress window tracks the process.

Once the loading process is complete, you are taken to the Gap Analysis Workbench. You can now refine your comparison criteria. By default, no data element is selected.

  • Select the data elements (data structure and/or code sets) you want to compare.
  • View/Filter Messages helps you refine the gap on specific messages.

Gap Analysis Filter


Gap Analysis Filters are used to remove irrelevant gaps. Each filter contains a set of preset options that optimize the Gap Analysis detection process so that you see only the “dangerous” gaps. A Gap Analysis Filter contains:

When you start a new Gap Analysis, after selecting the profiles to compare, you will be asked to select a Gap Analysis Filter.

Conformance_GapAnalysisFilter_v3

Pre-defined Filters

There are 4 pre-defined Filters that can be used.

Bidirectional

This filter should be used when both systems exchange messages between each other.

A To B

This filter should be used when the first system sends messages to the second system.

A From B

This Filter should be used when the first system receives messages coming from the second system.

Compare two product versions

This filter should be used when you want to compare profiles representing the same system. For example, comparing reverse-engineered profiles built from sample messages of your development and production environments.

Custom Filter

While working with the Gap Analysis Workbench, you can edit computed attributes, options, and difference filters. These can then be saved as a Custom Filter, which can be re-used for other Gap Analyses.

In the Gap Analysis Filter Selection window, you can select a recent Gap Analysis Filter, or load a previously saved filter from your Library.

You can also set a default filter for subsequent Gap Analyses, in which case you will not be asked to select a Gap Analysis Filter again. You may apply another Gap Analysis Filter at any time in the Gap Analysis Workbench with File > Gap Analysis Filter > Change Filter….

Gap Analysis Workbench


Here is a quick look at the Gap Analysis Workbench.

GapAnalysis_GapAnalysisWorkbench_v3

1 – Structure/Data Element: In this section, you choose which elements from your profiles will be compared.

2 – Attributes: In this section, you choose which attributes, from the previously selected elements, will be compared.

3 – Options: In this menu, you can set options to improve the accuracy of the Gap Analysis comparison process.

4 – Differences Filters: Differences Filters are used to show differences that match specific criteria; in other words, they discard the differences that aren’t relevant to your analysis.

5 – Gap Analysis Results: In this section, you will see all differences between the selected elements of your profiles, based on your Gap Analysis filter (Attributes, Options, Differences Filters).

Using Gap Analysis Results

Gaps Serve As a To-Do List

Gap Analysis in Workgroup helps you focus on identifying and scoping differences upfront, instead of spending time downstream validating an overly generic interface. The gaps you find are effectively a to-do list of items you need to handle when configuring the interface. Each to-do list item will need to be handled in one of several ways:

  • You can manage a gap within your integration engine using filtering, mapping, and/or transformation logic
  • You can adjust the sending system so it sends information compliant with the receiving application or the integration engine
  • You can ignore a gap if it doesn’t impact the system’s capability to handle needed information
  • You can see the definition of each element in both profiles by right-clicking a gap and selecting Go To Definition… (this option is not available when comparing with a log file)
  • You can ignore case when comparing element values; for example, “JOHN SMITH” will be considered equal to “John Smith”. Use the OPTIONS menu to change this setting

The to-do list aspect of Gap Analysis serves as a starting point for your project task list documentation. Create a document automatically using the Export as Excel Document functionality.

View Examples of Gap Occurrences

If a profile is created through reverse-engineering, you can view where the gaps in Optionality (for Segments and Fields) or Length (for Fields) come from by right-clicking on the cell and selecting View Examples… This will display all the messages where the gap occurred for these profiles.

Refine Gap Analysis Criteria

Refine Gap Analysis Criteria: Which Gaps Matter?

By default, when you first see the Gap Analysis Workbench, nothing is selected. When you run a Gap Analysis, you select the data elements that matter to your interface.

The Gap Analysis Workbench is split in 2 sections:

  • The left section contains the criteria (the list of data elements included in the comparison)
  • The right section contains the actual gaps

Criteria Section

[Screenshot: Criteria Section]

At the top of the Criteria Section, you’ll see the list of the messages, segments, fields, and data tables that are contained in the 2 profiles (or profile and messages) you are comparing. Select an element to include it in the Gap Analysis.

Before working through the examples below, choose HL7 v2.6 as the Reference profile and HL7 v2.1 as the Compared Profile.

Example #1: Get the List of Data Structure Differences in A01 Messages

  1. Navigate to the Criteria Section of the Gap Analysis Workbench
  2. Expand the ADT node
  3. Select the A01 trigger event
    • Click once (the checkbox is checked) to select the trigger event and all child nodes (segments, fields, components and subcomponents).
    • Shift-Click (the checkbox turns blue) to select the trigger event itself but not the child nodes (segments).
  4. Click Apply. The gap section on the right is then populated with gaps within the A01 data structure

 

Example #2: Get Data Content Differences in the Administrative Sex (0001) Tables

  1. In the Criteria Section, click the Data tab. This tab contains all the HL7 tables (code sets).
  2. Expand the “User Defined Tables” node
  3. Select 0001 – Administrative Sex table
  4. Click Apply. The table section is then populated with gaps across the 0001 – Administrative Sex data values

 

Example #3: Refine Field Comparisons

By default, comparisons within Gap Analysis are on all attributes. Depending on your project and/or your context, you might need to focus on a subset of attributes and remove others. You can refine the comparison algorithm to narrow your comparison as follows.

  1. In the Show Gaps Based On section on the bottom left, select the Field tab
  2. If you aren’t concerned about fields that are present in just one of the profiles, uncheck the Show missing entries in checkbox corresponding to that profile.
  3. Uncheck any attribute you don’t need to consider for your interface
  4. Click Apply

The comparison is updated using the active attributes. Once in the Gap Analysis Workbench, you can refine the criteria used to evaluate gaps.

Each HL7 message element is described by a set of attributes. The following table maps attributes to each message element (✓ = attribute applies):

Attribute      Trigger Event   Segment   Field   Table
Event          ✓
Name           ✓               ✓         ✓       ✓
Sequence                       ✓         ✓
Optionality                    ✓         ✓
Repetition                     ✓         ✓
Length                                   ✓
Data Type                                ✓
Table Id                                 ✓       ✓
Label                                            ✓
Comments       ✓               ✓         ✓       ✓

Refer to the Extra Content and Gap Analysis section for details around extra content and gap analysis.

Advanced Options

Gap Analysis Options

Several options are available in the Gap Analysis window.

Here is a list of basic options:

Hide Unused Columns: If enabled, this option hides columns referring to non-computed attributes. For example, if you don’t want to compare the length of fields, the LENGTH column in the Field section will be hidden from your gap analysis results.
Ignore Case: If enabled, this option compares strings using a case-insensitive algorithm.
Use Fuzzy Matching: If enabled, this option matches names that are similar to each other. For example, “Admit a patient” and “Admit Patient” will be considered equivalent.
Use Strict Usage Comparison: If enabled, this option considers each segment’s/field’s optionality as distinct. Otherwise, segments/fields that are not “Required” are all considered “Optional”.
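The Ignore Case and Use Fuzzy Matching options can be illustrated with a small sketch. This is not Workgroup’s internal algorithm; the threshold-based similarity check below is an assumption for illustration only:

```python
from difflib import SequenceMatcher
from typing import Optional

def values_match(a: str, b: str, ignore_case: bool = True,
                 fuzzy_threshold: Optional[float] = None) -> bool:
    """Compare two element values roughly in the spirit of the
    Ignore Case and Use Fuzzy Matching options."""
    if ignore_case:
        a, b = a.lower(), b.lower()
    if a == b:
        return True
    if fuzzy_threshold is not None:
        # Fuzzy matching: treat near-identical names as equivalent.
        return SequenceMatcher(None, a, b).ratio() >= fuzzy_threshold
    return False

print(values_match("JOHN SMITH", "John Smith"))                               # True
print(values_match("Admit a patient", "Admit Patient", fuzzy_threshold=0.8))  # True
```

With case folding alone, “JOHN SMITH” equals “John Smith”; with a similarity threshold, “Admit a patient” also matches “Admit Patient”.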

The following sections cover more advanced options that help you get the most out of Gap Analysis:

Extra Content Considerations

Gap Analysis and Extra Content

You can include Extra Content in the Gap Analysis process under the following conditions:

  • Compare 2 profiles, not a profile and a set of messages. No Extra Content is collected during the analysis of HL7 messages.
  • Both profiles must use the same Extra Content template.

Once these two conditions are met, Extra Content elements are managed in exactly the same way as other elements, and gaps in Extra Content elements will also be displayed.

Save the Gap Analysis Workbench Current State

Saving a Gap Analysis

You may want to save the current state of the gap analysis workbench to continue work later. To do so:

  1. In the FILE menu, select Save As…
  2. Choose a name for the file and a location
  3. Click Save

A .cxg file describing the current state of the gap analysis workbench is created. You can then reopen it:

  1. From the Documents view, click the GAP ANALYSIS menu and select Open, From Existing File…
  2. Select your .cxg file and click Open.

Show All Elements

Gap Display

You can choose whether the workbench displays only the elements common to both profiles or all elements from either profile. To change this option:

  1. In the Gap Analysis Workbench, click on the OPTIONS menu item.
  2. Select Gap Display…

[Screenshot: Gap Display options]

Choose Show ONLY… to view the intersection set: the events and tables common to both profiles. Choose Show ALL… to view the union set: the events and tables present in either profile.

Differences Filters

Differences Filters

Differences Filters are used to show differences that match specific criteria; in other words, to discard the differences that don’t match these criteria.

This can be used, for instance, to show only differences where the Field is Required in the Receiving Application but Optional (or Missing) in the Sending Application.

Add Filters

  1. Click the empty filter button within the section for which you want to filter differences. A filter dialog will be shown.
  2. Click the Add… button to add a new filter.
  3. Edit filter settings.
  4. Click the OK button when you’re done editing your filters. The filters will be applied, showing only differences which match your criteria.

 [Screenshot: Differences Filters]

 If a section contains active filters, the filter button is shown as a filled filter icon.

Filter Settings

Basic Settings 

Source: Select the side from which you want to perform a filter.
Column: Select the column from which you want to get the value to be compared.
Is/Is Not: Include/Exclude differences that match the filter.
Operator: Select the operator used to compare the column’s value against the criteria.
Criteria: Enter the criteria that you want to compare with the column’s value.

Advanced Settings

Checkbox: Activate or deactivate filter (toggle on or off).
And/Or: 

AND: applies both these filters.

OR: applies either of these filters.

Parentheses: Used for nested filters.

 

Operators

=: Covers values with an exact match to this data (like putting quotation marks around a search engine query).
>: Greater than. Covers filtering on numeric values.
>=: Greater than or equal to. Covers filtering on numeric values.
<: Less than. Covers filtering on numeric values.
<=: Less than or equal to. Covers filtering on numeric values.
containing: Covers messages that include this value.
present: Looks for the presence of a particular column.
empty: Looks for an unpopulated column.
matching regex: Use .NET regular expression syntax to build filters. For advanced users with programming backgrounds.
in: Builds a filter on multiple data values rather than just one value.
= Other Specification Value: Exact match to the other profile’s column value.
> Other Specification Value: Greater than the other profile’s column value. Covers filtering on numeric values.
>= Other Specification Value: Greater than or equal to the other profile’s column value. Covers filtering on numeric values.
< Other Specification Value: Less than the other profile’s column value. Covers filtering on numeric values.
<= Other Specification Value: Less than or equal to the other profile’s column value. Covers filtering on numeric values.
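To make the operator semantics concrete, here is a minimal sketch of how a few of these operators could be evaluated against a column value. The operator names and behavior come from the list above; the implementation itself is illustrative and not Workgroup’s:

```python
import re

# Illustrative predicates for a few of the filter operators listed above.
OPERATORS = {
    "=":              lambda value, criteria: value == criteria,
    "containing":     lambda value, criteria: criteria in (value or ""),
    "present":        lambda value, criteria: value is not None,
    "empty":          lambda value, criteria: not value,
    "matching regex": lambda value, criteria: re.search(criteria, value or "") is not None,
}

def matches(value, operator, criteria=None, negate=False):
    """Evaluate one filter; negate implements the Is / Is Not setting."""
    result = OPERATORS[operator](value, criteria)
    return (not result) if negate else result

print(matches("Required", "=", "Required"))                # True
print(matches("PID.5.1", "matching regex", r"^PID\.\d+"))  # True
print(matches("Optional", "=", "Required", negate=True))   # True (Is Not)
```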


Basic/Advanced Mode

While editing your filters, you can switch between Basic and Advanced Mode. Advanced Mode shows advanced settings for your filters. These settings help in the construction of more complex filters using AND/OR operators and parentheses for nesting. Otherwise, each filter will be applied one after the other.

If your filters contain advanced settings and you switch back to the Basic Mode, these settings will be lost.

Differences Filter Template

Differences Filter Templates are re-usable filters that can be applied to many Gap Analyses. A built-in template can be selected from the drop-down list at the top-left of the filters dialog.

Hide a Difference from the Gap Analysis Result section

You can hide a difference (a Gap Analysis Result row) automatically. To do so, right-click the row you want to hide, then click “Hide [row key] difference”. This adds a new difference filter entry and hides the selected row.

Export Gaps as Excel Document

Gap Analysis in Excel

Gaps serve as a to-do list of items you need to handle when configuring the interface. The list of gaps serves as a starting point for project task list documentation. To export gaps as an Excel document:

    1. In the Gap Analysis Workbench, select FILE, Export as Excel Document…

Microsoft Excel (or the program associated with .xlsx documents) will automatically start.

Message Comparison

Use Cases

Message comparison helps you compare 2 sets of messages at the data level.  This is useful in several cases, such as:

  • In a conversion project, you want to identify the transformations performed on messages without looking at the transformation code.  In this case, you compare input and output messages and look for changed messages.  The highlighted differences will show you what transformations were performed.  It’s a quick and easy way to gather requirements.
  • During the validation phase, you compare transformed messages with another set of messages you already know are valid (golden message set).  The highlighted differences will indicate any issues in your code or any missing transformations.  This is a quick and easy way to validate that your code fulfills the requirements.

Comparing Messages

To compare a set of HL7 messages:

  1. Go to GAP ANALYSIS, Message Comparison…
  2. Click the Select messages to compare… zone
  3. Add the messages you want to compare.  Messages can come from:
      File:  Click Add… to add one or several files containing messages
      Database:  Select a database to query and from which to retrieve messages.
      Integration Engine:  Select an integration engine data depot (Ensemble, Rhapsody, Iguana, Mirth and others) to retrieve messages directly from the integration engine (connector required).
  4. Do the same for the other message set, clicking the other Select messages to compare… zone on the right.

Once the comparison is complete, differences are highlighted in red and the total number of differences between messages is displayed. 

1-on-1 Comparison

For a more detailed view of a message pair or message differences, double-click the message pair you want to compare.  Navigate through the tree view, field by field, to see the differences.

Click on the gray zone at the bottom of the screen to view more details about each difference.  Double-clicking on a grid row helps you navigate through the differences.

Matching Fields

By default, messages will be compared based on their position.  The first message on the left is compared with the first message on the right, the second with the second and so on. 

Since message files don’t always contain the same number of messages, and messages are not necessarily sorted in the same order, you can configure the application to match messages based on field values.  To configure the message matching criteria:

  1. Move your mouse pointer over the field you want to use for message matching
  2. Right-click and select Match Messages Using this Field

Alternatively, you can:

  1. Go to TOOLS, Options
  2. Select the Message Comparisons tab
  3. Select Field values. Add a list of fields used to match messages
  4. Click Add…
  5. Change the new line that appears to the field needed to match messages
  6. Repeat step #4 and #5 with the next field if you want to use more than one matching field
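The position-based and field-based matching strategies can be sketched as follows. The minimal `field` helper is a simplified stand-in for Workgroup’s HL7 parser (when splitting an MSH segment on `|`, index 9 of the split corresponds to MSH-10, the message control ID, a common matching field, because MSH-1 is the separator character itself):

```python
def field(message: str, segment: str, index: int) -> str:
    """Naive field extraction: split a segment line on the | separator."""
    for line in message.splitlines():
        parts = line.split("|")
        if parts[0] == segment:
            return parts[index] if index < len(parts) else ""
    return ""

def match_by_field(left_msgs, right_msgs, segment="MSH", index=9):
    """Pair messages by a field value instead of by position."""
    right_by_key = {field(m, segment, index): m for m in right_msgs}
    return [(m, right_by_key.get(field(m, segment, index))) for m in left_msgs]

left = ["MSH|^~\\&|SYS||||||ADT^A01|MSG001", "MSH|^~\\&|SYS||||||ADT^A01|MSG002"]
right = ["MSH|^~\\&|SYS||||||ADT^A01|MSG002", "MSH|^~\\&|SYS||||||ADT^A01|MSG001"]
pairs = match_by_field(left, right)
print(pairs[0][1].endswith("MSG001"))  # matched despite the different file order
```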

Include/Exclude Fields from comparison

You may want to exclude fields from the comparison so they are simply not considered in the comparison.  This allows you to ignore differences in fields you don’t need to consider.

To exclude fields from comparison:

  1. Move your mouse pointer over the field you want to exclude
  2. Right-click and select Add to Exclude Filters

 Alternatively, you can:

  1. Go to TOOLS, Options
  2. Select the Filters tab (the Filters tab is also accessible from the Message Comparison screen by clicking the filter icon in the upper-right corner)
  3. Make sure Exclude is selected
  4. Click Add…
  5. Change the new line that appears to the field to be excluded
  6. Repeat step #4 and #5 to exclude more fields

It can be easier to provide a list of fields to include instead of excluding a large number of fields.  The procedure is similar.  In the Filters tab, make sure Include (instead of Exclude) is selected.

To set a large number of fields in one operation,  use the 1-on-1 message comparison screen.  For example, if you want to compare fields PID.2 to PID.13:

  1. Go to the 1-on-1 message comparison by double-clicking on a message pair
  2. Expand the PID segment so you can view all fields
  3. Select PID.2 to PID.13 holding down the SHIFT key
  4. Right-click on the selection zone and select Switch to Include Filter and Set Only This Field
  5. Close the window

The comparison will refresh using the new field set.

Hide/Show what matters

After the comparison is completed, message pairs can have one of the following statuses:

  • Changed:  Matching message found and one or more differences were found
  • Unmatched:  No matching message found
  • Identical:  Matching message found and no differences were found

On the bottom left of the screen, the  message pair count for each status is listed. 

Message pairs can be shown/hidden based on their status.  For instance, to hide identical messages:

  1. On the bottom left of the screen, select the identical message status
  2. Select Hide identical messages

Identical messages are filtered so only changed and unmatched messages are listed. 

Difference Report

An Excel or PDF report can be generated to document the status of all messages.  This report can be used, for instance, to document that the transformation code met all requirements at some point in time.

 To generate this report:

  1. Go to FILE, Create Report
    1. To Excel…
    2. To PDF…

 The report contains:

  • Comparison timestamp
  • Files compared
  • Number of messages for each status
  • Message matching details
  • Field filters details
  • Differences

Settings

Automatically apply changes: If checked, the differences will be recalculated each time a significant setting changes.

Treat missing and empty fields as equivalent: If checked, the comparison algorithm will consider missing and empty fields as equivalent.

Ex:

‘OBX||AD|||||’ and ‘OBX||AD’ will not be flagged as different.

‘PID|||||Smith^John^’ and ‘PID|||||Smith^John’ will not be flagged as different.
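This equivalence can be sketched by normalizing trailing separators before comparing. This is an illustrative sketch, not Workgroup’s actual implementation:

```python
def normalize(segment: str) -> str:
    """Strip trailing component (^) and field (|) separators so that
    missing and empty trailing fields compare as equivalent."""
    fields = [f.rstrip("^") for f in segment.split("|")]
    while fields and fields[-1] == "":
        fields.pop()
    return "|".join(fields)

print(normalize("OBX||AD|||||") == normalize("OBX||AD"))                    # True
print(normalize("PID|||||Smith^John^") == normalize("PID|||||Smith^John"))  # True
```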

HL7 Messaging

Caristix Workgroup comes with several features that help you with HL7 messaging:

De-identify Messages

Introducing Message De-Identification with Caristix Workgroup Software

Caristix Workgroup helps interface analysts and engineers accurately de-identify HL7 data, covering all 18 HIPAA identifiers. Data can then be safely shared for such purposes as porting realistic data to a test system or staging area, providing realistic sample HL7 messages for interface scoping, and providing data for clinical and financial analytics.

The following features and functionality are included:

  • Maps identifiers to fields and segments
  • Maintains useful dates, preserving the patient’s overall record
  • Traces impacted message components
  • Produces a de-identification process report
  • Prevents re-identification

De-identification Concepts

Protecting Patient Data

One of the most important issues in healthcare IT is the protection of patient data. Regulation addresses patient privacy and the use of health information in many countries. In the US, HIPAA regulates the use of PHI (protected health information).

While protecting patient data, HL7 analysts need to share or redistribute HL7 production data for such purposes as porting realistic data to a test system or staging area, providing realistic sample HL7 messages for interface scoping, and providing data for clinical and financial analytics.

The Department of Health and Human Services (HHS) provides a HIPAA Privacy Rule booklet (PDF) that highlights the 18 criteria that can be used to identify patients. All 18 identifiers are categories of data that must be protected. Besides easily recognized personal information, care must be given to protect device identifiers and even IP addresses. De-identification techniques must cover all 18 identifiers.

Definitions

De-identification or Anonymization

This term refers to removing or masking protected information. The de-identification removes identifiers from a data set so that information can no longer be linked to a specific individual. In terms of health care information, all identifiers are removed from the information set including both personally identifiable information (PII) and protected health information (PHI).

Pseudonymization

As a subset of de-identification, pseudonymization replaces data elements with new identifiers. After that substitution, the initial subject can no longer be associated with the data set. In terms of health care information, patient information can be pseudonymized by replacing patient-identifying data with completely unrelated data, resulting in a new patient profile. The data appears complete and the data context is preserved, while patient information is completely protected.

Re-identification

A pseudonymized data set can be restored to its original state through re-identification. In re-identifying data, a reverse mapping structure (constructed as the data was pseudonymized) is applied. As an example, a pseudonymized data set could be sent for processing to an external system. Once that processed information is returned, the data could be re-identified and pushed to the correct patient file.
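Conceptually, pseudonymization with re-identification support amounts to maintaining a forward and a reverse mapping. The sketch below is a minimal illustration of the idea; the "PAT-" token format is invented for the example:

```python
import secrets

forward, reverse = {}, {}

def pseudonymize(patient_id: str) -> str:
    """Replace a real identifier with a stable pseudonym, recording the
    reverse mapping so the data can later be re-identified."""
    if patient_id not in forward:
        token = "PAT-" + secrets.token_hex(4)  # hypothetical token format
        forward[patient_id] = token
        reverse[token] = patient_id
    return forward[patient_id]

def re_identify(token: str) -> str:
    """Apply the reverse mapping to restore the original identifier."""
    return reverse[token]

token = pseudonymize("123456")
print(re_identify(token) == "123456")   # round-trips back to the original
print(pseudonymize("123456") == token)  # stable for the same patient
```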

Identifiers

Identifiers are data elements that can directly identify individuals. These include name, email address, telephone number, home address, social security number, and medical card number, among others. Two identifiers may be needed to identify a unique individual.

Quasi-identifiers

Data elements of this type do not directly identify an individual but may provide enough information to narrow the potential of identifying a specific individual. Gender, date of birth, and zip/postal code have been studied extensively in this context. There is a dependent relationship between quasi-identifiers and the type of data set of which they are a part. As an example, if all members of a data set are male, gender cannot be a meaningful quasi-identifier. In addition, quasi-identifiers are categorical in nature, with a finite set of discrete values. It’s relatively easy to search for individuals using quasi-identifiers.

Non-identifiers

Non-identifiers may contain an individual’s personal information but aren’t helpful in reconstructing the initial information. For example, an indicator of an allergy to pollen would be a non-identifying data element. The incidence of such an allergy is extremely high in the general population. Therefore this factor is not a good discriminator among individuals. Again, non-identifiers are dependent on data sets. In the right context, they may be used to identify an individual.

De-identifying HL7 Messages

Overview

De-identification in Workgroup works as follows:

[Diagram: De-identifying HL7 messages workflow]

Loading HL7 Messages

Load the HL7 message that requires de-identification:

  • On the menu bar, go to File, Open, Messages…
  • In the HL7 Log dialog box that opens, click the plus (+) sign.
  • Navigate to your log file and choose the file to be opened. Add more files by clicking the plus (+) sign.
  • Click Open. The file name will appear highlighted in the HL7 Log dialog box.
  • Click Next to continue.

The log is loaded in the Messages tab. The tab also indicates the number of messages in the viewing pane and the total number of messages in the file you loaded. The Original pane displays the log you loaded while the De-identified pane displays the de-identified log. The split screens scroll synchronously so that the data displayed is mirrored in the original and de-identified logs.

Resize vertically to change the quantity of data displayed in the viewing pane. Place the pointer on the line dividing the two panes and drag the window to increase or decrease its size. Click the Hide and Show buttons to hide or view panes as needed.

The fields and data types set for de-identification are highlighted in red for easy visibility.

De-identification Settings

On the left side of the screen are the de-identification settings listed under the Fields and Data Types tabs. Workgroup loads settings to cover the 18 HIPAA identifiers by default.

Fields Tab

  • Checkbox: the checkmark indicates an active rule. Uncheck to deactivate a rule.
  • SEGMENT: select a segment.
  • FIELD: select a field.
  • COMPONENT: select a component, if needed.
  • SUBCOMPONENT: select a subcomponent, if needed.
  • ID: sets the primary key.  Check it for any field uniquely identifying the patient.  For instance, if the patient is identified using PID.3.1 and PID.3.4, make sure there is a rule for each of those 2 fields and check both as ID.  In the same way, if you want to use patient name, gender, and date of birth as the patient identifier, make sure the ID check box is checked for all three fields.  Unchecking it changes how data is generated (a new patient would be created for each message).

Data Types Tab

  • Checkbox: the checkmark indicates an active rule. Uncheck to deactivate.
  • DATATYPE: select a datatype.
  • COMPONENT: select a component.

Value Generator Tab

Add or Remove Rules

To add a de-identification rule under Fields or Data Types:

  • Click the plus sign at the bottom of the list of selectors. A new line will appear.
  • Edit using  the dropdowns in each column.

To remove a setting, click the trashcan at the end of the line.

View Example and Save a De-identified HL7 Log File

Once you have created and configured all the selectors applicable to the HL7 log to be de-identified, click View Example at the bottom of the left hand panes. A preview of the de-identified log file will appear. Scroll through the log in the viewing pane to verify the potential results of the de-identification process.

Once reviewed and after applying any changes:

  • Click De-identify at the bottom of the left hand screen to the right of View Example to save the de-identified log file.
  • The Save Results dialog box will appear with a number of options. Click the appropriate radio button for that log file. The options include:
    • Save the file with the initial file structure
    • Divide the file into smaller chunks of a specific size in MB or number of messages
    • Keep everything in one file without splitting
  • Click Save and browse to the location to store the file in your Library and click OK. You can also save the file on your local computer by using Browse My Computer.
  • A window will open tracking the progress of the process.

Results Summary

Once the file is saved, a De-identification Process Report dialog box will open, asking whether you wish to create a de-identification process report. If you click Yes, you will be prompted to choose a location and a name for the generated PDF. Click Save and the file will be saved to the specified location; the De-identification Process Summary PDF will then open for review. You can also save the file on your local computer by using Browse My Computer.

Once a set of selectors have been chosen for the de-identification of a log file, that set can be saved for reuse.

  • In the drop-down menu under File in the upper menu bar, click Save De-Id Rules.
  • Choose a location and fill in a file name for the settings and click Save.

Once a log file has been opened, the saved de-identification rules can be applied by clicking Open, De-Id Rules in the drop-down menu under File in the top menu bar.

Generators

Generators refer to the data sources used to set de-identification values in Workgroup.

String: Insert a randomly generated string or a static value. You can set the length and other parameters.
Boolean: Insert a Boolean value (true or false).
Numeric: Insert a randomly generated number. You can set the length, decimals, and other parameters.
Date Time: Insert a randomly generated date-time value. You can set the range, time unit, format, and other parameters.
Table: Pull data from HL7-related tables stored in one of your profiles; useful for coded fields.
SQL Query: Pull data from a database based on an SQL query. You’ll be able to configure a database connection.
Text: Pull random de-identification data from a text file, for instance a list of names. Several file formats can be used: txt, csv, etc.
Excel: Pull random de-identification data from an Excel 2007 or later spreadsheet, for instance a list of names, addresses, and cities.
Use Original Value: Keep the field as-is. No de-identification rules will be applied.
Copy Another Field: Copy the contents of another field.
Unstructured Data: Find and replace sensitive data in free text fields, for instance a patient’s last name in physician notes.

Generator Settings

Each generator has its own settings, which you can edit from the Value Generator tab. Click on the generator name to navigate to the setting details.

Advanced Mode

Allows you to use more than one generator for a single field, edit the output format or preformat values. You can also set preconditions to conditionally apply the de-identification rule.

Preformat Value

(Only available in Advanced Mode)

 Use this to format the original value before it is processed.

This is useful for generators that include the original value or ID fields. Here are two usage examples:

a) In an unstructured data field,  you may wish to remove a value that is not contained elsewhere (not already cloaked in another field):

If you know the field may contain a reference to an ID defined as ‘ID-999999’, you would:

1.  Cloak the field using an Unstructured Data generator.

2. Set the following preformat for the unstructured data:

Find what:
ID-\d+

(Search for a text, anywhere in the field value, starting with ‘ID-‘ and followed by one or more numbers.)

Replace by:
ID-XXXX

(We set a static text to hide the ID but still keep the context of the text.)

b) If you have the same patient ID number in two systems, but formatted differently, you could  format them so that both systems change to the same ID format and  can both be recognized as the same patient. Having the same ID will provide continuity of the message flow for a patient (messages will be cloaked using the same fake data):

If, for example, PID.2 is defined like this for the two systems:

First system: ID-123456
Second system: 123-456

 You would need to:

a) Set the field PID.2 as an ID (by checking the ID column).

b) Define two preformats like this:

Find what:
^ID-(?<ID_Number>\d+)$

(We find an exact match for the format and set the numbers only in a group variable named ‘ID_Number’)

Replace by:
${ID_Number}

(We set only the number, removing the superfluous text)

   
Find what:
^(?<ID_Number_Part_1>\d+)-(?<ID_Number_Part_2>\d+)$
(Find an exact match for the format and capture the two number parts in group variables named ‘ID_Number_Part_1’ and ‘ID_Number_Part_2’)
Replace by:
${ID_Number_Part_1}${ID_Number_Part_2}
(Only the number, remove the superfluous text)

Now both systems will treat PID.2 as being ‘123456’ and match and cloak the messages properly as being the same patient.
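The two preformats above can be reproduced with Python’s `re` module for experimentation. Note the syntax difference from the .NET regex syntax Workgroup uses: named groups are written `(?P<name>...)` and backreferences in the replacement are `\g<name>` instead of `${name}`:

```python
import re

def preformat(value: str) -> str:
    """Apply both preformats so either system's ID normalizes to the
    same bare number."""
    # First system: 'ID-123456' -> keep only the digits.
    value = re.sub(r"^ID-(?P<ID_Number>\d+)$", r"\g<ID_Number>", value)
    # Second system: '123-456' -> join the two digit groups.
    value = re.sub(r"^(?P<Part_1>\d+)-(?P<Part_2>\d+)$",
                   r"\g<Part_1>\g<Part_2>", value)
    return value

print(preformat("ID-123456"))  # 123456
print(preformat("123-456"))    # 123456
```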

String

This generator creates a character string: a random string, a static value, or Lorem Ipsum text.

How to use the “String” generator to create random value:

  • Check the Random option.
  • Set the minimum length of the strings you want to generate. The minimum value for this setting is 0; a string with a length of 0 is equivalent to an empty string.
  • Set the maximum length of the strings you want to generate.
  • Include lowercase letters (a to z characters).
  • Include uppercase letters (A to Z characters).
  • Include digits (0 to 9 characters).
  • Include special characters. This allows you to include any character you want.
  • Include random blanks. Including random blanks means that you generate empty strings among the values for use in the field or data type.

How to use the “String” generator to set a static value:

  • Check the Static option.
  • Set the static value to be inserted.

How to use the “String” generator to set a Lorem Ipsum text:

  • Check the Lorem Ipsum option.
  • Set the minimum length of the strings you want to generate. The minimum value for this setting is 0; a string with a length of 0 is equivalent to an empty string.
  • Set the maximum length of the strings you want to generate.
  • Include random blanks. Including random blanks means that you generate empty strings among the values for use in the field or data type.
Example #1: Random; minimum length: 0; maximum length: 5; include random blanks: checked.
Generated values: XDZ, VOJHZ, (blank), BFAR

Example #2: Static; static value: MyNewValue.
Generated value: MyNewValue
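The String generator’s Random mode can be sketched as follows. This is a rough illustration only: the 25% blank rate and the character-pool handling are assumptions, not Workgroup’s documented behavior:

```python
import random
import string

def random_string(min_len=0, max_len=5, lowercase=False, uppercase=True,
                  digits=False, specials="", random_blanks=True):
    """Illustrative sketch of the String generator's Random mode."""
    if random_blanks and random.random() < 0.25:  # blank rate is an assumption
        return ""
    pool = ""
    if lowercase:
        pool += string.ascii_lowercase
    if uppercase:
        pool += string.ascii_uppercase
    if digits:
        pool += string.digits
    pool += specials
    if not pool:  # nothing to draw from: behave like a blank
        return ""
    length = random.randint(min_len, max_len)
    return "".join(random.choice(pool) for _ in range(length))

sample = random_string()
print(len(sample) <= 5 and all(c in string.ascii_uppercase for c in sample))
```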

Boolean

This generator creates a Boolean (True or False) value.

How to use the Boolean generator:

  • Random values
    • Generate True or False value randomly.
    • Include random blanks. Allowing random blanks will mean that you generate empty strings among the values for use in the field or data type.
  • Sequential list
    • Generate a sequence of True, False, True, False, True, etc.
    • Start new list. Always start the sequence with True.
    • Continue from previous list. If you run the De-Identification and it ends with True, next time it will start with False.
Example #1: Random values; include random blanks: unchecked.
Generated values: True, True, False, True, False

Numeric

This generator creates a number.

How to use the “Numeric” generator:

  • Random values
    • Randomly generate values between minimum and maximum limits.
    • Decimal. Set the decimal precision of the generated value. For example, a precision of 2 generates values such as 3.75.
    • Include random blanks. Including random blanks means that you generate empty strings among the values for use in the field or data type.
  • Sequential list
    • Generate a sequence of 0, 1, 2, 3, etc.
    • Decimal. Set the decimal precision of the generated value. For example, a precision of 2 generates values such as 3.75.
    • Increment by. The step to use between each generation. You can use a negative value.
    • Start new list. Always start with the minimum limit or the maximum limit if you’re using a negative increment.
    • Continue from previous list. If you run the De-Identification and it ends with 13, the next time, it will start with 14.
Example #1 settings:
  • Sequential list
  • Between: 10 and 1000
  • Decimal: 2
  • Increment by: 5
  • Start new list
Generated values:
10.34
15.2
20.85
25.39
30.12
Example #2 settings:
  • Random values
  • Between: 10 and 1000
  • Decimal: 0
  • Include random blanks: unchecked
Generated values:
353
942
359
626
967
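The sequential numeric behavior, including "Increment by" and "Continue from previous list", can be sketched as a Python generator. The `start_from` parameter is an illustrative stand-in for the saved state:

```python
def numeric_sequence(minimum, maximum, step, decimals=0, start_from=None):
    """Sketch of the sequential numeric generator. start_from mimics
    "Continue from previous list"; a negative step starts from the maximum."""
    if start_from is not None:
        value = start_from + step
    else:
        value = maximum if step < 0 else minimum
    while minimum <= value <= maximum:
        yield round(value, decimals)
        value += step

# "If it ends with 13, the next time it will start with 14" (step 1):
resumed = next(numeric_sequence(0, 100, 1, start_from=13))  # 14
```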

Date Time

This generator creates date and time values.

How to use the “Date time” generator:

  • Random values
    • Randomly generate values in a range between minimum and maximum limits of time unit (second, minute, etc.)
    • Based on.
      • Now. Will use the current date time as a reference.
      • Actual field value. Will use the date time value from the field in the original message.
      • A specific date. You can specify a date and time to use as a reference.
    • Date format. Set the format of the new date-time. Note that you have a choice of formats. You can also enter your own format manually.
    • Include random blanks. Allowing random blanks will mean that you generate empty strings among the values for use in the field or data type.
  • Sequential list
    • Generate a sequence of date-time like: 2013-12-12, 2013-12-13, 2013-12-14, 2013-12-15, etc.
    • Based on.
      • Now. Will use the current date time as a reference.
      • Actual field value. Will use the date time value from the field in the original message.
      • A specific date. You can specify a date and time to use as a reference.
    • Date format. Set the format of the new date-time. Note that you have a choice of formats. You can also enter your own format manually.
    • Increment by. The step to use between each generation. You can use a negative value and set a time unit (second, minute, etc.)
    • Start new list. Always start with the minimum limit, or the maximum limit if you’re using a negative increment.
    • Continue from previous list. If you run the De-Identification and it ends with 2013-12-13, next time, it will start with 2013-12-14.
Example #1 settings:
  • Random values
  • In a range between: 10 and 1000 days
  • Based on: A specific date (2012-01-01 00:00:00)
  • Date format: yyyyMMdd
  • Include random blanks: unchecked
Generated values (with description):
20120318 (reference date + 77 days)
20120614 (reference date + 165 days)
20140102 (reference date + 732 days)
20120212 (reference date + 42 days)
20130508 (reference date + 493 days)
Example #2 settings:
  • Sequential list
  • In a range between: 0 and 1440 minutes
  • Based on: A specific date (2012-01-01 09:15:30)
  • Date format: yyyyMMddHHmmss
  • Increment by: 15 minutes
Generated values (with description):
20120101091530 (reference date + 0 minutes)
20120101093030 (reference date + 15 minutes)
20120101094530 (reference date + 30 minutes)
20120101100030 (reference date + 45 minutes)
20120101101530 (reference date + 60 minutes)
Example #3 settings:
  • Sequential list
  • In a range between: 0 and 30 minutes
  • Based on: A specific date (2012-01-01 09:15:30)
  • Date format: yyyyMMddHHmmss
  • Increment by: 10 minutes
Generated values (with description):
20120101091530 (reference date + 0 minutes)
20120101092530 (reference date + 10 minutes)
20120101093530 (reference date + 20 minutes)
20120101094530 (reference date + 30 minutes)
20120101091530 (reference date + 0 minutes)

When the generator exceeds the maximum value (30), the sequence is reset starting at the minimum value (0).

Example #4 settings (manipulate date of birth):
  • Random values
  • In a range between: -3650 and 3650 days
  • Based on: Actual field value
  • Date format: yyyyMMdd
  • Include random blanks: unchecked
Original field value → Generated value:
19130113 → 19110213
19900909 → 20000812
19850909 → 19870514
19601020 → 19650218
19800317 → 19880617
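The random date-time behavior in Example #1 can be sketched in Python. The .NET-style format string yyyyMMdd used in the examples corresponds to %Y%m%d in Python's strftime; the function and parameter names are illustrative:

```python
import random
from datetime import datetime, timedelta

def random_datetime(reference, min_offset, max_offset, unit="days",
                    fmt="%Y%m%d"):
    """Sketch of the random date-time generator: shift the reference date by
    a random offset inside the configured range, then format the result."""
    offset = random.randint(min_offset, max_offset)
    shifted = reference + timedelta(**{unit: offset})
    return shifted.strftime(fmt)

# Example #1 settings: between 10 and 1000 days after 2012-01-01
ref = datetime(2012, 1, 1)
value = random_datetime(ref, 10, 1000)
```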

HL7 table

This generator pulls data from HL7-related tables stored in a profile. Read how to set the profile.

How to configure the generator to use the appropriate HL7 table:

  • Random values
    • Randomly generate values from an HL7 table.
    • Source. Select the profile containing the table.
    • Table. Select a table from which the value will be generated.
    • To access the table content, click on the Edit Table button. If you change the table content, the new table content will appear in the profile you select.
    • Restrict to values between. Will only use table entries that are within the specified limits.
    • Include random blanks. Allowing random blanks will mean that you generate empty strings among the values for use in the field or data type.
  • Sequential list
    • Generate a sequence of value starting with the first table entry.
    • Source. Select the profile containing the table.
    • Table. Select a table from which the value will be generated.
    • To access the table content, click on the Edit Table button. If you change the table content, the new table content will appear in the profile you select.
    • Restrict to values between. Will only use table entries that are within the specified limits.
    • Start new list. Always start with the first entry of the table.
    • Continue from previous list. If you run the De-Identification and it ends with the 13th entry, next time, it will start with the 14th one.
Example #1 settings:
  • Random values
  • Table: 0001 – Administrative Sex
  • Restrict to values between: 1 and 1 characters
  • Include random blanks: unchecked
Generated values:
N
M
A
F
A
Example #2 settings:
  • Sequential list
  • Table: 0001 – Administrative Sex
  • Start new list
Generated values:
A
F
M
N
O
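The random table lookup with the "Restrict to values between" length limit can be sketched in Python. The table entries below are only the ones shown in the examples; the real list comes from the selected profile:

```python
import random

# Entries shown in the examples for HL7 table 0001 (Administrative Sex);
# in Workgroup the actual entries come from the selected profile.
TABLE_0001 = ["A", "F", "M", "N", "O"]

def table_value(entries, min_len=None, max_len=None):
    """Sketch of the random HL7-table generator: keep only entries whose
    length falls inside the "Restrict to values between" limits, then pick
    one at random."""
    candidates = [e for e in entries
                  if (min_len is None or len(e) >= min_len)
                  and (max_len is None or len(e) <= max_len)]
    return random.choice(candidates)

value = table_value(TABLE_0001, min_len=1, max_len=1)
```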

SQL Query

This generator pulls data from an SQL-accessible database.

How to configure this generator to use SQL query results as de-identified values:

  • Select a database connection. If no database connections are configured, click Connections… to set up a connection.
  • Enter the SQL query. You can use the embedded Query Builder to help you build the query.
  • Restrict to values between. Will only use values that are within the specified limits.
  • Include random blanks. Allowing random blanks will mean that you generate empty strings among the values for use in the field or data type.
Example #1 settings:
  • Connection: Connection1
  • Query: SELECT name FROM employees
  • Restrict to values between: 1 and 20 characters
  • Include random blanks: unchecked
Generated values:
John Smith
Jane Doe
Road Runner
The Coyote
Tweety Bird
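The idea behind this generator can be sketched in Python. Workgroup's database connections are configured in its UI; here an in-memory SQLite database and a hypothetical employees table stand in for "Connection1":

```python
import random
import sqlite3

def sql_values(connection, query, min_len=1, max_len=20):
    """Sketch of the SQL Query generator: run the query and keep values whose
    length falls within the "Restrict to values between" limits."""
    rows = connection.execute(query).fetchall()
    return [r[0] for r in rows if min_len <= len(r[0]) <= max_len]

# Hypothetical data standing in for the configured connection
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employees (name TEXT)")
conn.executemany("INSERT INTO employees VALUES (?)",
                 [("John Smith",), ("Jane Doe",), ("Road Runner",)])

pool = sql_values(conn, "SELECT name FROM employees")
value = random.choice(pool)   # one de-identified replacement value
```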

Text File

This generator pulls data from a text file (*.txt, *.csv, etc).

How to configure this generator to use text file content:

  • Random values
    • Randomly generate values from a text file.
    • File. Specify the source of the text file. Use the Browse… button to select a file.
    • Column. Specify the column id to use (in case of a character delimited file, ex: *.csv)
    • Column delimiter. The character that separates each column in the text file.
    • First/Last rows. Specify the rows to get data.
    • Between character position. Will only use characters that are within the specified positions.
    • Restrict to values between. Will only use values that are within the specified limits.
    • Include random blanks. Allowing random blanks will mean that you generate empty strings among the values for use in the field or data type.
  • Sequential list
    • Generate a sequence of value from a text file starting with the first row.
    • File. Specify the source of the text file. Use the Browse… button to select a file.
    • Column. Specify the column id to use (in case of a character delimited file, ex: *.csv)
    • Column delimiter. The character that separates each column in the text file.
    • First/Last rows. Specify the rows to get data.
    • Between character position. Will only use characters that are within the specified positions.
    • Restrict to values between. Will only use values that are within the specified limits.
    • Start new list. Always start with the first row in the text file.
    • Continue from previous list. If you run the De-Identification and it ends with the 13th entry, next time, it will start with the 14th one.

Note: If more than one field is configured using the same text file, the same line will be used within the same message. In other words, you can use a text file to ensure several values will be used together. This can be useful when linking a city with a zip code or a first name with a gender.

The examples below use the following content from a file named C:\MyDocuments\myFile.txt

1,Road Runner,M,ACME,Anycity,12345
2,The Coyote,M,ACME,Anycity,12345
3,Sylvester The Cat,M,ACME,Anycity,12345
4,Tweety Bird,M,ACME,Anycity,12345
5,John Smith,M,,Anothercity,98765
6,Jane Doe,F,,Anothercity,98765
Example #1 settings:
  • Random values
  • File: C:\MyDocuments\myFile.txt
  • Column: 2
  • Delimiter: ,
  • Restrict to values between: 1 and 20 characters
  • Include random blanks: unchecked
Generated values:
John Smith
Jane Doe
Road Runner
The Coyote
Tweety Bird
Example #2 settings:
  • Sequential list
  • File: C:\MyDocuments\myFile.txt
  • Column: 3
  • Delimiter: ,
  • Restrict to values between: 1 and 20 characters
  • Start new list
Generated values:
M
M
M
M
M
F
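The note above, that every field configured against the same file reads from one row per message, can be sketched in Python. The sample rows and the 1-based column numbering mirror the generator settings; names here are illustrative:

```python
import csv
import io
import random

# Three rows from the sample file above, inlined for a self-contained sketch
SAMPLE = """\
1,Road Runner,M,ACME,Anycity,12345
2,The Coyote,M,ACME,Anycity,12345
3,Sylvester The Cat,M,ACME,Anycity,12345
"""

def pick_row(rows):
    """One row is drawn per message, so every field configured against the
    same file reads from that single row and the values stay paired."""
    return random.choice(rows)

rows = list(csv.reader(io.StringIO(SAMPLE)))
row = pick_row(rows)
name, sex = row[2 - 1], row[3 - 1]   # Column 2 and Column 3 stay paired
```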

Excel file

This generator pulls data from an Excel 2007+ file (*.xlsx).

How to configure the generator to use Excel file content:

  • Random values
    • Randomly generate values from an Excel file.
    • File. Specify the source of the Excel file. Use the Browse… button to select a file.
    • Worksheet. Specify the Worksheet to use.
    • Column. Specify the column to use.
    • First/Last rows. Specify the rows to get data.
    • Restrict to values between. Will only use values that are within the specified limits.
    • Include random blanks. Allowing random blanks will mean that you generate empty strings among the values for use in the field or data type.
  • Sequential list
    • Generate a sequence of value from an Excel file starting with the first row.
    • File. Specify the source of the Excel file. Use the Browse… button to select a file.
    • Worksheet. Specify the Worksheet to use.
    • Column. Specify the column to use.
    • First/Last rows. Specify the rows to get data.
    • Restrict to values between. Will only use values that are within the specified limits.
    • Start new list. Always start with the first row in the Excel file.
    • Continue from previous list. If you run the De-Identification and it ends with the 13th entry, next time, it will start with the 14th one.

Note: If more than one field is configured using the same worksheet, the same row will be applied across a message. In other words, you can use an Excel file to ensure that several values will be used together. This can be useful when linking a city with a zip code or a first name with a gender.

The examples below use the following content from a file named C:\MyDocuments\myExcelFile.xlsx

1 | Road Runner       | M | ACME | Anycity     | 12345
2 | The Coyote        | M | ACME | Anycity     | 12345
3 | Sylvester The Cat | M | ACME | Anycity     | 12345
4 | Tweety Bird       | M | ACME | Anycity     | 12345
5 | Jane Doe          | F |      | Anothercity | 98765
6 | John Smith        | M |      | Anothercity | 98765
Example #1 settings:
  • Random values
  • File: C:\MyDocuments\myExcelFile.xlsx
  • Worksheet: TheFirstSheet
  • Column: 2
  • Restrict to values between: 1 and 20 characters
  • Include random blanks: unchecked
Generated values:
John Smith
Jane Doe
Road Runner
The Coyote
Tweety Bird
Example #2 settings:
  • Sequential list
  • File: C:\MyDocuments\myExcelFile.xlsx
  • Worksheet: TheFirstSheet
  • Column: 3
  • Restrict to values between: 1 and 20 characters
  • Start new list
Generated values:
M
M
M
M
F
M

Use original value

This generator is to be used when you don’t want a data element to be changed. Here
are a few use case examples.

Use Case #1

Use Case #2

  • De-identify all fields with XPN data type except for the attending doctor
MSH|^~&|SYSTEM-A|1 |||20100404210829||ADT^A01|20100404000000645509|P|2.3|||||CA|ASCII
EVN|A01|201004042108||129|Interface^HL7 Interface|201004042106
PID|0001|ID53572812^^^|0126271^^^^^1||SMITH^JOHN||195307280000|M|SMITH^JOHN||1 FIFTH
AVENUE^NEW YORK^NEW YORK^^33333^USA^P^53052^16||(555)555-5555|(555)555-5555|^|2|||238898464|||||||||C1||N
PV1|0001|I|2C^2322^2322-0^1^^^^^3|1|50386||1083278^MCFEE,MIKE^^^^^||||||||||1083278^MCFEE,
MIKE|1|50386|1||||||||||||||||||||||||201004042106||||||||
PV2||||||||||||||||||||||N

If the data type Extended Person Name (XPN) is part of the list of data
types to de-identify, you might need to preserve some of the fields using this data
type.

Data Type | Component      | Generator
XPN       | 2 – Given Name | Excel File
FN        | 1 – Surname    | Excel File

Segment | Field                | Component | Subcomponent | ID | Generator
PV1     | 7 – Attending Doctor |           |              |    | Use Original Value

Using this configuration, you would make sure all names are de-identified except
the attending doctor’s name.

MSH|^~&|SYSTEM-A|1 |||20100404210829||ADT^A01|20100404000000645509|P|2.3|||||CA|ASCII
EVN|A01|201004042108||129|Interface^Johnson|201004042106
PID|0001|ID53572812^^^|0126271^^^^^1||Johnson^Deborah||195307280000|M|Johnson^Deborah||1 FIFTH
AVENUE^NEW YORK^NEW YORK^^33333^USA^P^53052^16||(555)555-5555|(555)555-5555|^|2|||238898464|||||||||C1||N
PV1|0001|I|2C^2322^2322-0^1^^^^^3|1|50386||1083278^MCFEE,
MIKE^^^^^||||||||||1083278^Johnson|1|50386|1||||||||||||||||||||||||201004042106||||||||
PV2||||||||||||||||||||||N

Use Case #3

  • Prevent de-identifying a field that is defined as an ID

    Field IDs must have a generator associated with them, but if for some reason you prefer keeping the original value, you can set this generator to avoid any changes to that value.

Use Case #4

  • Re-use the original data and combine it with other generators

    In Advanced Mode, you can de-identify the original value by specifying several generators, but you could also include the original value to combine it with other generated values.

Copy Another Field

This generator replicates the value from another de-identified field.

How to use the “Copy Another Field” generator:

  • Add a new de-identification rule by right-clicking the field to de-identify.
  • Select the Copy Another Field generator.
  • Set the Segment, Field, Component and Sub-Component of the source field.
  • The source field can be any other field present in the message.

Example 1: copy the replacement MRN value from PID.2 to ZCA.3

Generator - Copy Another Field

Unstructured Data

Sensitive data can be found in unstructured data (free text) such as clinician notes or other narrative text. Most of the data within an unstructured field is not sensitive, but there are times when it might contain data elements you want to protect.

This generator will replace any piece of information found in another message field that is set for de-identification.

Example #1

In the following message, the name of the patient is mentioned in the patient update note (NTE.3).

MSH|^~&|SYSTEM-A|1|||20100404210829||ADT^A08|20100404000000645509|P|2.3|||||CA|ASCII
PID|0001|ID53572812^^^|0126271^^^^^1||SMITH^JOHN||195307280000|M|SMITH^JOHN||1 FIFTH AVENUE^NEW YORK^NEW YORK^^33333^USA^P^53052^16||(555)555-5555|(555)555-5555|^|2|||238898464|||||||||C1||N
NTE|||Mr. Smith provided new phone numbers

If the patient name (PID.5.1 field) is listed among the de-identification rules, you can configure a new field to detect the patient name within NTE.3.

Segment | Field            | Component       | Subcomponent | ID | Generator
PID     | 5 – Patient Name | 1 – Family Name |              |    | Excel File
NTE     | 3 – Comment      |                 |              |    | Unstructured Data

Using these settings, the de-identified message will look like this:

MSH|^~&|SYSTEM-A|1|||20100404210829||ADT^A08|20100404000000645509|P|2.3|||||CA|ASCII
PID|0001|ID53572812^^^|0126271^^^^^1||Doe^JOHN||195307280000|M|SMITH^JOHN||1 FIFTH AVENUE^NEW YORK^NEW YORK^^33333^USA^P^53052^16||(555)555-5555|(555)555-5555|^|2|||238898464|||||||||C1||N
NTE|||Mr Doe provided new phone numbers

Example #2

MSH|^~&|SYSTEM-A|1|||20100404210829||ADT^A08|20100404000000645509|P|2.3|||||CA|ASCII
PID|0001|ID53572812^^^|0126271^^^^^1||SMITH^JOHN||195307280000|M|SMITH^JOHN||1 FIFTH AVENUE^NEW YORK^NEW YORK^^33333^USA^P^53052^16||(555)555-5555|(555)555-5555|^|2|||238898464|||||||||C1||N
NTE|||Mr Smith ( ID53572812 ) provided new phone numbers
NTE|||Mr Smith also provided a new address

If the patient ID (PID.2 field) is listed among the de-identification rules, you can configure a new field to detect the patient ID within NTE.3.

Segment | Field            | Component       | Subcomponent | ID | Generator
PID     | 2 – Patient ID   |                 |              |    | Numeric
PID     | 5 – Patient Name | 1 – Family Name |              |    | Excel File
NTE     | 3 – Comment      |                 |              |    | Unstructured Data

Using these settings, the de-identified message will look like this:

MSH|^~&|SYSTEM-A|1|||20100404210829||ADT^A08|20100404000000645509|P|2.3|||||CA|ASCII
PID|0001|123459876^^^|0126271^^^^^1||Doe^JOHN||195307280000|M|SMITH^JOHN||1 FIFTH AVENUE^NEW YORK^NEW YORK^^33333^USA^P^53052^16||(555)555-5555|(555)555-5555|^|2|||238898464|||||||||C1||N
NTE|||Mr Doe (123459876) provided new phone numbers
NTE|||Mr Doe also provided a new address
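The substitution behavior shown in these examples can be sketched in Python: every original value that another rule replaced is also replaced wherever it appears in the free text. The function name and the mapping are illustrative:

```python
def scrub_free_text(text, replacements):
    """Sketch of the Unstructured Data generator: replace each original
    field value found in the free text with its de-identified value.
    replacements maps original values to their replacements."""
    for original, substitute in replacements.items():
        text = text.replace(original, substitute)
    return text

# Values replaced by other rules (PID.5.1 and PID.2 in Example #2)
mapping = {"Smith": "Doe", "ID53572812": "123459876"}
note = "Mr Smith ( ID53572812 ) provided new phone numbers"
print(scrub_free_text(note, mapping))
# Mr Doe ( 123459876 ) provided new phone numbers
```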

De-identification Process Report

PDF Report

At the end of the de-identification process, Workgroup offers the option of generating a De-identification Process Report that summarizes the de-identification process. This report can be viewed and shared. The PDF opens automatically upon completion. For later review, navigate to the folder where the PDF was stored and click the file to open it.

The De-identification Process Report has two parts:

  • De-identification Process Summary: Protected Health Information
  • De-identification Configuration Summary

The De-identification Process Summary: Protected Health Information

This section of the report lists the following:

  • time of de-identification
  • computer name and IP address that generated the report
  • HL7 reference profile used
  • total number of messages processed for de-identification.

Files sub-section:

  • file structure in which the de-identified file was saved
  • location, names, and message counts for the original file and the de-identified file produced by Cloak.

The De-identification Configuration Summary

This section identifies the de-identification file name and location and presents three summary tables of the de-identification process:

  • Primary Key/ID: Lists the HL7 field used to persist identification throughout message streams.
    • Information reported includes: Segment, Field, Component and Sub component.
  • Data Types: Identifies data types selected for de-identification.
    • Information reported includes: Data Type, Component, Generator Type and Settings.
  • Fields: Identifies the fields selected for de-identification.
    • Message information reported includes Segment, Field, Component and Sub-component, Generator Type, and Settings.

Setting Options

Options

De-Identification has a number of options that can be set. From the main menu bar, click Tools, then Options. The Options dialog box that opens contains the following categories: Reference Profile, Windows Service Settings, Delimiters, and Settings.

De_identify_Options

Reference Profile

These settings allow the use of HL7 reference profiles to parse logs. Open the Reference Profile tab.

  • Click the checkbox for Use Reference Profile.
  • Under the Profiles tab, click and highlight the HL7 reference file to be used from the list and click OK. This will change the reference file used in Cloak.
  • To load an alternate profile library, click Browse to navigate to the location of the file. Choose the location and file name and click Open. The file path will be referenced when returning to the Reference Profile tab.
  • Click OK to save the settings.

Delimiters

These settings allow the addition of specific delimiters to the log file to assist with manageability and readability. They include:

  • Use message beginning delimiter:
    • Open the Preferences tab.
    • Click the checkbox to select this category.
    • Type the delimiter to be used in the text box.
    • Click the checkbox to choose the location for the delimiter.
      • Beginning of file or Use custom regex.
  • Use message ending delimiter.
    • Click the checkbox to select this category.
    • Enter the delimiter to be used in the text box.
    • Click the checkbox to choose the location for the delimiter.
      • End of file or Use custom regex.
  • Use segment ending delimiter.
    • Click the checkbox to select this category.
    • Type the delimiter to be used in the text box.
    • Click the checkbox to choose the location for the delimiter.
      • End of line or Use custom regex.

Click OK to save the delimiters.
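To illustrate how the delimiter settings drive log parsing, here is a minimal Python sketch. The default message delimiter (a regex matching lines that begin with MSH) and the segment-ending character are illustrative defaults, not Workgroup's internals:

```python
import re

def split_log(raw, message_delimiter=r"^MSH", segment_ending="\n"):
    """Sketch of delimiter-driven log parsing: a regex marks where each
    message begins, and a segment-ending character splits each message
    into segments. Both defaults are illustrative."""
    starts = [m.start()
              for m in re.finditer(message_delimiter, raw, re.MULTILINE)]
    messages = [raw[a:b] for a, b in zip(starts, starts[1:] + [len(raw)])]
    return [[s for s in msg.split(segment_ending) if s.strip()]
            for msg in messages]

log = "MSH|^~\\&|A\nPID|1\nMSH|^~\\&|B\nPID|2\n"
msgs = split_log(log)   # two messages, two segments each
```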

Settings

  • Generate value on empty field
    • This will populate every field assigned by a rule even if the original value is empty or missing.
  • Include leading zeros in numeric identifiers
    • This allows Cloak to ignore leading zeros in patient identifiers and consider them as numeric values.
  • Re-apply rules and replacement data across multiple files
    • When unchecked, the replacement patient identities and their mappings to actual patients are destroyed as soon as the de-identification process ends. This maximizes security, as without this information, data cannot be re-identified in any way.
    • When checked, replacement patient identities and mappings are saved (in the file configured). This file will be reused the next time you de-identify messages, so patient data will be replaced by the same replacement patient identity. In other words, if Joe Smith was replaced by John Doe the first time, checking this option would mean Joe Smith would be replaced again by John Doe, and so on until you uncheck this option.

Click OK to save the settings.

Create Messages

Message Maker in Caristix Workgroup Software

Use the Message Maker tool to create test messages to place into a scenario or to copy to another application. The messages you generate will be based on a specific profile (an HL7 version based on the reference standard, or a profile created earlier).

  1. In the main menu, click HL7 MESSAGING > Create… . The Message Maker dialog box appears.
  2. In the Conformance Profile dropdown list, select a profile to base the message on.
  3. Expand the tree view on the message type you need.

    Workgroup_MessageMaker

  4. Double-click an event, and the Messages tab automatically populates with a message based on data contained in the Caristix data dictionary.
  5. Navigate the tree view to add as many messages as needed.
  6. To save messages to a .txt file, click File > Save messages...
  7. To close the Message Maker tool, click the OK button at the bottom of the screen.

Edit and Validate

Overview

The Message Editor tool lets you edit content and compare HL7 messages against a profile in order to flag conformance gaps. This is useful when you need to troubleshoot data flow in a live interface that has been documented in Caristix Workgroup.

Message Editor in Workgroup works as follows:

HL7 Message Editor - Overview

Load HL7 messages

  • On the menu bar, go to FILE, Open…
  • In the open dialog, select HL7 logs with which you want to work.
    • Click the “Browse My Computer” link to select HL7 logs from your computer’s file system.

The selected HL7 messages will be loaded in the Messages tab.

Select a Profile for validation

Using a profile in the message editor will enable the message validation feature. The message validation will compare the HL7 messages against the profile in order to flag conformance gaps. Such gaps could come from:

  • Invalid message structure
  • Invalid data format

De-identify messages

Click to de-identify current messages. After the de-identification process is complete, the de-identified messages will replace your current loaded messages. Take a look at the De-identification Concepts to understand this process.

Explore message definition

When you are analyzing a message log, you sometimes need to quickly capture an overview of a message or segment.

From there you can show/hide:

  • Message Content.
  • Missing Elements: Segment/field/component/sub-component that are defined in the profile but don’t exist in the message’s content.
  • Required Elements: Field/component/sub-component that are marked as “R – Required” or “RE – Required but may be empty” in the profile.
  • Non-Required Elements: Field/component/sub-component that are not marked as “R – Required” or “RE – Required but may be empty” in the profile.

Contextual actions on selected elements

If you right-click an element in the Messages Structure/Messages or Validation tab, a contextual menu will open. It contains the available actions for the selected element.

  • Create New Message: You can insert a new message before or after the selected message.
  • Send Message: You can send selected messages to a network connection and view the ACK content.
  • View Specification: Open the segment/field/component/sub-component in the Profile Editor.
  • View Values: Open the Table Library.
  • Message Structure: Open a dialog containing the message definition.
  • Find: Find a value in the message.
  • Replace: Replace a value in the message.
  • Save Messages: Save messages in .HL7, .XML, or .CSV format

Filter & Sort messages

Please refer to the Search and Filter Messages documentation to work with Data Filters and Sort Queries.

Highlight conformance gaps

The Message Editor tool lets you compare an HL7 message against a profile in order to flag conformance gaps. This is useful when you need to troubleshoot data flow in a live interface that has been documented in Caristix Workgroup. Validation tab displays conformance gaps flagged by the application.

Search and Filter Messages

Introducing Search and Filter Functionality in Caristix Workgroup Software

Caristix Workgroup helps interface analysts, engineers, and technical support team members to quickly find HL7 data needed for interfacing tasks and customer service. It provides the following features and functionality:

Managing Search and Filter Rules

Search and Filter Rules File DocumentPackageType_SearchAndFilterRules

You can save your searches and filters as a file. A Search and Filter Rules file is used to persist Data Filters, Sort Queries and Data Distribution entries for reuse.

Open a Search and Filter Rules File

  1. Click File > Open > Search and Filter Rules… . The application opens a dialog box to select a Search and Filter Rules file (*.cxf).
  2. Select a Search and Filter Rules file and click Open.

* You can also open a Search and Filter Rules file by right-clicking anywhere in the Data Filters, Sorts or Data Distributions section and clicking the “Open Search and Filter Rules…” menu item.

Save a Search and Filter Rules File

  1. Click File > Save > Search and Filter Rules… .
  2. Enter a file name.
  3. Click Save.
  4. Your file is saved as a .cxf file.

* You can also save a Search and Filter Rules file by right-clicking anywhere in the Data Filters, Sorts or Data Distributions section and clicking the “Save Search and Filter Rules…” menu item.

Recent Search and Filter Rules File

If you’ve already opened a Search and Filter Rules file, it will be added to the recent files in order to be quickly accessible. To open a recently opened file…

  1. Click File > Recent Search and Filter Rules > [File name]

Working with Logs, Trigger Events and Segments

Logs

  • Click the Logs tab. Then click checkboxes to select or unselect specific log files.

Pinpoint_Filter_Logs

Check “Use Large File mode”  when loading files above 10MB in size. (This option will deactivate the Sort, Replace and Edit Message features.)

Trigger Events

  • Click the Trigger Events tab. Then click checkboxes to select or unselect specific trigger events.

Pinpoint_Filter_TriggerEvents

  • Selected messages appear automatically in the Messages area.

Segments

  • Click the Segments tab. Then click checkboxes to select or unselect specific segments.

Pinpoint_Filter_Segments

  • Selected segments appear automatically in the Messages area.

Working with Data Filters

Data Filters

Data filters let you set up queries to find messages containing specific data such as patient IDs, names, and order type codes. Queries can be filtered on specific message elements: segments, fields, components, and sub-components.

  • Messages are composed of segments.
  • Segments contain fields.
  • As an option, fields can contain components.
  • As an option, components can contain sub-components.
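The hierarchy above can be sketched as a small Python parser using the standard HL7 delimiters (segments by line, fields by |, components by ^, sub-components by &). This is a simplification; a real parser also handles the MSH encoding characters and repetition separators:

```python
def parse_message(message):
    """Sketch of the hierarchy: message -> segments -> fields ->
    components -> sub-components, split on the standard HL7 delimiters."""
    return [
        [[comp.split("&") for comp in field.split("^")]
         for field in segment.split("|")]
        for segment in message.strip().splitlines()
    ]

msg = "PID|0001|ID53572812^^^|0126271^^^^^1||SMITH^JOHN"
parsed = parse_message(msg)
family_name = parsed[0][5][0][0]   # PID.5.1
```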

Pinpoint_Filter_Data

Select Message Data to Build Filters

This is the recommended method for building data filters. Once you’ve built a query, you can then modify the Filter Operators to change your filter criteria.

  • In the Messages area, look for the field containing the data you want to filter. It could be a patient name, a date, a location, or another string.
  • Right-click within the field. A menu appears.
  • Click Add Data Filter.
  • The filter is automatically created within the Data Filters tab, and the data you filtered is highlighted within the Messages area. The default filter operator (“is” “=”) is applied.

    Workgroup_Filter_Data2

Build Data Filters Directly

This is an alternate method for building data filters and is helpful when applying complex filter operators.

  • In the Data Filters area, click the Add button.
  • Click on each of the Segment, Field, Component, and/or the sub-component dropdown lists to configure the message structure you’re filtering for. (Click image to enlarge.)
    Pinpoint_Filter_Data3
  • Apply operators. Select “IS” if you want your results to INCLUDE the data you’ll be filtering. Alternatively, select “IS NOT” if you want your results to EXCLUDE the data you’ll be filtering.
  • Continue applying operators. Select =, like, present, or empty, depending on the query type. See Data Filter Operators for more detail.
  • In the Criteria field, type the data you want to filter. Press the Enter key. Results automatically appear in the Messages area.

Add Filters from the Message Definition Tree

You can also add filters from the Message Definition tree:

  • Right-click any messages in the Messages area, then click “Message Definition…“.

SearchAndFilter_ShowMessageDefinition_v3

  • Navigate to the field you want to add.
  • Right-click the field and select “Add Data Filter“. 

SearchAndFilter_MessageDefinition_AddFilter_v3

View & Edit Metadata

From the messages area, you can also view and edit the segment/field definition and legal values (if the field is linked to a table). 

Case Sensitivity

Data filter queries can be made case-sensitive. This is helpful when you need to identify data that might have been entered in all caps (JOHN SMITH) instead of title case (John Smith).

  • Click Tools > Options, and go to the Settings tab.
  • Check or uncheck the Make data filters case-sensitive checkbox as needed.

Global Find

You can create filters that query the entire log, instead of a single segment or field. Simply omit the segment and field from the filter. The results in the Messages area cover all occurrences of the value you specified in the filter.

Pinpoint_Filter_Data4

 Basic/Advanced Mode

While editing your filters, you can switch between Basic and Advanced Mode. Advanced Mode shows you advanced settings for your filters. These settings help you to construct more complex filters using AND/OR operators and parentheses for nesting. Otherwise, each filter will be applied one after the other.

If your filters contain advanced settings and you switch back to Basic Mode, those settings will be lost.

Advanced Mode

Pinpoint_Filter_Data_AdvancedMode

In this example, we want to create filters to get messages where (MSH.3 = MyApplication) and (PID.2.1 = 54738474) or (PID.18 = P5847373).

These filters will include the following messages:

Pinpoint_Filter_Data_IncludedMessages

They will exclude these messages:

Pinpoint_Filter_Data_ExcludedMessages
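The advanced-mode example above can be mirrored in code. The sketch below is illustrative, not Workgroup's internal implementation: it parses raw HL7 text with naive splitting (real parsing handles repetitions, escape sequences, and message structure more carefully) and applies the same AND/OR nesting.

```python
def get_field(message, path):
    """Return the value at a 'SEG.field[.component]' path, e.g. 'PID.2.1'."""
    parts = path.split(".")
    for segment in message.strip().split("\r"):
        fields = segment.split("|")
        if fields[0] != parts[0]:
            continue
        idx = int(parts[1])
        if parts[0] == "MSH":
            idx -= 1  # MSH-1 is the field separator character itself
        value = fields[idx] if idx < len(fields) else ""
        if len(parts) == 3:  # a component was requested
            comps = value.split("^")
            c = int(parts[2]) - 1
            value = comps[c] if c < len(comps) else ""
        return value
    return ""

def matches(message):
    # (MSH.3 = MyApplication AND PID.2.1 = 54738474) OR (PID.18 = P5847373)
    return ((get_field(message, "MSH.3") == "MyApplication"
             and get_field(message, "PID.2.1") == "54738474")
            or get_field(message, "PID.18") == "P5847373")

msg = "MSH|^~\\&|MyApplication|Facility\rPID|1|54738474^^^MRN||Smith^John"
print(matches(msg))  # True
```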

Data Filter Operators

Data Filters in Workgroup

Data filters let you select a subset of messages from the logs you load in Workgroup. The operators let you build filter queries, ranging from simple to complex. The most basic operator set consists of the use of “is” and “=”.

Pinpoint_Filter_Operator

These are the default operators in the Add Data Filter command, available on the right-click dropdown menu in the Messages area.

The other data filter operators let you build sophisticated filters for analyzing the HL7 data in your log. (Learn how data filters work in the section on Working with Data Filters.)

Operators List

  • is: Includes messages that contain this data.
  • is not: Excludes messages that contain this data.
  • =: Covers messages with an exact match to this data (this is like putting quotation marks around a search engine query).
  • <: Less than. Covers filtering on numeric values.
  • <=: Less than or equal to. Covers filtering on numeric values.
  • >: Greater than. Covers filtering on numeric values.
  • >=: Greater than or equal to. Covers filtering on numeric values.
  • like: Covers messages that include this data (a partial match).
  • present: Looks for the presence of a particular message building block (such as a segment, field, component, or sub-component).
  • empty: Looks for an unpopulated message building block (such as a segment, field, component, or sub-component).
  • in: Builds a filter on multiple data values in a message element rather than just one value.
  • in table: Looks for whether the data is in a specific table of the referenced Profile.
  • matching regex: Use .NET regular expression syntax to build filters. For advanced users with programming backgrounds. See Microsoft’s .NET regular expression documentation to learn more.
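As an illustration only (not Workgroup's implementation), the operator semantics in the table can be sketched as predicates over a field's string value. The "in table" operator is omitted because it requires a reference profile.

```python
import re

# Hypothetical sketch of the data filter operator semantics.
OPERATORS = {
    "=":       lambda value, crit: value == crit,           # exact match
    "like":    lambda value, crit: crit in value,           # partial match
    "<":       lambda value, crit: float(value) < float(crit),
    "<=":      lambda value, crit: float(value) <= float(crit),
    ">":       lambda value, crit: float(value) > float(crit),
    ">=":      lambda value, crit: float(value) >= float(crit),
    "present": lambda value, crit: value != "",             # block is populated
    "empty":   lambda value, crit: value == "",             # block is unpopulated
    "in":      lambda value, crit: value in crit.split(","),  # multiple values
    "matching regex": lambda value, crit: re.search(crit, value) is not None,
}

print(OPERATORS["like"]("MyApplication", "App"))           # True
print(OPERATORS["in"]("ADT", "ADT,ORM,ORU"))               # True
print(OPERATORS["matching regex"]("P5847373", r"^P\d+$"))  # True
```

The "is" / "is not" choice then simply includes or excludes the messages for which the chosen predicate returns True.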

Working with Sort Queries

Sorting HL7 Messages

Sort queries sort a log on a message element (segment, field, component, or subcomponent).

Sorting data is useful when you want to group messages by criteria such as patient name, date, or location.

Pinpoint_Order1

This sort on MSH 6 reorders messages by the name of the receiving facility, in this case, a patient care location.

Select Message Data to Build Sort Queries

This is the recommended method for building sorts. Once you’ve built a sort query this way, you can modify it from the Sorts tab to change your sort criteria.

  1. In the Messages area, look for the field containing the data on which you are sorting. It could be a patient name, a date, a location, or another string.
  2. Right-click within the field. A menu appears.
  3. Click Add Sort.
    Pinpoint_Order2
  4. The sort query is automatically created within the Sort tab, and the reordered data is displayed within the Messages area.

Build Sort Queries Directly

This is an alternate method for building sort queries.

  1. In the Sorts tab, click the + Add button.
  2. Click on each of the Segment, Field, Component, and/or Subcomponent dropdown lists to select the element on which you are sorting. (Click image below to enlarge.)
    Pinpoint_Order1
  3. Apply the Order operator, selecting “ascending” or “descending” as needed.
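As an illustrative sketch (not Workgroup internals), an ascending sort on MSH-6 amounts to reordering messages on that field's value. Field access below is simplified for the example.

```python
# Reorder raw HL7 messages on MSH-6 (receiving facility), ascending.

def msh_field(message, n):
    """Return MSH-n. The field sits at split index n - 1 because
    MSH-1 is the field separator character itself."""
    fields = message.split("\r")[0].split("|")
    return fields[n - 1] if n - 1 < len(fields) else ""

messages = [
    "MSH|^~\\&|App|Fac|Dest|WEST-WING|20240101||ADT^A01|1|P|2.5",
    "MSH|^~\\&|App|Fac|Dest|EAST-WING|20240101||ADT^A01|2|P|2.5",
]
messages.sort(key=lambda m: msh_field(m, 6))  # ascending on MSH-6
print([msh_field(m, 6) for m in messages])    # ['EAST-WING', 'WEST-WING']
```

A descending sort would pass reverse=True to the same call.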

Add Sort Queries from the Message Definition Tree

You can also add sort queries from the Message Definition tree. To do so:

  • Right-click any message in the Messages area, then click “Message Definition…”.

SearchAndFilter_ShowMessageDefinition_v3

  • Navigate to the field you want to add.
  • Right-click the field and select “Add Sort“. 

SearchAndFilter_MessageDefinition_AddFilter_v3

Data Distribution

Understanding Data and Content in HL7 Messages

The Data Distribution feature displays the data values in a field. For instance, it helps you quickly figure out what codes are used in a specific field or how often a specific code is used.
Data Distribution can also help you analyze how one field can impact other fields in terms of data and content. With Data Distribution, for example, it’s possible to get the list of lab result codes for each lab request code within a set of sample messages.

All charts and tables can be copied and pasted to Word and Excel.

Task #1: Get Data Distribution of a Field

  • In the Messages section, right-click on the field you want to analyze further (in this example, we chose AL1.2 Allergen Type Code). A menu appears.
  • Select Add Data Distribution.
  • The selected field is added to the Data Distributions tab.
  • A pie chart appears in the Data Distribution Results tab.

The pie chart displays the values that populate the field, as well as how often those values occur in the field.
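Conceptually, this is a frequency count of the field's values across the message set. A minimal sketch of the same computation (the log and field access are simplified examples; AL1-2 is the allergen type code):

```python
from collections import Counter

def al1_2(message):
    """Return the AL1-2 (allergen type code) value, if present."""
    for segment in message.split("\r"):
        fields = segment.split("|")
        if fields[0] == "AL1" and len(fields) > 2:
            return fields[2]
    return None

log = [
    "MSH|^~\\&|App\rAL1|1|DA|^PENICILLIN",
    "MSH|^~\\&|App\rAL1|1|FA|^PEANUTS",
    "MSH|^~\\&|App\rAL1|1|DA|^ASPIRIN",
]
distribution = Counter(al1_2(m) for m in log)
print(distribution.most_common())  # [('DA', 2), ('FA', 1)]
```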

Task #2: Create a report based on Data Distribution Fields

  • In the Messages section, right-click on the field you want to add (in this example, we chose MSH.4 Sending Facility). A menu appears.
  • Select Add Data Distribution.
  • The selected field is added to the Data Distributions tab.
  • Drag the MSH.4 – Sending Facility to the group zone.

The report displays which Allergen Type Code is sent, grouped by Sending Facility.

  • You can save this report by right-clicking the Data Distribution Result grid and selecting Save Report.

Task #3: Analyze the Impact of One Field on Others

  • In the Data Distribution Results tab, click Data Distribution View, then Show All (by default, only the top 10 most frequently used values are listed).

Add Data Distribution Fields from the Message Definition Tree

You can also add data distribution fields from the Message Definition tree:

  • Right-click any message in the Messages area, then click “Message Definition…”.

SearchAndFilter_ShowMessageDefinition_v3

  • Navigate to the field you want to add.
  • Right-click the field and select “Add Data Distribution“.

SearchAndFilter_MessageDefinition_AddFilter_v3

Add a Data Filter from the Data Distribution Table View

From the Data Distribution table view, you can add a Data Filter in order to find messages containing specific data:

  1. Add a Data Distribution field.
  2. Go to the Data Distribution tab and switch to the “Table View“.
  3. Right-click any value you want to search for, and click Add Data Filter.
  4. A new Data Filter will be added.

Message Prefixes

Handling Non-Standard Delimiters

Some interfacing technologies output non-standard message logs. In a raw state, they may be impossible to parse against an HL7-compliant standard. By adding a message prefix representing the extraneous data, you can load these logs in Workgroup.

To add a message prefix:

  1. In the Main Menu, go to Tools → Options. Open the Delimiters tab.
  2. Select the Use message beginning delimiter checkbox.
  3. Enter the string that precedes the HL7 message (beginning of message delimiter).
  4. If this string is expected at the beginning of a new line, check the Beginning of line checkbox.
  5. Check the Use custom regex checkbox to have Workgroup treat this string as a regular expression instead of a static string.

You can also use message and segment ending delimiters.

To learn more about regular expressions, see Microsoft’s .NET regular expression documentation.

Workgroup_Options_Delimiters
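A sketch of the beginning-of-message delimiter idea: the log below uses a hypothetical timestamp prefix as the extraneous data, and treating it as a custom regex splits the raw text back into parseable HL7 messages.

```python
import re

# Raw log where an engine wrote a timestamp before each HL7 message.
log = (
    "2024-01-01 10:00:00 MSH|^~\\&|AppA|Fac\rPID|1|123\n"
    "2024-01-01 10:00:05 MSH|^~\\&|AppB|Fac\rPID|1|456\n"
)
# Custom-regex, beginning-of-line message delimiter
prefix = r"^\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2} "
messages = [m.strip() for m in re.split(prefix, log, flags=re.MULTILINE) if m.strip()]
print(len(messages))                    # 2
print(messages[0].startswith("MSH|"))   # True
```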

Reference Profile

Using Profiles with Workgroup

Workgroup works by parsing messages against a reference profile (or specification). The default setting is to parse against the HL7 version specified in the Version ID field of the MSH segment. However, you can also set the reference profile manually, as follows:

  1. In the main menu, select Tools → Options.
  2. Open the Reference Profile tab.
    Workgroup_Options_ReferenceProfile
  3. Make sure the Use Reference Profile checkbox is checked.
  4. In the Profiles tab, select a profile. Click OK. This will change the reference profile used to parse logs loaded into Workgroup.

The default profile library is in %AllUsersProfile%\Application Data\Caristix\Common\Library\library.cxl. If you want to load an alternate profile library, click the Browse button.

Settings

Make data filters case-sensitive

You can make your data filter queries case-sensitive.  This is helpful when you need to identify data that might have been entered in all caps (JOHN SMITH) instead of title case (John Smith).

Add metadata to result file

This option generates extra metadata when you save the resulting messages. The metadata records the filters, sorts, and file-source information.

Automatically apply changes

By default, Search and Filter Messages will automatically apply changes that you make on filters. If you uncheck this option, changes to filters will only be applied when clicking on the Apply Changes button.

Find and Replace

You can find and replace values in your messages. The Use filters option lets you find and replace within a field.

Find

  1. Click TOOLS → Find.
  2. Enter a value, for example: App-32.
  3. Option: select Use Filters to limit the search to a specific field.
  4. Option: Match case. If this option is checked, App-32 will not match apP-32.
  5. Option: Search up. If this option is checked, the search finds the previous occurrence of the value relative to the current cursor position.
  6. Option: Use Regex. Use .NET regular expression syntax to build search patterns. For advanced users with programming backgrounds.
  7. Click Find Next to reach the next occurrence of the specified value relative to the current cursor position.

Replace

You can also use the Replace tab and specify a replacement value:

  • Click Replace to replace currently highlighted occurrence
  • Click Replace All to replace all occurrences of that value

Workgroup_FindAndReplace

Validate Messages

The Message Validation tool lets you compare an HL7 log against a profile in order to flag conformance gaps. This is useful when you need to troubleshoot data flow in a live interface that has been documented in Caristix Workgroup.

  1. From the Main Menu, click HL7 MESSAGING, Validate… A new Message Validation window appears.
  2. [Optional] You can choose the profile against which your messages will be validated. Otherwise, the default profile will be used.
  3. Click File, Open messages. A new window appears.

    Workgroup_HL7MessageValidation_OpenMessages

  4. Click Add… and navigate to the log file you want to add. Click Open in the file folder pane.
  5. In the Open log files window, click the Next button. The messages load in the Message Validation window.

    Workgroup_HL7MessageValidation

  6. The Warnings pane displays conformance gaps flagged by the application.

From the Message Validation tool, you can right-click any message and open the Message Editor tool, or view the Message Definition.

Play and Route Messages

Introducing Listener and Router Functionality in Caristix Workgroup

Workgroup includes Message Player, a utility you can use to send and receive HL7 messages. A few uses for Message Player:

  • Validate network connectivity between systems
  • Simulate a system receiving messages from your sending system
  • Simulate a system sending messages to your receiving system
  • And much, much more…

The main features are:

Workgroup_MessagePlayer

Play (Send) Messages

Router Functionality

You can send HL7 messages stored in flat files to another system. To send messages:

  • Add the file(s) containing messages you want to play to the playlist
    • In the playlist section, click Add…
    • Pick the file(s) you want to play
    • Click Open
  • Play the messages
    • Click the Play button
    • Provide the IP address and port of the system listening to the messages (see the configuration section for more details)
    • Click OK
  • Click Stop at any time to interrupt transmission.

The router will send HL7 messages contained in playlist file(s). Messages will be sent one at a time, with a wait for acknowledgment (ACK/NACK) between messages.
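The send-and-wait loop can be sketched in Python. HL7 over TCP conventionally uses MLLP framing (a 0x0B start byte and a 0x1C 0x0D trailer); this is a simplified illustration, not Workgroup's implementation, and real playback also needs NACK and retry handling.

```python
import socket

SB, EB, CR = b"\x0b", b"\x1c", b"\x0d"  # MLLP start block / end block / CR

def play(messages, host, port):
    """Send each message in an MLLP envelope; wait for the ACK/NACK
    response before sending the next one."""
    acks = []
    with socket.create_connection((host, port)) as sock:
        for msg in messages:
            sock.sendall(SB + msg.encode("ascii") + EB + CR)
            acks.append(sock.recv(65536))  # blocks until a response arrives
    return acks
```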

Configuration

The Play Configuration Panel

Unless deactivated, the Play configuration panel appears each time you click the Play button.
You can also access the panel by clicking the gear icon on the upper-right corner of the main window, then selecting the Play tab.

The configuration panel contains 2 items:

  • IP Address: This is the IP address of the system you want to send messages to. The IP address should be a series of numbers (example: “192.168.123.123”). If you want to send messages to your local host, provide your IP address (localhost 127.0.0.1 is not supported).
  • Port: This is the port the receiving system is listening to. Make sure that firewalls are configured correctly in order to enable Message Player to establish a TCP connection to the destination server on that port. Contact your organization’s system administrator for help with this task.

Record (Receive and Store) Messages

Listener Functionality

You can receive HL7 messages from a system and store them in flat files. To record messages:

  • Click the Record button
  • Provide your IP address and the port you want to listen on (see the configuration section for more details)
  • Choose how you want the messages to be stored (file split mode)
  • Click OK
  • Enter the file name and file type
  • Click Save

The recording starts. Click Stop at any time to interrupt recording.

The router will listen for HL7 messages and store them in files based on the split mode you select. For each message received, an acknowledgment (ACK/NACK) will be sent as a response.
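The listener side can be sketched under the same MLLP framing assumption. This is a single-connection simplification, not Workgroup's implementation; a real listener handles multiple connections, partial frames, and properly constructed ACK messages.

```python
import socket

SB, EB, CR = b"\x0b", b"\x1c", b"\x0d"  # MLLP framing bytes

def record(host, port, count):
    """Accept one connection, unwrap `count` MLLP-framed messages,
    and send a placeholder acknowledgment for each one received."""
    received = []
    with socket.socket() as srv:
        srv.bind((host, port))
        srv.listen(1)
        conn, _ = srv.accept()
        with conn:
            buf = b""
            while len(received) < count:
                buf += conn.recv(65536)
                while SB in buf and EB + CR in buf:
                    start = buf.index(SB) + 1
                    end = buf.index(EB + CR)
                    received.append(buf[start:end].decode("ascii"))
                    buf = buf[end + 2:]
                    conn.sendall(SB + b"MSH|ACK" + EB + CR)  # simplistic ACK
    return received
```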

Configuration

Configure the Record Panel

Unless deactivated, the Record configuration panel appears each time you click the Record button.

You can also access the panel by clicking the gear icon on the upper-right corner of the main window, then selecting the Record tab.

The configuration panel contains 3 items:

  • IP Address: This is your IP address. The IP address should look like a series of numbers (example: “192.168.123.123”). Localhost 127.0.0.1 is not supported.
  • Port: This is the port you want to listen to. Make sure that firewalls are configured so that the sending system can establish a TCP connection to Message Player on that port. Contact your organization’s system administrator for help with this.

File Split Mode

  • All in one file: No split occurs. This is the optimal setting when you need to record a small number of messages.
  • Size: Split occurs when the file reaches a preset size (in MB). A number will be appended to the file name.
  • Message count: Split occurs when the file contains a preset number of messages. A number will be appended to the file name.

XML Document

Caristix Workgroup comes with several features that help you work with XML documents (e.g., CDA and CCD documents).

De-identifying XML Document

Overview

To understand the de-identification concept, please read the following chapters:

De-identification in Workgroup works as follows:

XML De-identification overview

Load XML Documents

  • On the menu bar, go to FILE, Open, Messages…
  • In the open dialog, select XML documents that you want de-identified.
    • Click the “Browse My Computer” link to select XML documents from your computer’s file system.

The selected XML documents will be loaded in the Message Example tab.  The Original pane displays the XML documents you loaded while the De-identified pane displays the de-identified XML documents. The split screens scroll synchronously so that the data displayed is mirrored in the original and de-identified panes.

The fields set for de-identification are highlighted in red for easy visibility.

Add & Edit De-identification Rules

On the left side of the screen are the de-identification rules listed under the Fields tab.

Fields Tab

  • Checkbox: The checkmark indicates an active rule. Uncheck to deactivate a rule.
  • X-PATH: Enter the X-Path of the element/attribute’s value to be de-identified.
  • ID: Sets the primary key.  Check it for any field uniquely identifying the patient.
    • If no primary key is set, a new patient identity will be created for each document.

Value Generator Tab

Add or Remove Rules

To add a new de-identification rule

  • Click the plus sign at the bottom of the list of rules. A new line will appear.

-Or-

  • Navigate through your XML Document in the Original pane and right-click the element/attribute’s value that you want to de-identify. Next, click the De-identify field action.

To remove a rule, click the trashcan at the end of the line.

Open & Save De-identification Rules

To re-use existing de-identification rules

  • On the menu bar, go to FILE, Open, De-id Rules…
  • In the open dialog, select the XML De-identification Rules File (.cxdx) to be loaded.

-Or-

  • In the de-identification rules list, right-click and click the Open De-identification Rules… action.

To save your de-identification rules

  • On the menu bar, go to FILE, Save, De-id Rules…
  • In the open dialog, select the destination folder and file name of your new XML De-identification Rules File.

-Or-

  • In the de-identification rules list, right-click and click the Save De-identification Rules… action.

View Example and De-identify XML Documents

Once you have created and configured all the rules applicable to the XML documents to be de-identified, click View Example at the bottom of the left hand pane. A preview of the de-identified documents will appear. Scroll through the documents in the viewing pane to verify the potential results of the de-identification process.

Once reviewed and after applying any changes:

  • Click De-identify at the bottom of the left hand screen to the right of View Example to process and save the de-identified XML documents.
  • A Select Folder dialog box will appear.
  • Select the destination folder for your de-identified XML documents.
  • Click OK.
  • A window will open, tracking the progress of the process.

Search and Filter XML Document

Overview

Caristix Workgroup helps interface analysts, engineers, and technical support team members quickly find data needed for interfacing tasks and customer service.

Search and Filter in Workgroup works as follows:

XML Search and Filter Overview

Load XML Documents

  •  On the menu bar, go to FILE, Open, Messages…
  • In the open dialog, select XML documents in which you want to search.
    • Click the “Browse My Computer” link to select XML documents from your computer’s file system.

The selected XML documents will be loaded in the Messages tab.

The fields that match your search and filter rules are highlighted in red for easy visibility.

Add & Edit Search and Filter Rules

On the right side of the screen are the search and filter rules listed under the Data Filters tab.

Data Filters Tab

  • Checkbox: The checkmark indicates an active rule. Uncheck to deactivate a rule.
  • X-PATH: Enter the X-Path of the element/attribute’s value that you want your criteria to match.
  • Is/Is Not: Select “IS” if you want your results to INCLUDE the data you’ll be filtering. Alternatively, select “IS NOT” if you want your results to EXCLUDE the data you’ll be filtering.
  • OPERATOR: Select “=” , “like”, “present”, or “empty”, depending on the query type. See Data Filter Operators for more detail.
  • CRITERIA: Type the data you want to filter.

Basic/Advanced Mode

While editing your filters, you can switch between Basic and Advanced Mode. Advanced Mode shows you advanced settings for your filters. These settings help you to construct more complex filters using AND/OR operators and parentheses for nesting. Otherwise, each filter will be applied one after the other.

If your filters contain advanced settings and you switch back to Basic Mode, those settings will be lost.

Add or Remove Rules

To add a new search and filter rule

  • Click the plus sign at the bottom of the list of rules. A new line will appear.

-Or-

  • Navigate through your XML Document in the Messages tab and right-click the element/attribute’s value that you want to filter. Next, click the Add Data Filter action.

To remove a rule, click the trashcan at the end of the line.

Open & Save Search and Filter Rules

To re-use existing search and filter rules

  • On the menu bar, go to FILEOpenSearch and Filter Rules…
  • In the open dialog, select the XML Search and Filter Rules File (.cxfx) to be loaded.

-Or-

  • In the search and filter rules list, right-click and click the Open Search and Filter Rules… action.

To save your search and filter rules

  • On the menu bar, go to FILESaveSearch and Filter Rules…
  • In the open dialog, select the destination folder and file name of your new XML Search and Filter Rules File.

-Or-

  • In the search and filter rules list, right-click and click the Save Search and Filter Rules… action.

Edit and Validate XML Document

Overview

The Message Editor tool lets you edit content and compare an XML document against a profile in order to flag conformance gaps. This is useful when you need to troubleshoot data flow in a live interface that has been documented in Caristix Workgroup.

Message Editor in Workgroup works as follows:

XML Editor Overview

Load XML Document

  • On the menu bar, go to FILEOpen…
  • In the open dialog, select the XML document with which you want to work.
    • Click the “Browse My Computer” link to select an XML document from your computer’s file system.

The selected XML document will be loaded in the Message tab.

Select a Profile for Validation

Using a profile in the message editor will enable the message validation feature. The message validation will compare the XML document against the profile in order to flag conformance gaps. Such gaps could come from:

  • Invalid schema structure
  • Invalid schema validation
  • Invalid schematron validation

Edit Attributes and Values

At the right side of the message tab, you will be able to edit the selected node’s attributes or content.

Using a profile will allow the message editor to provide the list of allowed attribute names and values.

View Conformance Gaps

If enabled, the Validation tab displays conformance gaps. The tool-tip provides detailed information about each error.

Double-click a line to navigate to the error in your XML document.

X-Path and Namespaces

What’s an X-Path

X-Path is a language that describes a way to locate and process items in XML documents by using an addressing syntax based on a path through the document’s logical structure or hierarchy. X-Path uses path expressions to select nodes or node-sets in an XML document.

Using the following XML document:

<item>
   <book>
      <title>Cheaper by the Dozen</title>
      <number type="isbn">1568491379</number>
      <author>
         <name>John Doe</name>
      </author>
   </book>
   <note>
      <p>This is a funny book!</p>
      <author>
         <name>Jake McEvoy</name>
      </author>
   </note>
</item>

You can use the X-Path expression “/item/book/author/name” to select the element

<item>
   <book>
    …
      <author>
         <name>John Doe</name>
       …
</item>

And the expression “/item/book/number/@type” to select the attribute type=”isbn”

<item>
   <book>
   …
      <number type="isbn">1568491379</number>
      …
</item>

Absolute vs relative X-Path

An absolute X-Path uses the complete path from the root element to the desired element (item > book > author > name). But if you’d like to select both the book’s author and the note’s author using a single X-Path query, you’ll have to use the relative X-Path syntax “//author/name”.

<item>
   …
   <author>
      <name>John Doe</name>
   </author>
   …
   <author>
      <name>Jake McEvoy</name>
   </author>
   …
</item>

A relative X-Path is a way to select an element no matter its location in the XML document.
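The selections above can be reproduced with Python's standard ElementTree module, which supports a subset of XPath (paths here are relative to the root element):

```python
import xml.etree.ElementTree as ET

doc = """<item>
  <book>
    <title>Cheaper by the Dozen</title>
    <number type="isbn">1568491379</number>
    <author><name>John Doe</name></author>
  </book>
  <note>
    <p>This is a funny book!</p>
    <author><name>Jake McEvoy</name></author>
  </note>
</item>"""
root = ET.fromstring(doc)

# Absolute-style path (relative to <item>): the book's author name
print(root.find("book/author/name").text)     # John Doe
# Attribute access: the @type of /item/book/number
print(root.find("book/number").get("type"))   # isbn
# Relative path //author/name: both authors, wherever they appear
print([e.text for e in root.findall(".//author/name")])  # ['John Doe', 'Jake McEvoy']
```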

Namespaces

XML namespaces are used for providing uniquely named elements and attributes in an XML document. An XML instance may contain element or attribute names from more than one XML vocabulary. If each vocabulary is given a namespace, the ambiguity between identically named elements or attributes can be resolved. In the following example, the prefix “lib” is used for the “library” vocabulary, and the “rev” prefix is used for the “review” vocabulary.

<item>
   <book xmlns:lib="urn:vocabulary.library">
      <title>Cheaper by the Dozen</title>
      <number type="isbn">1568491379</number>
      <lib:author>
         <lib:name>John Doe</lib:name>
      </lib:author>
   </book>
   <note xmlns:rev="urn:vocabulary.review">
      <p>This is a funny book!</p>
      <rev:author>
         <rev:name>Jake McEvoy</rev:name>
      </rev:author>
   </note>
</item>

X-Path and Namespaces

When a namespace is used in an XML document, you will have to consider the qualified name in an X-Path query to get the desired element. A qualified name contains the namespace-prefix and the name of the element or attribute.

Using the X-Path “//lib:author/lib:name”, you will only select the name element corresponding to the “library vocabulary”. It won’t select the “review’s author”.

<item>
   <book xmlns:lib="urn:vocabulary.library">
      <title>Cheaper by the Dozen</title>
      <number type="isbn">1568491379</number>
      <lib:author>
         <lib:name>John Doe</lib:name>
      </lib:author>
   </book>
   <note xmlns:rev="urn:vocabulary.review">
      <p>This is a funny book!</p>
      <rev:author>
         <rev:name>Jake McEvoy</rev:name>
      </rev:author>
   </note>
</item>

And, you can’t just ignore the prefix and use “//author/name”, because it would not match an existing element. There is a workaround explained later.

Default namespace

Sometimes, documents contain a declaration of one or more “default namespace”. A default namespace is declared without any prefix (xmlns=”…”, instead of xmlns:pfx=”…”). The scope of a default namespace declaration extends from the beginning of the start-tag in which it appears to the end of the corresponding end-tag, excluding the scope of any inner default namespace declarations. A default namespace declaration applies to all unprefixed element names within its scope.

<item>
   <book xmlns="urn:vocabulary.library">
      <title>Cheaper by the Dozen</title>
      <number type="isbn">1568491379</number>
      <author>
         <name>John Doe</name>
      </author>
   </book>
   <note xmlns="urn:vocabulary.review">
      <p>This is a funny book!</p>
      <author>
         <name>Jake McEvoy</name>
      </author>
   </note>
</item>

In this particular case, no prefix is used to explicitly distinguish identically named elements or attributes. But only prefixes mapped to namespaces can be used in X-Path queries. This means that if you want to query against a namespace in an XML document, even if it is the default namespace, you need to define a prefix for it. ref: https://docs.microsoft.com/en-us/dotnet/standard/data/xml/xpath-queries-and-namespaces

That’s why the X-Path “//author/name” would not return any value. A prefix must be bound to avoid ambiguity when querying documents in which some nodes are in no namespace and others are in a default namespace.
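Python's ElementTree illustrates the same rule: an unprefixed query finds nothing, and the prefixes you bind for querying (the names lib and rev below are our choice) need not appear in the document at all.

```python
import xml.etree.ElementTree as ET

doc = """<item>
  <book xmlns="urn:vocabulary.library">
    <author><name>John Doe</name></author>
  </book>
  <note xmlns="urn:vocabulary.review">
    <author><name>Jake McEvoy</name></author>
  </note>
</item>"""
root = ET.fromstring(doc)

# Unprefixed query finds nothing: the elements live in default namespaces
print(root.findall(".//author/name"))  # []

# Bind prefixes of our choosing to the two default namespaces
ns = {"lib": "urn:vocabulary.library", "rev": "urn:vocabulary.review"}
print(root.find(".//lib:author/lib:name", ns).text)  # John Doe
print(root.find(".//rev:author/rev:name", ns).text)  # Jake McEvoy
```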

The software automatically adds a “temporary” namespace prefix for each default namespace declared in your documents. Those temporary prefixes will be ns1, ns2, ns3, and so on. So, after loading the XML document in Caristix software, you will see something like:

<item>
   <ns1:book xmlns="urn:vocabulary.library">
      <ns1:title>Cheaper by the Dozen</ns1:title>
      <ns1:number type="isbn">1568491379</ns1:number>
      <ns1:author>
         <ns1:name>John Doe</ns1:name>
      </ns1:author>
   </ns1:book>
   <ns2:note xmlns="urn:vocabulary.review">
      <ns2:p>This is a funny book!</ns2:p>
      <ns2:author>
         <ns2:name>Jake McEvoy</ns2:name>
      </ns2:author>
   </ns2:note>
</item>

The “ns1” is the temporary namespace prefix for the “urn:vocabulary.library” namespace and “ns2” is the temporary namespace prefix for the “urn:vocabulary.review” namespace. That way, you can select “//ns1:author/ns1:name” and “//ns2:author/ns2:name” without ambiguity.

But, what if I want to select both in a single request?

Take a look at the X-Path syntax references to see what can be done:
https://www.w3schools.com/xml/xpath_intro.asp
https://devhints.io/xpath

Using those references, you can use built-in functions to build an X-Path that matches both elements: “//*[local-name()='author']/*[local-name()='name']”. In this particular case, the local-name() function returns the element name without the prefix.

<item>
   <ns1:book xmlns="urn:vocabulary.library">
      <ns1:title>Cheaper by the Dozen</ns1:title>
      <ns1:number type="isbn">1568491379</ns1:number>
      <ns1:author>
         <ns1:name>John Doe</ns1:name>
      </ns1:author>
   </ns1:book>
   <ns2:note xmlns="urn:vocabulary.review">
      <ns2:p>This is a <ns2:i>funny</ns2:i> book!</ns2:p>
      <ns2:author>
         <ns2:name>Jake McEvoy</ns2:name>
      </ns2:author>
   </ns2:note>
</item>
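Note that Python's standard ElementTree XPath subset does not support local-name() (full XPath engines such as lxml's .xpath() do). The same idea, matching on the element name regardless of namespace, can be sketched with the standard library by stripping the "{namespace}" part of each tag:

```python
import xml.etree.ElementTree as ET

doc = """<item>
  <book xmlns="urn:vocabulary.library">
    <author><name>John Doe</name></author>
  </book>
  <note xmlns="urn:vocabulary.review">
    <author><name>Jake McEvoy</name></author>
  </note>
</item>"""
root = ET.fromstring(doc)

def local_name(elem):
    """'{urn:vocabulary.library}name' -> 'name'"""
    return elem.tag.split("}")[-1]

# Equivalent of //*[local-name()='author']/*[local-name()='name']
names = [child.text
         for author in root.iter() if local_name(author) == "author"
         for child in author if local_name(child) == "name"]
print(names)  # ['John Doe', 'Jake McEvoy']
```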

Cross Format De-Identification

Cross Format De-Identification introduction

It is now possible to de-identify PHI consistently across both HL7 and XML documents. Three simple steps are required to begin.

  1. Use the same dictionary.
      Create a new dictionary:
    1. In both applications, click TOOLS → Options… → Settings → Enable Re-apply rules and replacement data across multiple files.
    2. For a dictionary called CrossFormatDeid, enter the same file name in both applications: C:\ProgramData\Caristix\Caristix Workgroup\Temp\CrossFormatDeid.dic
  2. Make sure the ID checkbox is set in both applications. It is used by the dictionary to uniquely identify patients.
      Scroll right in the field section in Messaging v2 and in the x-path section in Messaging v3 to find the ID checkboxes.
  3. The new rule property ‘Name’ is used to link rules for HL7 with rules for XML.
      To de-identify an XML field the same way it was de-identified in HL7, use the same rule name for both rules.
    1. For instance, you can name the rule for HL7 PID.5.2: Patient Family Name. Use the same name for the XML //*[local-name()='name']/*[local-name()='family'] rule. This way, the fake patient name will be reused from the dictionary.
    2. For the same de-identification, write the same name on both rules.
    3. Rules without a name, or without a twin in the other format, will not be crossed.

Test Scenarios

Testing is conducted at different phases in the interface lifecycle: during configuration and development; during the formal validation phase; and during maintenance.

You run tests to avoid introducing new problems: you check and test your work to make sure you are not injecting errors. This is true both during interface development or configuration and while in maintenance mode. This testing helps you determine whether or not the interface makes sense and meets your requirements.

Workgroup is designed to help interface analysts and engineers validate HL7 interfaces. The software provides the following features and functionality:

  • Test suite structure to manage test plans
  • Ability to load conformance profiles
  • Automated creation of test messages
  • Database integration
  • Automated and manual validation
  • Inbound and outbound message validation
  • .exe or batch file validation

Workgroup facilitates testing in a number of ways including:

Suite

Definition

Suites are analogous to a test plan. A suite contains all of the test scenarios and workflows that you will run in order to validate that an interface works.  A suite manages a collection of test scenarios (test cases). 

Suites are files with the .cxs extension and are represented in the document library by the ScenarioSuite_32 icon.

Create a new suite

  1. In the application, go to the main menu and click TEST, New 
    A new Scenario Editor window opens.
  2. In the Documentation tab, type a name for the suite. This name will be used as the file name.
  3. Optionally, add a suite description, a list of requirements the suite covers, or any useful notes.
  4. Save the file by clicking FILE, Save
    An empty suite is now created.

Next

Configuration & Results

Configuring a Suite

On the Configuration tab, you set timing and execution parameters for your suite.

These settings let you run scenarios contained in a suite several times, in a loop. For instance,  you can set a scenario to execute 100 times with 100 different patient names.

  • Timing: Specify the wait time before and after execution.  Wait times give a sending or receiving system enough time to initiate. They also add a pause between suite executions.
  • Instantiate variables:  Check this box to populate variables with different test values at the Suite level. For instance, if you need to execute the test with 100 different patient names, check this box.

Results

After the scenario suite has been executed once, a new tab will be displayed (Results). The Results tab contains the detailed information about what was executed for any specific execution. If variables were used in configuration or validation, you will see their instantiated values.

Select a result to see the detailed information. You can also right-click a result to perform actions such as:

  • Re-Run: The scenario suite will be re-executed using the exact same values used for the previous execution.
  • Create Scenarios from Result: The scenario results will be converted into new scenarios, using the exact same values used for the execution.
  • Save as New Scenario Suite: Save a new Scenario Suite containing the scenarios, using the exact same values used for the execution.

Configuration & Results

Configuring a Scenario

On the Configuration tab, set timing and execution parameters at the Scenario level.

These settings let you run tests several times, in a loop.

  • Timing: Specify the wait time before and after execution.  Wait times give a sending or receiving system enough time to initiate. They also add a pause between scenario executions.
  • Execution: Specify the number of times to execute the scenario. 
  • Execution probability: Specify the expected execution probability of a scenario. This feature enables your validation to skip the scenario from time to time. For instance, at 50% probability over 10 executions, the scenario will run about 5 times when the suite is executed.  Use this feature when some of the steps in your scenario are optional.  You still need to validate both cases (when the optional step is executed and when it isn’t).  Adding execution probability allows you to validate both cases with only one scenario.
  • Instantiate variables:  Check this box to populate variables with different test values at the Scenario level. For instance, if you need to execute the test with 100 different patient names, check this box.

Results

After the scenario has been executed once, a new tab will be displayed (Results). The Results tab contains the detailed information about what was executed for any specific execution. If variables were used in configuration or validation, you will see their instantiated values.

Select a result to see the detailed information. You can also right-click a result to perform actions such as:

  • Re-Run: The scenario will be re-executed using the exact same values used for the previous execution.
  • Create Scenario from Result: The scenario result will be converted into a new scenario, using the exact same values used for the execution.
  • Save as New Scenario Suite: Save a new Scenario Suite containing the scenario, using the exact same values used for the execution.

Action

Definition

A scenario consists of a series of actions.  An action represents a single step in a specific workflow — for instance, the arrival of a patient.

Create a new action

  1. Select a suite, then a scenario.
  2. Right-click and select Add new Action.
    A new node is created at the end of the scenario. Drag and drop the new action to the right location if needed.
  3. In Documentation tab, type a name for the action.
  4. Optionally, add an action description, a list of requirements the action covers, or any useful note.

Next

Configuration & Results

Configuring an Action

On the Configuration tab, set timing and execution parameters at the Action level.

These settings let you run tests several times in a loop.

  • Timing: Specify the wait time before and after execution.  Wait times give a sending or receiving system enough time to initiate. They also add a pause between action executions.
  • Execution: Specify the number of times to execute the action. 
  • Execution probability: Specify the expected execution probability of an action. This feature enables your validation to skip the action from time to time. For instance, at 50% probability over 10 executions, the action will run about 5 times when the suite is executed.  Use this feature when some of the steps in your action are optional.  You still need to validate both cases (when the optional step is executed and when it isn’t).  Adding execution probability allows you to validate both cases with only one action.
  • Instantiate variables:  Check this box to populate variables with different test values at the Action level. For instance, if you need to execute the test with 100 different patient names, check this box.

 

Results

After the action has been executed once, a new tab will be displayed (Results). The Results tab contains the detailed information about what was executed for any specific execution. If variables were used in configuration or validation, you will see their instantiated values.

Select a result to see the detailed information. You can also right-click a result to perform actions such as:

  • Re-Run: The action will be re-executed using the exact same values used for the previous execution.
  • Create Scenario from Result: The action result will be converted into a new action, using the exact same values used for the execution.
  • Save as New Scenario Suite: Save a new Scenario Suite containing the action, using the exact same values used for the execution.

Task

Definition

Actions are made up of tasks. A task represents the smallest unit of work contained in a scenario. It could be an HL7 message exchange (an admit/visit notification), a database interaction (a query to the patient table), or a manual step requiring the user to interact with a 3rd party application.

Your test cases are based on a sequence of tasks.

Task types

There are several types of tasks. Each task type has its own behavior.

  • Send HL7 Message: Simulate a system sending an HL7 message to a host on a specific TCP port. The HL7 message is defined directly in the task. Validation can be done on the acknowledgment messages that are sent back.
  • Send HL7 File: Simulate a system sending a file of HL7 messages to a host on a specific TCP port. Validation can be done on the acknowledgment messages that are sent back.
  • Receive HL7 Message: Simulate a receiving system listening for HL7 messages on a specific TCP port. Validation can be done on the messages received.
  • Read HL7 File: Simulate a receiving system reading HL7 messages from specific files. Validation can be done on the messages read.
  • Query Database: Query a database and validate the result. It could be a clinical application database or the internal integration engine database.
  • Execute Web-Service: An Execute Web Service task allows you to interact with a Web Service during a test.
  • Execute Command: Interact with other applications using command line tasks. For instance, call a cmd script to delete files or prepare content for subsequent tasks.
  • Execute Manual Task: Manual tasks pause the execution of the scenario and wait for manual input from the user. It could be an interaction with a 3rd party application or just a way to pause the execution so extra manual validation can be done.
  • JavaScript Task: JavaScript tasks run JavaScript code that is provided at the configuration stage. Additionally, JavaScript tasks can make use of our JavaScript API to access Caristix resources.

Results Tab

After the task has been executed once, a new tab will be displayed (Results). The Results tab contains the detailed information about what was executed for any specific execution. If variables were used in configuration or validation, you will see their instantiated values.

Select a result to see the detailed information. You can also right-click a result to perform actions such as:

  • Re-Run: The task will be re-executed using the exact same values used for the previous execution.
  • Create Task from Result: The task result will be converted into a new task, using the exact same values used for the execution.
  • Save as New Scenario Suite: Save a new Scenario Suite containing the task, using the exact same values used for the execution. 

Next

Fake Execution

Test Scenario JavaScript Engine API

The Javascript engine allows you to inject custom Javascript at different steps of a Task execution.

Fake Execution

You can toggle the “Fake Execution” mode on each task, which executes your custom Javascript code instead of performing the task as configured. That way, you can mock, for instance, a web service result to quickly develop your test cases, even if the real web service that would be used in tests is not ready to be used yet.

To use Fake Execution, call the “callback(result)” method, providing a string containing the fake result you want your task to have.

For each task type, a default fake execution script is provided. The default scripts are as follows.

Send Message Task / Send File Task

This script fakes the task’s execution as if the messages were successfully sent, and the configured connection endpoint returned an HL7-ACK.

callback(`MSH|^~\&|GHH LAB, INC.|GOOD HEALTH HOSPITAL|ADT1|GOOD HEALTH HOSPITAL|20210305104622||ACK^A01^ACK|ACK-MSG00001|T|2.5.1
MSA|AA|MSG00001
`);
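
As a sketch, a validation could then extract the MSA-1 acknowledgment code from such an ACK. The snippet below is plain JavaScript (it does not use the Caristix API) and hard-codes a sample ACK string for illustration:

```javascript
// Plain-JavaScript sketch (not the Caristix API): pull the MSA-1
// acknowledgment code out of an ACK like the one faked above.
const ack = 'MSH|^~\\&|GHH LAB, INC.|GOOD HEALTH HOSPITAL|ADT1|GOOD HEALTH HOSPITAL|20210305104622||ACK^A01^ACK|ACK-MSG00001|T|2.5.1\n' +
            'MSA|AA|MSG00001';

// Locate the MSA segment, then read its first field
// (AA = accepted, AE = application error, AR = rejected).
const msaSegment = ack.split('\n').find(seg => seg.startsWith('MSA'));
const ackCode = msaSegment.split('|')[1];
console.log(ackCode); // "AA"
```

A real validation rule would apply the same check to the acknowledgment actually returned by the receiving system.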

Receive Message Task / Read File Task

This script fakes the task’s execution as if it received/read the HL7v2 messages provided in the callback method.

callback(`MSH|^~\&|ADT1|GOOD HEALTH HOSPITAL|GHH LAB, INC.|GOOD HEALTH HOSPITAL|198808181126|SECURITY|ADT^A01^ADT_A01|MSG00001|T|2.5.1
EVN||200708181123||
PID|1||PATID1234^5^M11^ADT1^MR^GOOD HEALTH HOSPITAL~123456789^^^USSSA^SS||EVERYMAN^ADAM^A^III||19610615|M||2106-3|2222 HOME STREET^^GREENSBORO^NC^27401-1020
NK1|1|JONES^BARBARA^K|SPO^Spouse^HL70063||||NK^NEXT OF KIN
PV1|1|I|2000^2012^01||||004777^ATTEND^AARON^A|||SUR||||7|A0|
`);

Query Database Task

This script fakes a database query result. A JSON array with 2 entries is provided as the result, mocking the following dataset:

callback(`[
{
"column1": "value 1",
"column2": "value 2"
},
{
"column1": "value 3",
"column2": "value 4"
}
]`);

Column 1    Column 2
Value 1     Value 2
Value 3     Value 4
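
In a JavaScript validation, a result like this could be parsed with JSON.parse and checked row by row. A minimal sketch in plain JavaScript, assuming the task result holds the JSON string above:

```javascript
// Parse the faked query result (a JSON array of row objects)
// and verify a couple of values, as a validation rule might.
const taskResult = '[' +
  '{ "column1": "value 1", "column2": "value 2" },' +
  '{ "column1": "value 3", "column2": "value 4" }' +
']';

const rows = JSON.parse(taskResult);
console.log(rows.length);       // 2
console.log(rows[1].column2);   // "value 4"
```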

Web Service Task

This script fakes the execution of the task as if an HTTP result were returned. A JSON object is provided, allowing you to mock the HTTP response status code (200, 404, 500) and the response body. The following script returns an OK – 200 status code with a JSON value in the response body.

callback(`{
"responseStatusCode": "200",
"responseBody": {
"resourceType": "operationOutcome"
}
}`);

HTTP Response Status: 200 (OK)
HTTP Body: { "resourceType": "operationOutcome" }
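
A validation could read the mocked status code and body the same way. This is a plain-JavaScript sketch, assuming the task result is the JSON object above:

```javascript
// Inspect the faked HTTP response: status code and body.
const fakeResult = JSON.parse('{' +
  '"responseStatusCode": "200",' +
  '"responseBody": { "resourceType": "operationOutcome" }' +
'}');

// Status codes arrive as strings in the mock, so convert before comparing.
const isOk = Number(fakeResult.responseStatusCode) === 200;
console.log(isOk);                                  // true
console.log(fakeResult.responseBody.resourceType);  // "operationOutcome"
```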

JavaScript Task

Definition

A JavaScript task executes JavaScript code using our JavaScript API.

Creating a new JavaScript task

Right-click the name of the parent Action the new task will be created in, and select Add New Task –> Execute JavaScript Task.

Create JavaScript Task

A new Task appears under the parent Action. Edit the task name as needed. Drag and drop to change the task order.

Configuring a JavaScript task

Any valid JavaScript can be executed in this task. Simply add the code you wish to execute to the code textbox in the configuration tab. You can also use our JavaScript API to manipulate Caristix-related resources.

To return a result for validation, use the callback() method. The callback() method takes a string as an argument and sets the value returned by the task when called.

The following is an example of a JavaScript task’s code. In the example, a GET request is sent to a public FHIR server, and the resulting bundle is returned for validation.

//Create an HTTP request using the provided HTTP GET method and full resource url,
// https://daas.caristix.com/fhir/Patient.
var request = HTTP.create('GET', 'https://daas.caristix.com/fhir_r4/Patient/');

//Add the Accept header with the value application/fhir+json to the request.
request.setHeader('Accept', 'application/fhir+json');

//Send the HTTP request.
var result = request.send();

//Obtain the HTTP result’s body – a Bundle of Patient Resources.
var body = result.body;

//Return the body.
callback(body);

Send HL7 Message

Definition

A Send HL7 Message task simulates a system sending HL7 messages to a host on a specific TCP port. The HL7 messages are defined directly in the task. Validation can be done on the acknowledgment (ACK) messages that are sent back.

Create a new “Send HL7 Message” task

  1. Right-click the name of the parent Action the new task will be created in.
  2. Select Add New Task –> Send HL7 Message
  3. New Task appears under the parent Action.
  4. Edit the task name as needed.

A new task is added at the end of the current action. Drag and drop to change the task order.

Configuring a “Send HL7 Message” task

There are several options to control message format and destinations.

  1. Set timing: Specify the wait time before and after execution. Wait times give a sending or receiving system enough time to initiate. They also add a pause between action executions.
  2. Set the message destination: Default network connections are preconfigured destinations for messages. They contain a hostname (or IP address), a port and a connection timeout. Using default network connections lets you quickly change the testing environment by updating the default network connection instead of modifying all tasks individually. More details about how to define network connections are available on the Options page.
    • If “Use default connection” is checked, the message will be sent to the host and port defined in the default sending network connection.
    • If “Use default connection” is unchecked, pick a destination for the message to be sent to from the list.
  3. Connection attempts: Set the number of times the task will try to send the message to the destination. The task will fail once all attempts are unsuccessful.
  4. Wait for response: Check if you want to wait for an ACK message after sending the message.
  5. Select the message format: Messages can be sent using the HL7-v2.x or XML encoding format.
    • HL7-v2.x format is the most popular and uses pipe (“|”) and caret (“^”) delimiters. This is the default.
  6. Sent message profile: Set the reference profile for the message to be sent. The reference profile will be used to show metadata related to the message (message structure, segment/field definition).
  7. Message to send: Enter the messages to send in the large text area.
    • If you don’t have messages available for testing, use the Generate message from Profile button to generate one. The message generated will require editing but will provide a framework to work with.
    • Messages can come directly from a variable. To do so, right-click the messages area, and select “Use Variable”. Then select the variable that generates the messages to be sent.
    • Messages can include variables which will be instantiated (filled out) at run time. For more details about variables, take a look at the Variables page.
      To add variables to the message:
      1. Move the mouse over the field content to be replaced by a variable.
      2. Right-Click and select Set Variable to Field…
      3. Select the variable and configure it if needed.
      4. Click OK.
    • Messages can also include values coming from another message in the scenario suite.
      1. Move the mouse over the field content to be replaced.
      2. Right-Click and select “Insert Criteria”.
      3. Configure the criteria field.
      4. Click Apply.

Note: If you’re using the XML format, you will need to open the XML Editor (click the Edit… button) to insert variables or edit the document.

  8. Specify the segment delimiter: By default the HL7 standard delimiter (Carriage Return) will be used, but you can select another one if needed by the receiving system.
  9. Save message sent to file: Sent messages are stored in the execution results and in the execution reports. If needed, sent messages can be stored in a file at execution time. Stored messages will be exactly what was sent, with actual instantiated values for each variable.
    • If checked, the messages will be stored in the file path provided. Variables can be used to build the file path.
    • If unchecked, the messages will not be stored on disk. This is the default.

Add validation rules

If the receiving system is configured to return message acknowledgement, each sent message would be responded to with an ACK or a NACK message. Validations can be added to the task to confirm the ACK/NACK response is as expected. Several validation types can be added:

Send HL7 File

Definition

A Send HL7 File task simulates a system sending HL7 messages from a file to a host on a specific TCP port. Validation can be done on the acknowledgment messages that are sent back.

Create a new “Send HL7 File” task

  1. Right-click the name of the parent Action the new task will be created in.
  2. Select Add New Task –> Send HL7 File
  3. New Task appears under the parent Action.
  4. Edit the task name as needed.

 A new task is added at the end of the current action. Drag and drop to change the task order.

Configuring a “Send HL7 File” task

There are several options to control where the messages in the file are sent.

    1. Set timing: Specify the wait time before and after execution. Wait times give a sending or receiving system enough time to initiate. They also add a pause between action executions.
    2. Set the message destination: Default network connections are preconfigured destinations for messages. Using default network connections lets you quickly change the testing environment by updating the default network connection instead of modifying all tasks individually. More details about how to define network connections are available on the Options page.
      • If checked, the message will be sent to the host and port defined in the default send network connection.
      • If unchecked, pick a destination for the message to be sent from the list
    3. Connection attempts: Set the number of times the task will try to send the message to the destination. The task will fail once all attempts are unsuccessful.
    4. Wait for response: Check if you want to wait for an ACK message after sending the message.
    5. Select the sent message format: Messages can be sent using the HL7-ER7 or XML encoding format.
      • HL7-ER7 format is the most popular and uses the pipe (“|”) and caret (“^”) delimiters. This is the default.
    6. File path: Enter the file containing messages to be sent. Messages must be separated by the default message delimiter.
    7. Save message to file: Sent messages are stored in the execution results and in the execution reports. If needed, sent messages can be stored in file at execution time. Messages stored will be exactly what was sent with actual instantiated values for each variable.
      • If checked, the messages will be stored in the file path provided. Variables can be used to build the file path.
      • If unchecked, the messages will not be stored on disk. This is the default.

Add validation rules

If the receiving system is configured to send back message acknowledgement, each message sent would be responded to with an ACK or a NACK message. Validations can be added to the task to confirm the ACK/NACK response is as expected. Several validation types can be added.

Receive HL7 Message

Definition

A Receive HL7 Message task simulates a receiving system listening for HL7 messages on a specific TCP port. Validation can be done on the messages received.

Create a new “Receive HL7 Message” task

  1. Right-click the name of the parent Action the new task will be created in
  2. Select Add New Task –> Receive HL7 Message
  3. New Task appears under the parent Action 
  4. Edit the task name as needed

Configuring a “Receive HL7 Message” task

There are several options to control message listening.

    1. Receive connection: Default network connections are pre-configured listeners. They contain the hostname (or local IP address) to listen to, a port and a connection timeout. Using default network connections lets you change the testing environment by updating the default network connection instead of modifying all tasks individually. More details about how to define network connections are available on the Options page.
      • If checked, the application will listen on the hostname and port defined in the default receiving network connection.
      • If unchecked, pick a receiving network connection from the list.
    2. Purge pending messages: Enable this option to receive and discard any pending messages that are waiting to be sent by the interface engine. The purge will last until the specified network connection timeout is reached without any incoming messages. Once the purge is completed, the task will start listening as usual and sending tasks will be started for the current Action. This ensures that received messages are new messages and not lingering ones stuck in the queue from a previous execution.
    3. Listen for several messages: This configuration lets the software listen to the port until it receives a set number of messages.
      • Enter the expected number of messages. At execution, once this number is reached, the test execution continues to the next task.
      • OR select the Listen until timeout option so the task will continue listening and receiving messages until the connection timeout is reached.

      In both cases, validation rules will apply to all received messages.

    4. Save received messages to file: Received messages are stored in the execution results and in the execution reports. If needed, messages can also be stored in file at execution time.
      • If checked, the messages will be stored in the file path provided. Variables can be used to build the file path.
      • If unchecked, the messages will not be stored on disk. This is the default.

Note: During test execution, Receive HL7 Message tasks will start to listen at the beginning of the parent Action so there can only be one task that listens to a specific port per Action.

Add validation rules

Validations rules can be added to confirm the received messages are as expected. Several validation types can be added.

Execute Web Service

Definition

An Execute Web Service task allows you to interact with a Web Service during a test.

Create a new “Execute Web Service” task

  1. Right-click the name of the parent Action the new task will be created in.
  2. Select Add New Task –> Execute Web Service.
  3. New Task appears under the parent Action. 
  4. Edit the task name as needed.

Configuring an “Execute Web Service” task

  1. Set timing: Specify the wait time before and after execution.  Wait times give a sending or receiving system enough time to initiate. They also add a pause between action executions.
  2. Type: Specify the Web Service protocol to be used.
    • HTTP GET – Encode the specified parameters directly in the URI.
    • HTTP POST – Enclose the specified parameters in the message’s body.
    • REST – Representational State Transfer is a software architecture style consisting of guidelines and best practices for creating scalable web services.
    • SOAP – Simple Object Access Protocol is a protocol specification for exchanging structured information in the implementation of web services in computer networks.
  3. Host: Specify where the Web Service is located (IP address and port).
    • If “Use default connection” is checked, the Web Service request will be sent to the host and port defined in the default sending network connection.
    • If “Use default connection” is unchecked, pick a destination for the Web Service request to be sent to from the list.
  4. URL: Specify the name of the Web Service request.
  5. Parameters: Using the HTTP GET/HTTP POST protocol, specify the parameters to be part of the request.
  6. Request: Using the REST protocol, specify the JSON request.
  7. SOAP Action: Using the SOAP protocol, select the action to be called. The related SOAP Envelope will be generated.
  8. SOAP Envelope: Using the SOAP protocol, specify the XML request.

 

Add validation rules

Validations rules can be added to confirm that the query result is as expected.

Query Database

Definition

A Query Database task is for querying a database and validating the result.  Examples of databases to query include a clinical application database or the internal integration engine database.

Create a new “Query Database” task

  1.  Right-click the name of the parent Action the new task will be created in
  2. Select Add New Task –> Query Database
  3. New Task appears under the parent Action.
  4. Edit the task name as needed.

Configuring a “Query Database” task

There are several options available.

  1. Set timing: Specify the wait time before and after execution.  Wait times give a sending or receiving system enough time to initiate. They also add a pause between action executions.
  2. Use default connection: Default database connections are pre-configured data sources.  Using default database connections lets you change the testing environment by updating the default database connection instead of modifying all tasks individually.  More details about how to define database connections are available on the Options page.
    • If checked, the application will use the default database connection as defined in application options.
    • If unchecked, pick a database connection from the list.
  3. Query: Enter the query to execute on the database.  Use the Query Builder to help you build the query if needed.  To parameterize the query and make it contextual to the execution, variables can be used within the query statement.

You can retrieve HL7 or XML messages from a database and perform HL7 v2.x or XML validations. To do so, your SQL Query must return only one column (the HL7 or XML message). Then, in the Validation tab, select the appropriate Validation type.

Add validation rules

Validations rules can be added to confirm the query result is as expected. Several validation types can be added:

Read HL7 File

Definition

A Read HL7 File task simulates a receiving system listening for HL7 messages in specific files. Validation can be done on the messages received.

Create a new “Read HL7 File” task

  1. Right-click the name of the parent Action the new task will be created in
  2. Select Add New Task –> Read HL7 File
  3. New Task appears under the parent Action
  4. Edit the task name as needed

A new task is added at the end of the current action.  Drag and drop to change the task order.

Configuring a “Read HL7 File” task

There are several options to configure:

  1. Set timing: Specify the wait time before and after execution.  Wait times give a sending or receiving system enough time to initiate and add a pause between action executions.
  2. Read from:
    • File:
      • Filename: Specify the HL7 file to be read.
    • Directory:
      • Directory:  Specify the directory where the HL7 Files to be read are located.
        • Recursive: If checked, the task will read HL7 messages in all sub-folders of the specified directory.
      • Filename pattern:  The filename pattern will be used by the task to find which files to read from the specified directory.
        • Example: With the filename pattern “*.hl7”, the task will read each file with the extension “.hl7” contained in the specified directory.
        • If Regular Expression is checked, the filename pattern will be evaluated using the Regular Expression rules.

Add validation rules

Validations rules can be added to confirm the received messages are as expected. Several validation types can be added.

Execute Command

Definition

An Execute Command task allows you to interact with other applications during a test using command-line commands.  For instance, call a cmd script to delete files or prepare content for subsequent tasks.

Create a new “Execute Command” task

  1.  Right-click the name of the parent Action the new task will be created in
  2. Select Add New Task –> Execute Command
  3. New Task appears under the parent Action.
  4. Edit the task name as needed.

Configuring an “Execute Command” task

  1. Set timing: Specify the wait time before and after execution.  Wait times give a sending or receiving system enough time to initiate. They also add a pause between action executions.
  2. Command line path:  Enter the full path and file name of the application to start.  Use the browse button to start an application stored on your local computer.
  3. Arguments:  Enter any argument the application might need to start.

 

Add validation rules

Validations rules can be added to confirm that the execution result is as expected.

Execute Manual Task

Definition

Manual tasks pause the execution of the scenario and wait for a manual intervention from the user. A manual task can be an interaction with a 3rd party application or simply a way to pause the execution so extra manual validation can be done.  It’s up to the user to confirm whether the task succeeds.

Create a new Manual task

  1.  Right-click the name of the parent Action the new task will be created in.
  2. Select Add New Task –> Execute Manual Task
  3. New Task appears under the parent Action.
  4. Edit the task name as needed.

Configuring a Manual task

Manual tasks are very easy to configure.  Just enter instructions to the user explaining what to do.  The instructions will be displayed  on the screen when the scenario executes this task.  Once displayed, the execution will pause and wait for feedback from the user, based on whether the task succeeds or fails.  This feedback is integrated in the test execution report.

Validation

Each time the Manual Task is executed, a popup will be shown. From there, you can mark the task as succeeded, skipped or failed. If you set the task as failed, you can use the comment area to type what went wrong. The text will be added to the task validation errors.

Validation

Definition

Validation is the fundamental test activity. Without validation, you can’t prove that an interface works unless you bring it into production and wait for defects to emerge. Validation ensures that the interface meets requirements and behaves as expected without defects.

As a testing activity, validation is a set of rules applied to a message or a task response to verify the message or the response behaves as expected. 

Validation types

  • String Comparison: Some tasks return content using a string representation. In those cases, basic string-comparison validations can be applied to the content.
  • Database: Configure a set of rules to ensure SQL Query result conforms to expected values.
  • HL7 v2.x: Configure a set of rules to ensure that HL7 message content appears and behaves as expected.
  • XML: Configure a set of rules to ensure that XML message content appears and behaves as expected.

Add a validation

  1. Select the task validation will be added to
  2. Select the Validation tab on the right
  3. Depending on the task type, different options are possible.  Please refer to the validation type to be added.

Next

JavaScript Validation

Javascript Validation

After a task is executed, you can validate the task result with different validation types. One of them is Javascript Validation, which allows you to code multiple validation rules using Javascript.

By using the callback() method, you can notify the task when an error has occurred during one of the validations. You can provide callback() with an error message as a string.

All your Validation Rules are executed independently.
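For illustration, here is a minimal rule of this kind. In a real validation rule, Workgroup supplies `context` and `callback`; the stubs below exist only to make the sketch self-contained, and the MSA content is a made-up example.

```javascript
// Stand-ins for the objects Workgroup injects at run time (illustration only):
var errors = [];
var callback = function (message) { errors.push(message); };
var context = { taskResult: "MSA|AA|01052901" };

// Validation rule: report an error unless the ACK code is AA.
if (context.taskResult.indexOf("MSA|AA") === -1) {
  callback("Expected a positive acknowledgement (MSA|AA), got: " + context.taskResult);
}
```

When a rule calls callback() with a message, the task is flagged as failed and the message appears in the task validation errors.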

Task Validation Context

The Javascript validation context object allows you to access the task result, as well as a map that is shared between different validations in the same task.

Properties

The context object contains the following properties.

taskResult: string

The result returned by the task. Using the task result, you can use the HL7, XML, or JSON parser to parse the text result as a queryable object and build sophisticated validations with it.

var result = context.taskResult;
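As a sketch of this idea (with a hypothetical JSON task result and a stubbed `context`), the result can be parsed into an object and queried:

```javascript
// Illustration only: in a real rule, `context` is provided by Workgroup.
var context = { taskResult: '{"patientId": "56782445", "status": "admitted"}' };

// Parse the string result into a queryable object...
var result = JSON.parse(context.taskResult);

// ...then build a validation on top of it.
var statusIsValid = result.status === "admitted";
```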

map: Map

A Map that is shared between different validations in the same task.

context.map.set("PID.5.1", "Smith");
callback("PID.5.1: " + context.map.get("PID.5.1"));
// PID.5.1: Smith

Map

The Map object is a collection of key-value (string-object) pairs that can be added and updated.

Methods

The Map object exposes the following methods.

set(key: string, value: object): void

Updates the key’s value to the provided value. If the key does not exist in the map, adds the key-value pair to the map.

get(key: string): object

Returns the value associated with the key in the map. If the key does not exist in the map, returns null.

context.map.set("PID.5", { family: "Smith", given: "John" });
callback("PID.5 Given: " + context.map.get("PID.5").given);
// PID.5 Given: John

has(key: string): bool

Returns whether or not the key exists in the map.

context.map.set("PID.5", { family: "Smith", given: "John" });
callback("Contains PID.5: " + context.map.has("PID.5"));
// Contains PID.5: true

String-Comparison Validation

Definition

Some tasks return content using a string representation. In those cases, basic string-comparison validations can be applied.

Configuration

Validations

  • [Enable/Disable]: If checked, the string-comparison will be included in the validation process.
  • OPERATOR:
    • Must Contain: If selected, the result string must contain the specified value.
    • Must Not Contain: If selected, the result must not contain the specified value.
    • Contains at least one of these: If selected, the result must contain at least one value among the “Contains at least one of these” rules specified in the list.
  • VALUE: The string value to compare the result to.
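The three operators behave like the following plain-JavaScript checks (a sketch with made-up values, not Workgroup's actual implementation):

```javascript
// A hypothetical string result to validate:
var result = "MSH|^~\\&|MegaReg|XYZHospC";

// Must Contain: the result must contain the value.
var mustContain = result.indexOf("MegaReg") !== -1;

// Must Not Contain: the result must not contain the value.
var mustNotContain = result.indexOf("ERROR") === -1;

// Contains at least one of these: at least one listed value must appear.
var candidates = ["SuperOE", "XYZHospC"];
var containsAtLeastOne = candidates.some(function (v) {
  return result.indexOf(v) !== -1;
});
```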

 

Sample Result to Validate

This area contains the string representation of an execution result. The default value displayed is the latest task result. You can display a previous result, if available, using the right-click menu item “Previous Results”. You can also use this text area to add validation rules: highlight the text you want as the VALUE for your validation, then right-click and select “Add Validation”.

Check-List Validation

Definition

Configure a set of rules to be validated manually by the user.

Configuration

  • [Enable/Disable]: If checked, the check-list validation will be displayed to the user when the task is executed.
  • CHECK: Description of the validation rule to be executed.
  • RELATED DOCUMENT: (Optional) A related document helping the user validate the rule. To set it, click the “Edit” button at the right of the cell, then browse to the file you want to link.

Execution

At run-time, a dialog listing validations will be shown. Users will have to set the status for each rule, and a reason if needed.

  • CHECK: The description of the validation to be executed.
  • RELATED DOCUMENT: Related document helping the user to validate the rule. Click the document name to open it.
  • STATUS: Select the status for a validation rule. Default value is Skipped.
  • REASON: (Optional). If needed, write a note about this specific execution.

Database Validation

Definition

Configure a set of rules to ensure SQL query results conform to expected values.

Configuration

Validations

  • [Enable/Disable]: If checked, the database validation will be included in the validation process.
  • AND/OR:
    • AND: All validation rules must be valid.
    • OR: At least one of the validation rules must be valid.
  • COLUMN: Select the data-set column whose values are compared with the criteria.
  • [Is/Is Not]
  • OPERATOR: See Data-Filter Operators
  • CRITERIA: The string value to compare the result to.

Sample Result to Validate

This area contains the grid representation of an execution result. The default value displayed is the latest task result. You can display a previous result, if available, using the right-click menu item “Previous Results”. You can also use this area to add validation rules: right-click the value you want as the CRITERIA for your validation, then select “Add Validation”.

HL7 v2.x Validation

Definition

HL7 v2.x Validation configures a set of rules that validate that message content is as expected. Rules are associated with message fields or components.

Segment/Field Validation Rule

Create

You can create your validation from an existing message, which simplifies the process, or manually.

To create from a message:

  1. Run the Action containing the task. This will add messages to the “Sample Message to Validate” area.
  2. Right-click the desired field in the message and select Add Validation.

To create manually:

  1. Click the Add button in the Validations grid.
  2. Specify the field that the validation will be applied to.

Configure

  • Modify the component or sub-component the validation will be applied to, if needed.
  • Change the operator, if needed.
  • Modify the Field# value if validation needs to occur on a specific field repetition.
  • If several rules are set, modify the and/or logical operator and parenthesis so the rule evaluation is done correctly.

You can edit the criteria by clicking on the cell to set a basic text value. In addition, you have access to the Variable Editor and the Criteria Editor which are opened by right-clicking on the criteria cell. From there you can insert a Variable or a Field Value criteria by specifying its location.

Enabling/Disabling a Validation Rule

It may be necessary to temporarily disable a validation rule so it is no longer evaluated during test execution. To disable a rule, uncheck the check box in the first column of the Segment-Field Validation table. To re-enable it, check the box again.

Advanced Mode

Repetition

In Advanced Mode, you can also select a specific field repetition to which the validation will apply.

Conditional Validation

You can use And, Or and Parentheses to perform more advanced conditions for your validations.

Next

Import/Export validation rules

Export Validation Rules

Validation rules can be exported to a file so they can be reused for validation in other tasks.  They are exported to files with a .csf extension.

To export all validation rules for a task:

  1. Select the task containing validation rules to export
  2. Select Validation tab
  3. Select Segment/Field Validation tab
  4. Right-click in the table content
  5. Select Export Segment/Field Validation Rules
    The file browser window opens
  6. Give the file to be created a location and a name
  7. Click OK


Import Validation Rules

Similarly, validation rules can be imported from a file so they can be reused.  By default, validation rule files have a .csf extension.

To import validation rules from a file and add them to the already existing rules:

  1. Select the task to add validation rules to
  2. Select Validation tab
  3. Select Segment/Field Validation tab
  4. Right-click in the table content
  5. Select Import Segment/Field Validation Rules
    The file browser window opens
  6. Select the file to import
  7. Click OK

Operators

Data filters and operators let you define validation rules. The operators let you build filter queries, ranging from simple to complex. The most basic operators are “is” and “=”.


These are the default operators in the Add Data Filter command, available on the right-click dropdown menu in the Last Result area.

The other data filter operators let you build sophisticated filters for analyzing HL7 data.

Operators List

Operator    Action
is    Includes messages that contain this data
is not    Excludes messages that contain this data
=    Covers messages with an exact match to this data (this is like putting quotation marks around a search engine query)
<    Less than. Covers filtering on numeric values.
<=    Less than or equal to. Covers filtering on numeric values.
>    Greater than. Covers filtering on numeric values.
>=    Greater than or equal to. Covers filtering on numeric values.
like    Covers messages that include this data. Covers filtering on numeric values.
present    Looks for the presence of a particular message building block (such as a segment, field, component, or sub-component)
empty    Looks for an unpopulated message building block (such as a segment, field, component, or sub-component)
in    Builds a filter on multiple data values in a message element rather than just one value.
in table    Checks whether the data is in a specific table of the referenced Profile.
matching regex    Uses .NET regular expression syntax to build filters. For advanced users with programming backgrounds.
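As a sketch of what a “matching regex” rule can express, the following pattern checks an HL7 timestamp with a timezone offset. Workgroup evaluates .NET regex syntax; the JavaScript equivalent below behaves the same for a simple pattern like this one, and the field value is a made-up example.

```javascript
// Hypothetical field value (e.g. MSH.7 from a message):
var fieldValue = "20060529090131-0500";

// 14 digits (YYYYMMDDHHMMSS) followed by a +/- timezone offset.
var hl7Timestamp = /^\d{14}[+-]\d{4}$/;
var matches = hl7Timestamp.test(fieldValue);
```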

Message Comparison

Definition

During the validation phase, you compare transformed messages with another set of messages you already know are valid (expected message set).  The highlighted differences will indicate any issues in your code or any missing transformations.  This is a quick and easy way to validate that your code fulfills the requirements.

Create a new Message Comparison Validation Rule

  1. Run the action containing the task the validation will be added to.
    This shows the application a sample of what the task returns, which simplifies the process.
  2. Select the task.
  3. Select the Validation tab on the right.
  4. Provide the expected message count.
  5. Select the Message Comparison tab.
    Since the task has executed once, the Last Result section is now populated.
  6. Move the mouse over the Expected Message section.
  7. Right-click and paste the expected message(s).  More than one message can be added.  During test execution, the messages received will be compared with these messages.

1-on-1 Comparison

For a more detailed view of a message pair or message differences, double-click the message pair you want to compare.  Navigate through the tree view, field by field, to see the differences.

Click on the gray zone at the bottom of the screen to view more details about each difference.  Double-clicking on a grid row helps you navigate through the differences.

Include/Exclude Fields from comparison

You may want to exclude fields from the comparison so they are simply not considered in the comparison.  This allows you to ignore differences in fields you don’t need to consider.

To exclude fields from comparison:

  1. Move your mouse pointer over the field you want to exclude
  2. Right-click and select Add to Exclude Filters

 Alternatively, you can:

  1. Click the Change Filters icon (the filter icon in the upper-right corner)
  2. Make sure Exclude is selected
  3. Click Add…
  4. Change the new line that appears to the field to be excluded
  5. Repeat steps 3 and 4 to exclude more fields

It can be easier to provide a list of fields to include instead of excluding a large number of fields.  The procedure is similar: in the Filter tab, make sure Include (instead of Exclude) is selected.

To set a large number of fields in one operation,  use the 1-on-1 message comparison screen.  For example, if you want to compare fields PID.2 to PID.13:

  1. Go to the 1-on-1 message comparison by double-clicking on a message pair
  2. Expand the PID segment so you can view all fields
  3. Select PID.2 to PID.13 holding down the SHIFT key
  4. Right-click on the selection zone and select Switch to Include Filter and Set Only This Field
  5. Close the window

The comparison will refresh using the new field set.

Hide/Show what matters

After the comparison is completed, message pairs can have one of the following statuses:

  • Changed:  Matching message found and one or more differences were found
  • Unmatched:  No matching message found
  • Identical:  Matching message found and no differences were found

On the bottom left of the screen, the  message pair count for each status is listed. 

Message pairs can be shown/hidden based on their status.  For instance, to hide identical messages:

  1. On the bottom left of the screen, select the identical message status
  2. Select Hide identical messages

Identical messages are filtered so only changed and unmatched messages are listed. 

Next

Message Conformance

Definition

The Message Conformance validation lets you compare a received HL7 message against a profile in order to flag conformance gaps.  This is useful when you need to troubleshoot data flow in a live interface where the conformance profile has been documented. 

Validations are done on:

  • Segment presence and repeatability
  • Field presence, repeatability and length
  • Component presence and length
  • Data conformance:  Validates that field values are in line with the associated code set (HL7 table)

 

Enabling Message Conformance Validation

  1. Configure the conformance profile to use in the validation process.
  2. Check the option “Validate message/ACK conforms to profile”.
    A new tab, “Message Conformance”, will appear.

A list of warnings is produced.  Each row is a broken profile conformance rule.

XML Validation

Definition

XML Validation configures a set of rules that validate that message content is as expected. Rules are associated with X-Path values.

X-Path Validation Rule

Create

You can create your validation from an existing message, which can simplify the process, or manually.

To create from a message:

  1. Run the Action containing the task. This will add messages to the “Sample Message to Validate” area.
  2. Right-click the desired field in the message and select Add Validation.

To create manually:

  1. Click the Add button in the Validations grid.
  2. Specify the X-Path that the validation will be applied to.

Configure

  • Change the operator, if needed.
  • If several rules are set, modify the and/or logical operator and parenthesis so the rule evaluation is done correctly.

You can edit the criteria by clicking on the cell to set a basic text value. In addition, you have access to the Variable Editor and the Criteria Editor, which are opened by right-clicking on the criteria cell. From there, you can insert a Variable or a Field Value criteria by specifying its location.

Enabling/Disabling a Validation Rule

It may be necessary to temporarily disable a validation rule so it is no longer evaluated during test execution. To disable a rule, uncheck the check box in the first column of the X-Path Validation table. To re-enable it, check the box again.

Advanced Mode

Conditional Validation

You can use And, Or and Parentheses to perform more advanced conditions for your validations.

Next

Message Conformance

Definition

The Message Conformance validation lets you compare a received XML message against a profile in order to flag conformance gaps.  This is useful when you need to troubleshoot data flow in a live interface where the conformance profile has been documented. 

Validations are done on:

  • XML Structure
  • Schematron Validations

 

Enabling Message Conformance Validation

  1. You can configure the conformance profile to use in the validation process.
  2. Check the option “Validate message/ACK conforms to profile
    A new tab “Message Conformance” will appear.

A list of warnings is produced.  Each row is a broken profile conformance rule.

HL7-ER7 encoding

This is the most popular representation of an HL7 message, using segment, field, component, and sub-component delimiters. This encoding is usually referred to as a “pipe-delimited” message.

Example:

MSH|^~\&|MegaReg|XYZHospC|SuperOE|XYZImgCtr|20060529090131-0500||ADT^A01^ADT_A01|01052901|P|2.5
EVN||200605290901||||200605290900
PID|||56782445^^^UAReg^PI||KLEINSAMPLE^BARRY^Q^JR||19620910|M||2028-9^^HL70005^RA99113^^XYZ|260 GOODWIN CREST DRIVE^^BIRMINGHAM^AL^35209^^M~NICKELL’S PICKLES^10000 W 100TH AVE^BIRMINGHAM^AL^35200^^O|||||||0105I30001^^^99DEF^AN
PV1||I|W^389^1^UABH^^^^3||||12345^MORGAN^REX^J^^^MD^0010^UAMC^L||67890^GRAINGER^LUCY^X^^^MD^0010^UAMC^L|MED|||||A0||13579^POTTER^SHERMAN^T^^^MD^0010^UAMC^L|||||||||||||||||||||||||||200605290900
OBX|1|NM|^Body Height||1.80|m^Meter^ISO+|||||F
OBX|2|NM|^Body Weight||79|kg^Kilogram^ISO+|||||F
AL1|1||^ASPIRIN
DG1|1||786.50^CHEST PAIN, UNSPECIFIED^I9|||A

The other allowed encoding uses HL7-XML.
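Because ER7 is delimiter-based, it is straightforward to take apart. The sketch below splits a message in JavaScript, assuming the default delimiters; a real parser must honor the delimiters declared in MSH.1 and MSH.2.

```javascript
// Segments are separated by carriage returns in HL7 v2.x.
var message =
  "MSH|^~\\&|MegaReg|XYZHospC|SuperOE|XYZImgCtr|20060529090131-0500||ADT^A01^ADT_A01|01052901|P|2.5\r" +
  "EVN||200605290901||||200605290900";

var segments = message.split("\r");          // segment delimiter
var mshFields = segments[0].split("|");      // field delimiter
// In MSH, the field delimiter itself counts as MSH.1, so index 8 is MSH.9.
var messageType = mshFields[8].split("^");   // component delimiter
// messageType is ["ADT", "A01", "ADT_A01"]
```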

HL7-XML encoding

This is a basic XML representation of an HL7 message where XML elements represent HL7 messages constructs like segments, fields and components.

Example:

<ADT_A01>
    <MSH>
        <MSH.1>|</MSH.1>
        <MSH.2>^~\&amp;</MSH.2>
        <MSH.3>
            <MSH.3.1>MegaReg</MSH.3.1>
        </MSH.3>
        <MSH.4>
            <MSH.4.1>XYZHospC</MSH.4.1>
        </MSH.4>
        <MSH.5>
            <MSH.5.1>SuperOE</MSH.5.1>
        </MSH.5>
        <MSH.6>
            <MSH.6.1>XYZImgCtr</MSH.6.1>
        </MSH.6>
        <MSH.7>
            <MSH.7.1>20060529090131-0500</MSH.7.1>
        </MSH.7>
        <MSH.9>
            <MSH.9.1>ADT</MSH.9.1>
            <MSH.9.2>A01</MSH.9.2>
            <MSH.9.3>ADT_A01</MSH.9.3>
        </MSH.9>
        <MSH.10>
            <MSH.10.1>01052901</MSH.10.1>
        </MSH.10>
        <MSH.11>
            <MSH.11.1>P</MSH.11.1>
        </MSH.11>
        <MSH.12>
            <MSH.12.1>2.5</MSH.12.1>
        </MSH.12>
    </MSH>
    <EVN>
        <EVN.2>
            <EVN.2.1>200605290901</EVN.2.1>
        </EVN.2>
        <EVN.6>
            <EVN.6.1>200605290900</EVN.6.1>
        </EVN.6>
    </EVN>
    <PID>
        <PID.3>
            <PID.3.1>56782445</PID.3.1>
            <PID.3.4>UAReg</PID.3.4>
            <PID.3.5>PI</PID.3.5>
        </PID.3>
        <PID.5>
            <PID.5.1>KLEINSAMPLE</PID.5.1>
            <PID.5.2>BARRY</PID.5.2>
            <PID.5.3>Q</PID.5.3>
            <PID.5.4>JR</PID.5.4>
        </PID.5>
        <PID.7>
            <PID.7.1>19620910</PID.7.1>
        </PID.7>
        <PID.8>
            <PID.8.1>M</PID.8.1>
        </PID.8>
        <PID.10>
            <PID.10.1>2028-9</PID.10.1>
            <PID.10.3>HL70005</PID.10.3>
            <PID.10.4>RA99113</PID.10.4>
            <PID.10.6>XYZ</PID.10.6>
        </PID.10>
        <PID.11>
            <PID.11.1>260 GOODWIN CREST DRIVE</PID.11.1>
            <PID.11.3>BIRMINGHAM</PID.11.3>
            <PID.11.4>AL</PID.11.4>
            <PID.11.5>35209</PID.11.5>
            <PID.11.7>M</PID.11.7>
        </PID.11>
        <PID.11>
            <PID.11.1>NICKELL’S PICKLES</PID.11.1>
            <PID.11.2>10000 W 100TH AVE</PID.11.2>
            <PID.11.3>BIRMINGHAM</PID.11.3>
            <PID.11.4>AL</PID.11.4>
            <PID.11.5>35200</PID.11.5>
            <PID.11.7>O</PID.11.7>
        </PID.11>
        <PID.18>
            <PID.18.1>0105I30001</PID.18.1>
            <PID.18.4>99DEF</PID.18.4>
            <PID.18.5>AN</PID.18.5>
        </PID.18>
    </PID>
    <PV1>
        <PV1.2>
            <PV1.2.1>I</PV1.2.1>
        </PV1.2>
        <PV1.3>
            <PV1.3.1>W</PV1.3.1>
            <PV1.3.2>389</PV1.3.2>
            <PV1.3.3>1</PV1.3.3>
            <PV1.3.4>UABH</PV1.3.4>
            <PV1.3.8>3</PV1.3.8>
        </PV1.3>
        <PV1.7>
            <PV1.7.1>12345</PV1.7.1>
            <PV1.7.2>MORGAN</PV1.7.2>
            <PV1.7.3>REX</PV1.7.3>
            <PV1.7.4>J</PV1.7.4>
            <PV1.7.7>MD</PV1.7.7>
            <PV1.7.8>0010</PV1.7.8>
            <PV1.7.9>UAMC</PV1.7.9>
            <PV1.7.10>L</PV1.7.10>
        </PV1.7>
        <PV1.9>
            <PV1.9.1>67890</PV1.9.1>
            <PV1.9.2>GRAINGER</PV1.9.2>
            <PV1.9.3>LUCY</PV1.9.3>
            <PV1.9.4>X</PV1.9.4>
            <PV1.9.7>MD</PV1.9.7>
            <PV1.9.8>0010</PV1.9.8>
            <PV1.9.9>UAMC</PV1.9.9>
            <PV1.9.10>L</PV1.9.10>
        </PV1.9>
        <PV1.10>
            <PV1.10.1>MED</PV1.10.1>
        </PV1.10>
        <PV1.15>
            <PV1.15.1>A0</PV1.15.1>
        </PV1.15>
        <PV1.17>
            <PV1.17.1>13579</PV1.17.1>
            <PV1.17.2>POTTER</PV1.17.2>
            <PV1.17.3>SHERMAN</PV1.17.3>
            <PV1.17.4>T</PV1.17.4>
            <PV1.17.7>MD</PV1.17.7>
            <PV1.17.8>0010</PV1.17.8>
            <PV1.17.9>UAMC</PV1.17.9>
            <PV1.17.10>L</PV1.17.10>
        </PV1.17>
        <PV1.44>
            <PV1.44.1>200605290900</PV1.44.1>
        </PV1.44>
    </PV1>
    <OBX>
        <OBX.1>
            <OBX.1.1>1</OBX.1.1>
        </OBX.1>
        <OBX.2>
            <OBX.2.1>NM</OBX.2.1>
        </OBX.2>
        <OBX.3>
            <OBX.3.2>Body Height</OBX.3.2>
        </OBX.3>
        <OBX.5>
            <OBX.5.1>1.80</OBX.5.1>
        </OBX.5>
        <OBX.6>
            <OBX.6.1>m</OBX.6.1>
            <OBX.6.2>Meter</OBX.6.2>
            <OBX.6.3>ISO+</OBX.6.3>
        </OBX.6>
        <OBX.11>
            <OBX.11.1>F</OBX.11.1>
        </OBX.11>
    </OBX>
    <OBX>
        <OBX.1>
            <OBX.1.1>2</OBX.1.1>
        </OBX.1>
        <OBX.2>
            <OBX.2.1>NM</OBX.2.1>
        </OBX.2>
        <OBX.3>
            <OBX.3.2>Body Weight</OBX.3.2>
        </OBX.3>
        <OBX.5>
            <OBX.5.1>79</OBX.5.1>
        </OBX.5>
        <OBX.6>
            <OBX.6.1>kg</OBX.6.1>
            <OBX.6.2>Kilogram</OBX.6.2>
            <OBX.6.3>ISO+</OBX.6.3>
        </OBX.6>
        <OBX.11>
            <OBX.11.1>F</OBX.11.1>
        </OBX.11>
    </OBX>
    <AL1>
        <AL1.1>
            <AL1.1.1>1</AL1.1.1>
        </AL1.1>
        <AL1.3>
            <AL1.3.2>ASPIRIN</AL1.3.2>
        </AL1.3>
    </AL1>
    <DG1>
        <DG1.1>
            <DG1.1.1>1</DG1.1.1>
        </DG1.1>
        <DG1.3>
            <DG1.3.1>786.50</DG1.3.1>
            <DG1.3.2>CHEST PAIN, UNSPECIFIED</DG1.3.2>
            <DG1.3.3>I9</DG1.3.3>
        </DG1.3>
        <DG1.6>
            <DG1.6.1>A</DG1.6.1>
        </DG1.6>
    </DG1>
</ADT_A01>

Variables

Definition

Variables are symbolic names to which a value can be assigned.  Variables can be used to:

  • Populate HL7 message fields
  • Create dynamic file paths
  • Create dynamic SQL queries
  • Validate tasks

Variables use the ${variable_name} format.

There are 2 variable types:

  1. System variables:  They are variables managed automatically by the application
  2. User-defined variables:  Variables managed by the user building the test scenarios

System Variables

How to Use System Variables

System variables are quite useful for getting contextual information about the suite execution.  These variables can be used to improve task reusability and speed up test definition.  Use them to build:

  • Validations
  • Messages
  • Paths and file names
  • Documentation

Here is the list of system variables:

Variable Name    Description
${CxScenarioSuiteName}    Name of the Scenario Suite
${CxScenarioName}    Name of the task’s parent Scenario
${CxScenarioIteration}    Current running iteration number for the Scenario
${CxActionName}    Name of the task’s parent Action
${CxActionIteration}    Current running iteration number for the Action
${CxTaskName}    Name of the Task
${CxToday}  The current Date
${CxNow}    The current Date and Time

Using system variables, the last inbound and outbound messages are also accessible.  [Deprecated] – Use the Criteria Editor instead.

Variable (including example)    Description
${CxLastOutboundMessage[%FIELD%]}    General form, where %FIELD% is replaced with a field location
${CxLastOutboundMessage[%MSH.3%]}    Returns the content of MSH.3 from the last outbound message (last message sent)
${CxLastOutboundMessage[%OBX[2].5[3]%]}    OBX and OBX.5 both being repeatable, returns the content of the 3rd repetition of OBX.5 in the 2nd OBX segment of the last outbound message
${CxLastInboundMessage[%FIELD%]}    General form, where %FIELD% is replaced with a field location
${CxLastInboundMessage[%PID.3%]}    Returns the content of PID.3 from the last inbound message (last message received)
${CxLastInboundMessage[%PID.3[3].4%]}    PID.3 being repeatable, returns the content of the 4th component of the 3rd repetition of PID.3

User-Defined Variables

Definition

User-defined variables are variables managed by the test scenario builder.  Variables allow the application to create message content and field values at run time, so that you can perform tests without having to create multiple messages yourself.  Values assigned to user-defined variables are managed by generators.

Create a new variable

  1. Click the “Edit Variables” link at the right side of the window.
  2. Click Add…  A new row is added to the grid.
  3. Give it a name.
  4. Select the variable type.
Variable Type Name    Description
String    A set of characters
Char    A single character
Boolean    True or False
Int    Number between -2,147,483,648 and 2,147,483,647
Long   Number between –9,223,372,036,854,775,808 and 9,223,372,036,854,775,807
Double     A 15-digit number between ±5.0 × 10^−324 and ±1.7 × 10^308
Date Time   Calendar date between January 1, 0001 and December 31, 9999
Mapping Table   A 2-column table where each row contains an initial value and its equivalent mapping value
Environment Variable    A set of values for which the used value is determined by the active environment.
  5. Configure the generator.  Generators refer to the data sources used to set values for your variable.
Generator    Recommended Use
Boolean    Insert a Boolean value (true or false).
Date Time    Insert a randomly generated date-time value. You can set the range, time unit, format, and other parameters.
Directory Listing    Iterate through files in a specified directory.
Excel File    Pull random data from an Excel 2007+ spreadsheet — for instance, a list of names, addresses, and cities.
Numeric    Insert a randomly generated number. You can set the length, decimals and other parameters.
SQL Query    Pull data from a database based on an SQL query. You’ll be able to configure a database connection.
String    Insert a randomly generated string or static value. You can set the length and other parameters.
Substring    Insert a part of another variable.
Table    Pull data from HL7-related tables stored in one of your profiles, useful for coded fields.
Text File    Pull random data from a text file, for instance a list of names. Several file formats can be used: txt, csv, etc.
Environment Variable    Map a given value to specific, user-defined environments, such as Development, Production or Local.

Note: Advanced Mode allows you to combine several generators to generate complex value formats. For instance, a patient ID with the format XXX-9999-M can be generated by combining Excel, numeric and string generators.

Generators

Definition

Generators are algorithms or data sources used to assign variables with values.  Several generators are available:

  • Basic data:  These generators generate data based on the data type they represent.  Use parameters to control the value that is generated.
  • Profile-based:  These generators retrieve data from a selected profile.  Profiles contain tables for code sets and other structures specifying message format and content.  If you must ensure that test messages are built with valid (or constrained) data, this generator is critical.
  • Record-based:  These generators retrieve data rows from different data sources. During test execution, all variables that use the same generator use the same record.
  • Environment-based: These generators retrieve data in a way that’s dependent on the current active environment. This can be useful for rapidly switching the parameters of a test based on what environment you’re testing.

Advanced Mode

Combining Generators

In Advanced Mode, you can generate data with complex data formats by combining generators for a single variable.  For instance, a patient ID with the format XXX9999M (3 random characters, a number between 0000 and 9999 plus a static character at the end) can be generated by combining Excel, numeric, and string generators.

To combine generators:

  1. Click the Advanced Mode link in the generator section.
  2. Click the Add link that appears.
  3. Configure the newly created generator.
  4. Redo steps #2 and #3 if needed.

Change the generator order by dragging and dropping them in the generator chain.

Generator Formatting

Use the Generator formatting field to add more formatting. You can create sophisticated values that mimic unstructured data using this functionality. Formatting can be quite powerful.

Generator / Formatting / Generated Values / Description:

  • Numeric 0-99 with formatting “He is {0} years old” generates “He is 34 years old”, “He is 17 years old”, “He is 88 years old”.  {0} is replaced with the generated value.
  • Numeric 0-99 with formatting “{0} + {0} = 2*{0}” generates “34 + 34 = 2*34”, “17 + 17 = 2*17”, “88 + 88 = 2*88”.  A generator can be used several times.
  • Numeric 0-99 with formatting “{0:D5}” generates “00042”, “93277”, “03007”, “15432”.  Leading zeros are added so the value has 5 digits.
  • String (length=1) and Numeric 0-99999 with formatting “{0} – {1}” generate “P – 22”, “C – 42”, “I – 1”, “L – 82”.  Generators are combined and formatting is added.
  • Excel (first name) and Excel (last name) with formatting “{1}^{0}” generate “Doe^John”, “Smith^Suzan”.  Generators are combined to create a field value having 2 components (subfields).
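The format strings above resemble .NET composite formatting ({0}, {1}, {0:D5}). As a sketch of the behavior, the hypothetical helper below applies such a format to a list of generated values; it is an illustration, not Workgroup's actual implementation.

```javascript
// Replace {n} with the n-th generated value; {n:Dw} pads with leading zeros to width w.
function applyFormat(format, values) {
  return format.replace(/\{(\d+)(?::D(\d+))?\}/g, function (_, index, pad) {
    var value = String(values[Number(index)]);
    return pad ? value.padStart(Number(pad), "0") : value;
  });
}

var example1 = applyFormat("He is {0} years old", [34]); // "He is 34 years old"
var example2 = applyFormat("{1}^{0}", ["John", "Doe"]);  // "Doe^John"
var example3 = applyFormat("{0:D5}", [42]);              // "00042"
```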

Boolean

This generator creates a Boolean (True or False) value.

How to use the Boolean generator

  • Random values
    • Generate True or False value randomly.
    • Include random blanks. Allowing random blanks will mean that you generate empty strings among the values for use in the field or data type.
  • Sequential list
    • Generate a sequence of True, False, True, False, True, etc.
    • Start new list. Always start the sequence with True.
    • Continue from previous list. If you run the test and it ends with True, the next time it will start with False.
Example #1:

  • Random values
  • Include random blanks: unchecked

Generated Values:
    True
    True
    False
    True
    False

Environment Variable

This generator uses user-defined environments and allows you to map values specific to those environments for a given variable. This allows for efficient re-use of tests that are based on different development environments (Development, Production, etc.)

How to use the environment variable generator

To use this generator, you first need to define environments to which you will map the variables. To do so, open the environment editor.

Open Environment Editor

This will create default environments to work in. You can modify or delete these environments, and you can define your own environments if you want.

Now, you can create a variable of type Environment Variable and define it with the Environment Variable value generator.

Create Environment Variable

To make use of this variable, you need to assign values to existing environments in the value generator.

Assign Values to Environments

Finally, select an environment in which you run the scenario suite.

Select Environment

In this case, running with the Development environment will assign the value mysite.dev.mydomain.com to the ${HL7ConnectorUrl} variable.

Date Time

This generator creates date and time values.

How to use the “Date time” generator

  • Random values
    • Randomly generate values in a range between minimum and maximum limits, expressed in a time unit (second, minute, etc.)
    • Based on:
      • Now. Uses the current date time as a reference.
      • Actual field value. Uses the date time value from the field in the original message.
      • A specific date. You specify a date and time to use as a reference.
    • Date format. Set the format of the new date-time. Note that you have a choice of formats. You can also enter your own format manually.
    • Include random blanks. Allowing random blanks will mean that you generate empty strings among the values for use in the field or data type.
  • Sequential list
    • Generate a sequence of date-time values, for instance: 2013-12-12, 2013-12-13, 2013-12-14, 2013-12-15, etc.
    • Based on:
      • Now. Uses the current date time as a reference.
      • Actual field value. Uses the date time value from the field in the original message.
      • A specific date. You specify a date and time to use as a reference.
    • Date format. Set the format of the new date-time. Note that you have a choice of formats. You can also enter your own format manually.
    • Increment by. The interval to use between each value. You can use a negative value and set a time unit (second, minute, etc.)
    • Start new list. Always start with the minimum limit, or the maximum limit if you’re using a negative increment.
    • Continue from previous list. If you run the test suite and it ends with 2013-12-13, the next time it will start with 2013-12-14.
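The sequential example above (2013-12-12, 2013-12-13, etc.) can be sketched in Python. This is an illustrative sketch, not the product's code:

```python
from datetime import datetime, timedelta

def datetime_sequence(base, increment, count, fmt="%Y-%m-%d"):
    """Generate `count` formatted date-time strings, stepping by `increment`."""
    return [(base + i * increment).strftime(fmt) for i in range(count)]

values = datetime_sequence(datetime(2013, 12, 12), timedelta(days=1), 4)
# values == ['2013-12-12', '2013-12-13', '2013-12-14', '2013-12-15']
```

A negative increment (e.g. `timedelta(days=-1)`) steps backward, mirroring the “Increment by” option.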

Excel File

This generator pulls data from an Excel 2007+ file (*.xlsx).

How to configure the generator to use Excel file content

  • Random values
    • Randomly generate values from an Excel file.
    • File. Specify the source of the Excel file. Use the Browse… button to select a file.
    • Worksheet. Specify the Worksheet to use.
    • Column. Specify the column to use.
    • First/Last rows. Specify the rows to get data.
    • Restrict to values between. Will only use values that are within the specified limits.
    • Include random blanks. Allowing random blanks will mean that you generate empty strings among the values for use in the field or data type.
  • Sequential list
    • Generate a sequence of values from an Excel file starting with the first row.
    • File. Specify the source of the Excel file. Use the Browse… button to select a file.
    • Worksheet. Specify the Worksheet to use.
    • Column. Specify the column to use.
    • First/Last rows. Specify the rows to get data.
    • Restrict to values between. Will only use values that are within the specified limits.
    • Start new list. Always start with the first row in the Excel file.
    • Continue from previous list. If you run a test and it ends with the 13th entry, the next time, the test will start with the 14th entry.

Note: If more than one field is configured using the same worksheet, the same row will be applied across a message. In other words, you can use an Excel file to ensure that several values will be used together. This is useful when you need to link a city with a zip code or a first name with a gender.

The examples below use the following content from a file named C:\MyDocuments\myExcelFile.xlsx

Numeric

This generator creates a number.

How to use the “Numeric” generator

  • Random values
    • Randomly generate values between minimum and maximum limits.
    • Decimal. Set the number of places after the decimal point. For example, 2 decimals will generate values like 3.75.
    • Include random blanks. Including random blanks generates empty strings among the values for use in the field or data type.
  • Sequential list
    • Generate a sequence of 0, 1, 2, 3, etc.
    • Decimal. Set the number of places after the decimal point. For example, 2 decimals will generate values like 3.75.
    • Increment by. The step or interval to use between each value. You can use a negative value.
    • Start new list. Always start with the minimum limit or the maximum limit if you’re using a negative increment.
    • Continue from previous list. If you run the test and it ends with 13, the next time it will start with 14.
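The sequential mode can be sketched in Python; this is an illustration of the behavior under the options described above, not the product's implementation:

```python
def numeric_sequence(start, increment, count, decimals=0):
    """Sequential numeric generator: start, start+increment, start+2*increment, ..."""
    values = []
    current = start
    for _ in range(count):
        values.append(round(current, decimals) if decimals else int(round(current)))
        current += increment
    return values

numeric_sequence(0, 1, 4)                    # [0, 1, 2, 3]
numeric_sequence(3.75, 0.25, 3, decimals=2)  # [3.75, 4.0, 4.25]
```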

SQL Query

This generator pulls data from an SQL-accessible database.

How to configure this generator to use SQL query results as test values

  • Select a database connection. If no database connections are configured, click Connections… to set up a connection.
  • Enter the SQL query. You can use the embedded Query Builder to help you build the query.
  • Restrict to values between. Will only use values that are within the specified limits.
  • Include random blanks. Allowing random blanks will mean that you generate empty strings among the values for use in the field or data type.
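Conceptually, the generator draws its pool of test values from the query's result set. The sketch below illustrates this idea using Python's built-in sqlite3 module with made-up data; the product itself works through its configured database connections:

```python
import sqlite3

# In-memory database standing in for a configured connection (hypothetical data).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE patients (id INTEGER, last_name TEXT)")
conn.executemany("INSERT INTO patients VALUES (?, ?)",
                 [(1, "SMITH"), (2, "DOE"), (3, "JONES")])

# The query result becomes the pool of test values for the field.
values = [row[0] for row in
          conn.execute("SELECT last_name FROM patients ORDER BY id")]
# values == ['SMITH', 'DOE', 'JONES']
```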

String

This generator creates a random uppercase character string, or sets a static value.

How to use the “String” generator

  • Check the Random option.
  • Set the minimum length of the strings you want to generate. The minimum value for this configuration is 0. A string with a length of 0 is equivalent to an empty string.
  • Set the maximum length of the strings you want to generate.
  • Include random blanks. Including random blanks generates empty strings among the values for use in the field or data type.

How to use the “String” generator to set a static value:

  • Check the Static option.
  • Set the static value to be inserted.
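The two modes can be sketched in Python (an illustration of the described behavior, not the product's code): random mode picks a length between the minimum and maximum and fills it with uppercase letters, while static mode simply returns the configured value.

```python
import random
import string

def random_upper_string(min_len, max_len):
    """Random mode: an uppercase string with a length in [min_len, max_len]."""
    length = random.randint(min_len, max_len)
    return "".join(random.choice(string.ascii_uppercase) for _ in range(length))

def static_string(value):
    """Static mode: always returns the configured value."""
    return value

s = random_upper_string(3, 8)
# 3 <= len(s) <= 8, all characters uppercase
```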

Substring

This generator retrieves a part of another variable value.

How to use the “Substring” generator

  • Select the pre-defined variable that the value will be extracted from.
  • Specify where the substring starts (the first or any other character)
  • Specify where the substring ends (the last or any other character)
  • Include random blanks. Including random blanks generates empty strings among the values for use in the field or data type.

The following examples use a pre-defined variable:

  • Variable name:  ${ReceivingFacility}
  • Variable type:  String
  • Variable static value:  FacilityA
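Given the pre-defined variable above, the start/end settings behave like a 1-based, inclusive slice. A Python sketch of this behavior (illustrative only):

```python
def substring(value, start=1, end=None):
    """Extract a substring using 1-based, inclusive start/end positions."""
    end = len(value) if end is None else end
    return value[start - 1:end]

substring("FacilityA", 1, 8)  # 'Facility'
substring("FacilityA", 9)     # 'A'
```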

Directory Listing

This generator lists the files in a directory whose names match a specified pattern.

How to configure this generator

  • Directory: Select the directory to list files.
  • [Recursive]: If checked, the generator will list every file in the directory hierarchy.
  • Filename pattern: (Optional) Filenames must match the specified pattern to be included in the list.
  • [Regular Expression]: If checked, the filename pattern will be handled as a regular expression.
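The options map naturally onto a directory walk with either wildcard or regular-expression matching. A Python sketch of this behavior (illustrative, not the product's implementation):

```python
import fnmatch
import os
import re

def list_files(directory, pattern=None, recursive=False, use_regex=False):
    """List files under `directory` whose names match `pattern`."""
    matches = []
    for root, dirs, files in os.walk(directory):
        for name in files:
            if pattern is None:
                matched = True
            elif use_regex:
                matched = re.search(pattern, name) is not None
            else:
                matched = fnmatch.fnmatch(name, pattern)
            if matched:
                matches.append(os.path.join(root, name))
        if not recursive:
            break  # only look at the top-level directory
    return matches
```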

Table

This generator pulls data from HL7-related tables stored in a profile. Read how to set the profile.

How to configure the generator to use the appropriate HL7 table

  • Random values
    • Randomly generate values from an HL7 table.
    • Source. Select the profile containing the table.
    • Table. Select a table from which the value will be generated.
    • To access the table content, click on the Edit Table button. If you change the table content, the new table content will appear in the profile you
      select.
    • Restrict to values between. Will only use table entries that are within the specified limits.
    • Include random blanks. Allowing random blanks will mean that you generate empty strings among the values for use in the field or data type.
  • Sequential list
    • Generate a sequence of values starting with the first table entry.
    • Source. Select the profile containing the table.
    • Table. Select a table from which the value will be generated.
    • To access the table content, click on the Edit Table button. If you change the table content, the new table content will appear in the profile you
      select.
    • Restrict to values between. Will only use table entries that are within the specified limits.
    • Start new list. Always start with the first entry of the table.
    • Continue from previous list. If you run the test and it ends with the 13th entry, the next time it will start with the 14th.

Text File

This generator pulls data from a text file (*.txt, *.csv, etc).

How to configure this generator to use text file content

  • Random values
    • Randomly generate values from a text file.
    • File. Specify the source of the text file. Use the Browse… button to select a file.
    • Column. Specify the column id to use (in case of a character delimited file, ex: *.csv)
    • Column delimiter. The character that separates each column in the text file.
    • First/Last rows. Specify the rows to get data.
    • Between character position. Will only use characters that are within the specified positions.
    • Restrict to values between. Will only use values that are within the specified limits.
    • Include random blanks. Allowing random blanks will mean that you generate empty strings among the values for use in the field or data type.
  • Sequential list
    • Generate a sequence of values from a text file starting with the first row.
    • File. Specify the source of the text file. Use the Browse… button to select a file.
    • Column. Specify the column id to use (in case of a character delimited file, ex: *.csv)
    • Column delimiter. The character that separates each column in the text file.
    • First/Last rows. Specify the rows to get data.
    • Between character position. Will only use characters that are within the specified positions.
    • Restrict to values between. Will only use values that are within the specified limits.
    • Start new list. Always start with the first row in the text file.
    • Continue from previous list. If you run the test and it ends with the 13th entry, the next time it will start with the 14th.

Note: If more than one field is configured using the same text file, the same line will be used within the same message. In other words, you can use a text file to ensure several values will be used together. This can be useful when linking a city with a zip code or a first name with a gender.

The examples below use the following content in a file C:\MyDocuments\myFile.txt
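To illustrate (the original sample content is not reproduced here), reading one column of a comma-delimited file sequentially can be sketched in Python with made-up data:

```python
import csv
import io

# Hypothetical delimited content standing in for the text file on disk.
content = "SMITH,JOHN,M\nDOE,JANE,F\n"

# Read column 2 (1-based) sequentially, using ',' as the column delimiter.
rows = list(csv.reader(io.StringIO(content), delimiter=","))
values = [row[1] for row in rows]
# values == ['JOHN', 'JANE']
```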

Criteria Editor

Definition

The Criteria Editor is used to construct string values using Carlang expressions.

Carlang

Carlang is an Excel-like function language. With Carlang, you can retrieve HL7/XML/JSON/DataSet values from a specified task or field. The following functions are currently available:

@ConvertDateTime(“DATE_TIME_TO_CONVERT”, “SOURCE_FORMAT”, “DESTINATION_FORMAT”)

This function is used to convert a date value from any executed task in the scenario suite. The function has 3 parameters.

  • DATE_TIME_TO_CONVERT: The date the user wishes to convert.
  • SOURCE_FORMAT: The format of the date before the conversion. (You can use the “HL7” or “FHIR” constant, or any format from the .NET DateTime documentation)
  • DESTINATION_FORMAT: The target format of the date after the conversion. (You can use the “HL7” or “FHIR” constant, or any format from the .NET DateTime documentation)

EX: @ConvertDateTime(“20200428011122-0500”, “HL7”, “FHIR”) 🠖 2020-04-28T01:11:22-05:00
@ConvertDateTime(“2021-04-20”, “yyyy-MM-dd”, “MM-dd-yyyy”) 🠖 04-20-2021
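The conversion can be sketched in Python. The Python format strings below are equivalents of the .NET ones the product accepts, and the handling of the “HL7” and “FHIR” constants here is an assumption for illustration:

```python
from datetime import datetime

# Python equivalent of the HL7 yyyyMMddHHmmss±zzzz form (assumed for illustration).
HL7_FORMAT = "%Y%m%d%H%M%S%z"

def convert_datetime(value, source, destination):
    """Sketch of @ConvertDateTime: parse with the source format, emit the target."""
    if source == "FHIR":
        parsed = datetime.fromisoformat(value)
    else:
        parsed = datetime.strptime(value, HL7_FORMAT if source == "HL7" else source)
    if destination == "FHIR":
        return parsed.isoformat()
    return parsed.strftime(HL7_FORMAT if destination == "HL7" else destination)

convert_datetime("20200428011122-0500", "HL7", "FHIR")
# '2020-04-28T01:11:22-05:00'
convert_datetime("2021-04-20", "%Y-%m-%d", "%m-%d-%Y")
# '04-20-2021'
```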

@DataSet(“TASK_PATH”[9],”COLUMN”,”ROW”,”TABLE”)

This function is used to retrieve a value from an SQL Query result-set. The function has 4 parameters:

  • TASK_PATH: (Optional) The task containing the source SQL Query result-set. If not specified, the current task will be used.
    • [9]: (Optional) Use this parameter to specify which iteration of the specified task from which to get data.
  • COLUMN: The name/index of the column in the result-set containing the data.
  • ROW: (Optional) The row index in the result-set containing the data.
  • TABLE: (Optional) The name/index of the table in the result-set containing the data.

@EncodeBase64(“VALUE_TO_CONVERT”)

This function is used to encode a raw value to a base64 value. The function has 1 parameter.

  • VALUE_TO_CONVERT: The value the user wishes to convert.
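Base64 encoding itself is standard; the sketch below shows the equivalent operation in Python:

```python
import base64

def encode_base64(value):
    """Encode a UTF-8 string as base64, mirroring @EncodeBase64."""
    return base64.b64encode(value.encode("utf-8")).decode("ascii")

encode_base64("user:password")
# 'dXNlcjpwYXNzd29yZA=='
```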

@HL7(“TASK_PATH”[R9],”HL7_FIELD”)

This function is used to retrieve an HL7 field value from any executed task in the scenario suite. The function has 2 parameters.

  • TASK_PATH: The task containing the source HL7 message. If not specified, the current task will be used.
    • [R]: (Optional) Use this parameter to get the value from the task’s result message (ACK).
    • [9]: (Optional) Use this parameter to get the value from the task’s ninth message (if your task has multiple messages). Message index is a 1-based index.
  • HL7_FIELD: The HL7 Field to retrieve value from. Ex: PID.3.1, PID[2].3.1.

HL7 field syntax is SEGMENT_NAME[SEGMENT_REPETITION].FIELD_POSITION[FIELD_REPETITION].COMPONENT_POSITION.SUB_COMPONENT_POSITION
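To illustrate how such a path resolves against a pipe-delimited HL7 v2 message, here is a simplified Python sketch. It ignores segment/field repetitions and the special MSH field numbering, and is not the product's parser:

```python
def get_hl7_field(message, path):
    """Resolve a simplified SEGMENT.FIELD.COMPONENT.SUB_COMPONENT path."""
    parts = path.split(".")
    segment_name, indexes = parts[0], [int(p) for p in parts[1:]]
    for line in message.splitlines():
        fields = line.split("|")
        if fields[0] != segment_name:
            continue
        value = fields[indexes[0]]  # field position (1-based after the segment name)
        for separator, idx in zip(("^", "&"), indexes[1:]):
            value = value.split(separator)[idx - 1]  # component, then sub-component
        return value
    return None

msg = "PID|1|12345|67890^^^HOSP||DOE^JOHN"
get_hl7_field(msg, "PID.5.2")  # 'JOHN'
get_hl7_field(msg, "PID.3.1")  # '67890'
```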

@JSON(“TASK_PATH”,”JSON_PATH”)

This function is used to retrieve a JSON-Path value from any executed task in the scenario suite. The function has 2 parameters.

  • TASK_PATH: The task containing the source JSON resource. If not specified, the current task will be used.
  • JSON_PATH: The JSON-Path to retrieve value from: Ex: ‘$.name[*].family’.

@XML(“TASK_PATH”[R],”X_PATH”)

This function is used to retrieve an X-Path value from any executed task in the scenario suite. The function has 2 parameters.

  • TASK_PATH: The task containing the source XML message. If not specified, the current task will be used.
    • [R]: (Optional) Use this parameter to get the value from the task’s result message (ACK).
  • X_PATH: The X-Path to retrieve value from: Ex: ClinicalDocument/typeId/@extension.

@Value(“TASK_PATH”[R])

This function is used to retrieve a string value from any executed task in the scenario suite. The function has 1 parameter.

  • TASK_PATH: The task containing the source value. If not specified, the current task will be used.
    • [R]: (Optional) Use this parameter to get the value from the task’s result value.

Configuration

HL7 v2.x Field

  • Click on the ‘Browse…’ button to select the Task containing the value or leave it empty to use the current task.
  • Check the ‘Get field from ACK’ if you wish to get the value from the ACK message instead of the HL7 message sent/received.
  • If more than one message is present and you wish to compare against a specific message, you can set the index. If an index is not specified then all messages are matched 1-on-1 for validation. The first message of the source Task will be compared to the first message of the compared Task, the second with the second, etc.
  • Specify which element you wish to validate by setting the Segment, Field, Component and Sub-Component as needed.
  • By default, the first instance found of a Segment or Field will be used for the validation. If needed, you can specify which repetition to use by setting the Seg # and/or Field #.

XML Field

  • Click on the ‘Browse…’ button to select the Task containing the value or leave it empty to use the current task.
  • Select the X-Path to be used.

JSON Field

  • Click on the ‘Browse…’ button to select the Task containing the value or leave it empty to use the current task.
  • Select the JSON-Path to be used.

DataSet Field

  • Click on the ‘Browse…’ button to select the Task containing the value or leave it empty to use the current task.
  • (Optional) Select the task iteration.
  • Select the column name.
  • (Optional) Select the row index.
  • (Optional) Select the table from the result-set (if the SQL Query contains more than one SELECT).

When finished, click Insert to add it to the criteria. You can then insert another Field Value or text. When you are done editing, click Apply to close the editor and apply the changes.

Advanced Criteria Example

To check that PID.2 and PID.4 of a sending task named “Send Task 1”, have been properly merged and separated by a dash in the Z01.1 field of the current task:

  1. Create the following validation: Z01.1 is = _
  2. Set the criteria to: @HL7(“/ScenarioName/ActionName/Send Task 1”, “PID.2”)-@HL7(“/ScenarioName/ActionName/Send Task 1”, “PID.4”)

So, if PID.2 is “ABC” and PID.4 is “123”, then the runtime validation would be: Z01.1 is = ABC-123

Executing Tests

How to Run Tests

Right-click the name of the Scenario suite, the Scenario, the Action or the Task you wish to execute. Click Run.

You can stop a test mid-way or at any time. Simply right-click on a node and select Stop.

Generate an execution report

After a test is executed, you can generate an execution report:

  1. In the main menu, click FILE –> Save Execution Report…
  2. Give the report a location and a file name.
  3. Click OK.
    The report is saved and opens.

The generated report is an Excel document containing descriptions of the test and all results.

  • Summary Worksheet:  The summary worksheet contains counts for all execution statuses.
  • Execution Details:  This worksheet contains the configuration and the results of all tasks executed.

Run tests using the command line application

You can also run your Scenario Suite using the command line application (TestConsole.exe) located in the Test installation folder (%PROGRAMFILES(X86)%\Caristix\Caristix Test or %PROGRAMFILES%\Caristix\Caristix Test). Simply call the application by providing the Scenario Suite to run in argument:

TestConsole.exe “C:\MyScenarioSuite.cxs”

Use TestConsole.exe -h for more information.

Creating Test Messages

Message Maker in Caristix Test Software

Use the Message Maker tool to create test messages to place into a scenario or to copy to another application. The messages you generate will be based on a specific profile (an HL7 version based on the reference standard, or a profile created in Caristix Conformance or Caristix Workgroup software).

Message Maker vs. variables

In most of your test automation work, you will want to use variables to populate test workflow with data. But if you need to generate HL7 messages to copy to another application, use Message Maker. Also use Message Maker if you want to use the same test data over and over again in a test scenario created with Caristix software.

How to create messages with Message Maker

  1. In the main menu, click Tools, Message Maker. The Message Maker dialog box appears.
  2. In the Conformance Profile dropdown list, select a profile to base the message on. If you use Caristix Conformance/Workgroup, the list includes the profiles you’ve created in the application (in addition to the standard HL7 reference profiles).
  3. Expand the tree view on the message type you need.
    Test_MessageMaker
  4. Double-click an event, and the Messages tab automatically populates with a message based on data contained in the Caristix data dictionary.
  5. Navigate the tree view to add as many messages as needed.
  6. To save messages to a .txt file, click File, Save Results.
  7. To close the Message Maker tool, click the OK button at the bottom of the screen.

Options

Options in Caristix Test

Before starting to use Caristix Test, review Options to ensure your setup is appropriate for your testing and validation.

From the Main Menu, click Tools, then Options in the drop-down menu that appears.

A new Options window opens. Four tabs are available: Logging, Reference Profile, Default Connections and Preferences.

Logging

Enabling this configuration activates internal execution log storage. Internal execution logs are XML files and can be opened as a test suite, so a test can be run again using the exact same configuration: variables are replaced with the actual values generated at run time.

  • Log Execution: This is the storage location.  If the location varies from the default, Browse to the location required and enter the file name.
  • If you uncheck the Log Execution box, you will not generate internal execution reports. We recommend leaving this box checked for full product functionality.

Reference Profile

This is the default profile used to validate and create new messages.  Reference conformance profiles based on the HL7 standard are located here. Also, any other profile the organization may have created would be listed here too. 

To know more about how to create new customized profiles (including Z-segments and customized fields), refer to the Caristix Conformance or Caristix Workgroup products.

Default Connections

This is where connections to integration engines (or other HL7 systems) and databases are configured.  Configuring a default connection for each category has a few advantages:

  • It makes it faster to configure tasks
  • It allows you to change the environment tests run in simply by updating the default connections

Database Connections

Caristix Test can perform tasks against a database. For instance, you can execute an SQL query to validate against expected results, or you can instantiate a variable from a data set. These settings enable you to set up a database connection library and select a default database.

  • Select a default connection from the dropdown list.
  • To change the list of database connections, click the Connections button.
  • Fill out the Database Connections dialog box:
    Test_DatabaseConnection
  • To add a new connection, click New.
  • Edit the connection name as needed.
  • Choose the database type.
  • Fill in the corresponding connection information.
  • Click the Test button to test the database connection.
  • To delete a connection, click Delete.
  • Click OK to save changes.

 

Inbound HL7 Connections

Caristix Test can interact with an integration engine or a system sending HL7 messages.  These settings enable you to set up an inbound network connection library and select one as the default.

Choose the default inbound network connection from the list of network connections.  To configure a new network connection:

  1. Click Network Connections…
    The Network Connection dialog opens
  2. Click Add…
  3. Enter connection details
  4. Click OK

 

Outbound HL7 Connections

Caristix Test can interact with an integration engine or a system receiving HL7 messages.  These settings enable you to set up an outbound network connection library and select one as the default.

Choose the default outbound network connection from the list of network connections.  To configure a new network connection:

  1. Click Network Connections…
    The Network Connection dialog opens
  2. Click Add…
  3. Enter connection details
  4. Click OK

Preferences

Check for updates upon startup

  • Every time you start Test, the software will check for available updates. You can manually check for updates by going to Help, Check for Updates.

Show tips

  • This will display information boxes that provide guidance on Test features. If you want to hide a tip permanently, click its close button. Restore all hidden tips with the “Reset hidden tips” link.

Examples

There is a lot of test automation power under the hood with Caristix Workgroup. Looking for examples to get started with the application? Here are a few to illustrate what to do and how to do it.

Feel free to contact us if you are looking for more How To articles that are not included here. We love hearing from our users. The best way to reach us is: support@caristix.com.

Tutorials

Some tutorials to help you with some common tasks.

Tips & Tricks

Some other useful topics.

 

Segment Field Validation

These examples walk you through a series of typical validation activities.

Generate Test Messages

This tutorial shows you how to create test messages using Caristix software.

Generate Test Messages: when to use this tutorial

During interface coding or validation, you often need a set of sample messages.  But there are times when the source or destination system hasn’t been deployed or upgraded, and it’s impossible to obtain real-world sample messages from the vendor. In these cases, the solution would be to create the messages yourself.

But the problem is that manually building a large set of sample messages (>50) is time-consuming and resource-intensive for busy teams. Sometimes you simply can’t build 50+ sample messages manually.

This tutorial explains how to generate a large number of messages (>100,000) easily and quickly.

Overview

The process is straightforward.  First, create a suite with two tasks.  The first task will include all the configuration information needed to populate a message template from data sources. It will send the message to the second task.  This second task will take the message and save it to a file.  To generate multiple messages, those tasks just need to run multiple times. This tutorial will create 100 messages for you.

Here is a step-by-step explanation.

You can also download the test suite and use it to walk through this tutorial.

Step #1:  Create a suite

  1. Create a test suite:

    For the purposes of this tutorial, name the suite Caristix Test Tutorial

  2. Create a scenario:

    Name the scenario How To

  3. Add an action:

    Name the action Generate messages

  4. Create a “Send HL7 Message” task:

    Call this new task Generate A01 messages

  5. Create a “Receive HL7 Message” task:

    Call it Receive generated messages

Step #2:  Configure Message Generation Parameters

In this step, you’ll configure the message template and the data sources to populate the template.

  1. Configure the Generate A01 messages task

    • Select the Generate A01 messages task
    • Select the Configuration tab
    • Configure an outbound connection where host is 127.0.0.1 and port is 6661.  Set timeout to 30 seconds. 
    • Select it as the Outbound connection
  2. Get a message template

    • If you have a single message, paste it in the message zone.  This becomes the message template.  Generated messages will be based on this message.

    – OR –

    • Click the Generate message from Profile… button
    • Select the HL7 v2.6 conformance profile from the profile library.  You will generate HL7 v2.6 messages using this profile.
    • Select the ADT-A01 trigger event. You’ll be creating admit messages.
    • Click OK

       

  3. Configure fields

    Now you’re going to set up variables for several fields such as the date and time of the message, patient name, patient date of birth, etc. These fields need to be linked to a data source so that during execution, the fields are populated with different data, so you get different messages.  Data sources can be Excel files, text files, databases or built-in data generators.

    • MSH.7 – Date/Time of Message
      For this field, configure a variable using a date time generator.  The MSH.7 field value will be replaced with a variable ${CurrentDateTime}.  During execution, this variable will be replaced with the current date and time.
      • Move the mouse over the MSH.7 field
      • Right-click the field and select Set Variable for MSH.7
        The variable window opens
      • Click Add…
        A new row is added to set a new variable
      • Rename to ${CurrentDateTime}
      • Set TYPE to String
      • Under Configuration, set Type to Date Time
      • Under Based on, select  Now
      • Select Date format: yyyyMMddHHmmss
      • Click OK
    • MSH.10 – Message Control ID
      Now configure the MSH.10 field value, so it is replaced with a variable ${MsgControlID}. At run time, this variable will be replaced with a generated number.
      • Move the mouse over the MSH.10 field
      • Right-click the field and select Set Variable for MSH.10
        The variable window opens
      • Click Add…
        A new row is added to set a new variable
      • Rename to ${MsgControlID}
      • Set TYPE property to String
      • In Configuration, set Type to Numeric
      • Select Generate sequential list so the numbers you generate increment for every message generated.
      • Set Between 10000000 and 99999999 so generated numbers will start from 10000000 to 99999999
      • Select Decimals: 0
      • Select Increment by: 1
      • Select Start new list
      • Click OK
    • EVN.2 – Recorded Date/Time
      For EVN.2, we will reuse a variable we configured previously.  At run time, field value will be replaced with the ${CurrentDateTime} variable.  Retrieve it from the variable list:
      • Move the mouse over the EVN.2 field
      • Right-click the field and select Set Variable for EVN.2
        The variable window opens
      • Select the ${CurrentDateTime} variable
      • Click OK
    • PID.2 – Patient ID
      Using the same technique, create a new variable for PID.2. This time, the variable will generate numbers prefixed with a string and padded with 0s to make sure the new field values are 8 characters long.
      • Move the mouse over the PID.2 field
      • Right-click the field and select Set Variable for PID.2
        The variable window opens
      • Click Add…
        A new row is added to set a new variable
      • Set NAME property to ${PatientID}
      • Set TYPE property to String
      • In Configuration, set Type to Numeric
      • Select Generate random value
      • Set Between 1 and 999999
      • Select Decimals: 0
      • Select Increment by: 1
      • Set Generator formatting: ID{0:D6} so the generated number is prefixed with “ID” and zero-padded to 6 digits
      • Click OK
    • PID.5.1 – Patient Family Name
      PID.5.1 will be populated with names stored in an Excel file. The file is provided with the product. Start by configuring the ${PatientLastName} variable:
      • Move the mouse over the PID.5.1 field
      • Right-click the field and select Set Variable for PID.5.1
        The variable window opens
      • Click Add…
        A new row is added to set a new variable
      • Set NAME property to ${PatientLastName}
      • Set TYPE property to String
      • In Configuration, set Type to Excel File
      • Select Generate sequential list so Excel file content is selected from top to bottom
      • Set File: C:\ProgramData\Caristix\Common\Samples\Excel\PatientProfile.xlsx
      • Set Worksheet: Demographics1
      • Set Column: B
      • Set Start new list so it starts with the top row of the Excel file at each execution
      • Click OK
    • PID.5.2 – Patient Given Name
      Do the same thing for PID5.2 as we did for PID.5.1. PID.5.2 will be populated with first names stored in an Excel file.  Using the same Excel file and worksheet will make sure the given name selected is on the same row as the last name selected in PID.5.1.  Now we’ll configure the ${PatientGivenName} variable.
      • Move the mouse over the PID.5.2 field
      • Right-click the field and select Set Variable for PID.5.2
        The variable window opens
      • Click Add…
        A new row is added to set a new variable
      • Set NAME property to ${PatientGivenName}
      • Set TYPE property to String
      • In Configuration, set Type to Excel File
      • Select Generate sequential list so Excel file content is selected from top to bottom
      • Set File: C:\ProgramData\Caristix\Common\Samples\Excel\PatientProfile.xlsx
      • Set Worksheet: Demographics1
      • Set Column: A
      • Set Start new list so it starts with the Excel file top row at each execution
      • Click OK
    • PID.7 – Patient Date of Birth
      The PID.7 field value will be replaced with a variable ${DOB}. At run time, this variable will be replaced/assigned with a generated date. Now we’ll configure the variable.
      • Move the mouse over the PID.7 field
      • Right-click the field and select Set Variable for PID.7
        The variable window opens
      • Click Add…
        A new row is added to set a new variable
      • Set NAME property to ${DOB}
      • Set TYPE property to String
      • In Configuration, set Type to Date Time
      • Select Generate random values so a random date is generated
      • Set Based on: A specific date
      • Set the date to: 1914-01-01
      • Set In range between 0 and 1200 Month so the generated date will be between 1914-01-01 and 2014-01-01 (1200-month range)
      • Set Date format: yyyyMMdd
      • Click OK
    • PID.8 – Patient Gender
      For the PID.8 field, we will use a new generator. The value will be pulled via a variable ${Gender} using the code set from a conformance profile.  At run time, the code from 0001 – Administrative Sex will set the variable.  This is useful when you want to generate messages from a specification or profile.
      • Move the mouse over the PID.8 field
      • Right-click the field and select Set Variable for PID.8
        The variable window opens
      • Click Add…
        A new row is added to set a new variable
      • Set NAME property to ${Gender}
      • Set TYPE property to String
      • In Configuration, set Type to Table
      • Select Generate random values so a random code is picked from the profile
      • Set Table Type: User Defined Tables
      • Set Table: 0001 – Administrative Sex
        Click Edit Table… to view/update the table content.  Any profile from the library can be used.  To select a profile, change the reference profile in the Options.
      • Click OK
    • PID.11 – Patient Address
      For the PID.11 field, we will use one variable to populate several components; in fact, we will create a variable combining several generators.
      • Move the mouse over the PID.11.1 field
      • Right-click the field and select Set Variable for PID.11.1
        The variable window opens
      • Click Add…
        A new row is added to set a new variable
      • Set NAME property to ${PatientAddress}
      • Set TYPE property to String
      • In Configuration, set Type to Excel File
      • Select Generate sequential list
      • Set File: C:\ProgramData\Caristix\Common\Samples\Excel\PatientProfile.xlsx
      • Set Worksheet: Demographics1
      • Set Column: D
      • Set Start new list

      Now, we have a street number from the Excel file. The street name is still missing, so instead of leaving this dialog, we’ll continue and add another generator to append the street name to the variable.

      • Click Advanced Mode
        A new section appears listing the generators.
      • Click Add
        A new generator is created. Let’s configure this one too.
      • In Configuration, set Type to Excel File
      • Select Generate sequential list
      • Set File: C:\ProgramData\Caristix\Common\Samples\Excel\PatientProfile.xlsx
      • Set Worksheet: Demographics1
      • Set Column: E
      • Set Start new list

      Now, we’re done with PID.11.1. Let’s continue with another generator for PID.11.3 (city).

      • Click Add in the generator bar
        A new generator is created.
      • In Configuration, set Type to Excel File
      • Select Generate sequential list
      • Set File: C:\ProgramData\Caristix\Common\Samples\Excel\PatientProfile.xlsx
      • Set Worksheet: Demographics1
      • Set Column: G
      • Set Start new list

      Let’s do the same for PID.11.5 (zip code).

      • Click Add in the generator bar
        A new generator is created.
      • In Configuration, set Type to Excel File
      • Select Generate sequential list
      • Set File: C:\ProgramData\Caristix\Common\Samples\Excel\PatientProfile.xlsx
      • Set Worksheet: Demographics1
      • Set Column: I
      • Set Start new list

      The last step is to format the generated data and add the component delimiters.

      • Under Generator formatting, add the spaces and delimiters: {0} {1}^^{2}^^{3}^USA^P
      • Click OK
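The format string entered under Generator formatting is positional substitution: {0} through {3} stand for the four generator outputs in the order they were added, and everything else (the space, the ^ delimiters, USA, P) is emitted literally. A quick Python sketch with made-up sample values:

```python
# The four generator outputs, in order: street number, street name,
# city, zip code. These sample values are made up for illustration.
generators = ["123", "Main Street", "Springfield", "01103"]

# Same format string as entered under Generator formatting.
pid_11 = "{0} {1}^^{2}^^{3}^USA^P".format(*generators)
# -> 123 Main Street^^Springfield^^01103^USA^P
```

The result places the street number and name in PID.11.1, the city in PID.11.3, the zip code in PID.11.5, and fixed values in the country and address-type components.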

         

Step #3:  Execute

  1. Configure the Receive generated messages task

    • Select the Receive generated messages task
    • Select the Configuration tab
    • Configure an inbound connection where host is 0.0.0.0 and port is 6661.  Set timeout to 30 seconds. 
    • Select it as the Inbound connection
    • Check the Save inbound message to file check box
    • Set File path to C:\${CxScenarioSuiteName}.hl7
    • Check the Append checkbox so messages are added to the file.
  2. Set the scenario to execute 100 times so 100 messages are generated.

    • Select the Generate messages action in the suite tree
    • Select the Configuration tab
    • Set Execute: 100 time(s)

       

  3. Execute and generate messages

    • Right-click the Generate messages action
    • Select Run

A file (C:\Caristix Test Tutorial – Generate Message.hl7) is created with 100 messages in it.

Download the test suite and use it to walk through this tutorial.

Enjoy!

Compare messages

This tutorial shows you how to use Caristix software to validate transformations during a conversion project.

Comparing messages: when to use this tutorial

During projects where HL7 interfaces are ported from a legacy integration engine to a new technology, message flows (transformations, etc.) must remain the same; in particular, message content (structure and semantics) must not change.  The challenge is to validate not only that the interface was ported, but that the same transformations and filters still apply. 

Manual validation is not a viable option for most projects.  In this case, best-practice guidance is to automate repetitive, time-consuming and resource-intensive tasks. 

This tutorial shows you how to set up a test suite to validate a small or a large volume of messages easily and quickly.

Overview

The process is straightforward. First, get inbound and outbound messages from your legacy engine; the outbound messages have had transformations applied to them. Second, send those original inbound messages to the new integration technology so the new transformations are applied. Finally, compare both sets of outbound messages, which should be identical. If there are any differences, it means that the transformations on each platform are not equivalent and you need to adjust the code.
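At its core, the final comparison pairs each expected outbound message (from the legacy engine) with the corresponding message produced by the new engine and flags any pair that differs. A minimal sketch of that idea, assuming both lists are in the same order (the helper and its whitespace normalization are illustrative, not the product's comparison logic):

```python
def diff_messages(expected, received):
    """Return (index, expected_msg, received_msg) for every pair that
    differs. Trailing whitespace is ignored; messages beyond the end of
    the shorter list are not examined in this sketch."""
    return [
        (i, e, r)
        for i, (e, r) in enumerate(zip(expected, received))
        if e.rstrip() != r.rstrip()
    ]
```

An empty result means the two engines produced equivalent output; any returned tuple points at a message whose transformation differs and needs adjusting.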

Here is a step-by-step explanation.

Step #1: Create a suite

  1. Create a test suite:

    For the purposes of this tutorial, name the suite Caristix Test Tutorial – Message Comparison

  2. Create a scenario:

    Name the scenario How To

  3. Add an action:

    Name the action Compare Messages

  4. Create a “Send HL7 File” task:

    Call this new task Send initial HL7 messages. 

  5. Create a “Receive HL7 Message” task:

    Call it Receive transformed messages.  Note:  This assumes the interface will send the transformed messages back to the application.  If the interface sends transformed messages to a file, use a “Read HL7 file” task instead.

At this point, the suite skeleton is built.

Step #2:  Configure the sending and receiving tasks

In this step, you’ll configure the tasks to send the initial set of messages to the new integration engine.  The engine receives them, transforms the messages, and sends them back to the application.  The application listens for the transformed messages and validates them.

  1. Configure the Send initial HL7 messages task

    • Select the Send initial HL7 messages task
    • Select the Configuration tab
    • Configure an outbound connection where host and port connect to the new interface engine.
    • Select it as the Outbound connection
    • Set File Path to the file to send (ex: C:\Messages\initial messages.hl7)
  2. Configure the Receive transformed messages task

    • Select the Receive transformed messages task
    • Select the Configuration tab
    • Configure an inbound connection where host is 0.0.0.0 and port is listening on the related engine.
    • Select it as the Inbound connection
    • Check the Listen until timeout box.  The task will be completed when there are no longer any messages to receive.
  3. Setup the validation rules

    • Select the Validation tab
    • Select the Message Comparison tab
    • Paste the expected messages (your outbound messages from the legacy engine) in the Expected Message section
      To paste them, right-click below the Expected Messages label and select Paste

Step #3:  Execute

Good!  Let’s run the test.

  1. Right-click the suite root node (Caristix Test Tutorial – Message Comparison)
  2. Select Run

Once the execution is complete, each tree node will have a status icon.  If the test passes, your Expected Messages and Received Messages are identical, confirming that the new engine applies the same transformations as the legacy engine.

Convert CSV file to HL7 messages

This tutorial shows you how to create HL7 messages from a .csv file using Caristix software.

When to use this tutorial

We’ve had a lot of questions from users about how to send data from flat files or databases to an HL7 system.  First, keep in mind that the HL7 system expects messages in a very specific event-based format.  That format defines the list of supported trigger events, as well as the segments and fields supported for each trigger event.  It also includes attributes such as optionality, repeatability, and data length.  You can even define code sets for specific fields.  In other words, the format is the message specification the system expects to receive.

This tutorial explains how to generate valid HL7 messages where data comes from a csv file.

Scroll down to download files used in this tutorial.

Overview

The process is straightforward. First, create a task that includes all the configuration information needed to populate a message template from data sources. To make this example self-contained, we will send the message to a second task. This second task will take the message and save it to a file.  The process then needs to be re-run to process the second (and subsequent) csv file rows.

Here is a step-by-step explanation.

Step #1: Create a suite

  1. Create a test suite:

    For the purposes of this tutorial, name the suite Caristix Test Tutorial – Convert csv file to HL7 messages

  2. Create a scenario:

    Name the scenario How To

  3. Add an action:

    Name the action Generate messages from csv file

  4. Create a “Send HL7 Message” task:

    Call this new task Generate message

  5. Create a “Receive HL7 Message” task:

    Call it Receive generated messages

Step #2: Configure Message Generation Parameters

In this step, you’ll configure the message template and the data sources to populate the template.

  1. Configure the Generate messages task

    • Select the Generate messages task
    • Select the Configuration tab
    • Configure an outbound connection.
      For the purposes of this tutorial, we will not send generated messages directly to the HL7 system but to another internal task (Receive generated messages), so the outbound connection is bound to localhost (host is 127.0.0.1 and port is 6661).  To send messages to a remote system instead, set the host and port to the HL7 system’s values.  Set the timeout to 30 seconds.
    • Select it as the Outbound connection

       

  2. Get a message template

    • If you have a single message illustrating what the HL7 system is expecting to receive, paste it in the message zone.  This becomes the message template.  Generated messages will be based on this message.

    – OR –

    • If you don’t have a sample message, start by creating the HL7 system conformance profile.  Several techniques can be used for this.  Refer to the profile creation documentation section to learn more. 
    • Click the Generate message from Profile… button
    • Select the HL7 system conformance profile just created from the Profile library.  For this tutorial, you can use the Caristix Test Tutorial – Convert csv file to HL7 messages.cxp conformance profile provided in the tutorial folder.
    • Select a trigger event.
    • Click OK
  3. Configure fields

    Now you’re going to set up variables to link .csv file fields (data sources) to HL7 fields (message target).  During execution, the HL7 message fields are populated with data from the data source, so a new message is created for each file row.  Here, the data source will be a .csv file, but it can also be an Excel file or a database.

    • Move the mouse over the HL7 field you want to populate with the first field in the data source
    • Right-click the field and select Set Variable for field
      The variable window opens
    • Click Add…
      A new row is added to set a new variable
    • Rename the variable name
    • Set TYPE to String
    • Under Configuration, set Type to Text File
    • Select Generate sequential list so file content is read from top to bottom
    • Set File to your csv file.  For this tutorial, let’s use Tutorials/Data/Patient Demographics.csv provided in the tutorial folder.
    • Set Column: 1
    • Set Column delimiter: ,
      Preview updates showing retrieved data from data source
    • Click OK

    Repeat these steps for each field to be linked to the HL7 message template, changing the variable name and column to pick data from.  Once all fields are linked, move to the next step.
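Conceptually, each execution reads the next .csv row and substitutes its values into the message template. The sketch below illustrates that idea; the ${column} placeholder syntax, the sample rows, and the template are illustrative stand-ins for the variable mechanism described above, not the product's own format:

```python
import csv
import io

def messages_from_csv(text, template):
    """Yield one HL7 message per CSV row, replacing each ${column}
    placeholder in the template with that row's value."""
    for row in csv.DictReader(io.StringIO(text)):
        message = template
        for column, value in row.items():
            message = message.replace("${%s}" % column, value)
        yield message

rows = "id,last,first\n1001,DOE,JOHN\n1002,ROE,JANE\n"
template = ("MSH|^~\\&|SRC|FAC|||20240101||ADT^A01|${id}|P|2.3\r"
            "PID|||${id}||${last}^${first}")
messages = list(messages_from_csv(rows, template))  # one message per row
```

Each row yields one populated message, which is why the scenario's execution count is set to the number of rows in the data source.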

Step #3:  Execute

  1. Configure the Receive generated messages task

    As explained earlier, for the purpose of this tutorial, we will send generated messages to an internal task. If you want to send messages directly to the remote HL7 system, you can skip this step.

    • Select the Receive generated messages task
    • Select the Configuration tab
    • Configure an inbound connection where host is 0.0.0.0 and port is 6661.  Set the timeout to 30 seconds. 
    • Select it as the Inbound connection
    • Check the Save inbound message to file check box
    • Set File path to C:\${CxScenarioSuiteName}.hl7
    • Check the Append checkbox so messages are added to the file.
  2. Set the scenario to execute several times so several messages are generated

    • Select the Generate messages action in the suite tree
    • Select the Configuration tab
    • Set Execute:  to the number of rows you have in the data source
      The sample data source provided below (Patient Demographics.csv) has 10 rows.  If you’re using this file,  set this number to 10

       

  3. Execute and generate messages

    • Right-click the Generate messages action
    • Select Run

A file (Caristix Test Tutorial – Convert csv file to HL7 messages.hl7) is created with 10 messages in it.

Files used in this tutorial:

Enjoy!

Execute DOS Command

This tutorial explains how to execute a DOS command during a test scenario.  Use this when you want to prepare a test execution to delete result files or run a batch file. 

Using an Execute Command task, you can run accessible executable files.  We will use this task type in this example to run a DOS batch file:

  1. Create an Execute Command task. 
  2. Select the Configuration tab
  3. Set Command line path to C:\mybatch.bat
  4. Add any parameters in the Arguments text area, if needed

The Execute Command task can also be used to run commands directly – for instance, deleting a file.  This time, the cmd.exe executable needs to be called.

  1. Create an Execute Command task. 
  2. Select the Configuration tab
  3. Set Command line path to C:\Windows\System32\cmd.exe.  
  4. Set Arguments to /C del “C:\myFileToDelete.txt”.  Notice the “/C” before the actual command: it tells cmd.exe to run the command that follows and then exit.
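Outside Workgroup, the Command line path / Arguments split corresponds to launching an executable with an argument list. This portable Python sketch runs an interpreter with -c much as the task runs cmd.exe with /C; the printed text is just a stand-in for a cleanup command:

```python
import subprocess
import sys

# "Command line path" -> the executable; "Arguments" -> everything else.
# Here the interpreter's -c plays the role of cmd.exe's /C switch.
result = subprocess.run(
    [sys.executable, "-c", "print('cleanup done')"],
    capture_output=True,
    text=True,
)
```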

Validate Field1 = value

How to build a Segment/Field rule validating that a field has an expected value

In this example, we’ll validate that MSH.9 = ADT^A01. First set up your suite, scenario, and action.

  1. In the inbound HL7 task, select the Validation tab
  2. Select the Segment/Field Validation tab
  3. Add the rule:  MSH.9 is = ADT^A01

You can download the rule file for use in Caristix Workgroup or Test software.

Download the rule file (Field1 = value.cxf)

Learn more about how to import validation rules into an inbound HL7 task.

Validate Field1 = Field2

How to build a Segment/Field rule validating that 2 fields have the same value

In this example, we’ll validate that values for EVN.1 and MSH.9.2 are equal.

  1. In the inbound HL7 task, select the Validation tab
  2. Select the Segment/Field Validation tab
  3. Add the rule:  EVN.1 is = @HL7(“MSH.9.2”), where @HL7(“MSH.9.2”) refers to the value of field MSH.9.2
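To see what such a rule compares, here is a naive Python lookup over a pipe-delimited message (the hl7_field helper is purely illustrative, not the product's @HL7 implementation; note that MSH indexing is shifted by one because MSH.1 is the field separator itself):

```python
def hl7_field(message, segment, field, component=None):
    """Naive lookup: hl7_field(msg, "MSH", 9, 2) ~ MSH.9.2."""
    for line in message.split("\r"):
        parts = line.split("|")
        if parts[0] == segment:
            # For MSH, the field separator "|" counts as MSH.1.
            index = field - 1 if segment == "MSH" else field
            value = parts[index]
            return value.split("^")[component - 1] if component else value
    return None

msg = "MSH|^~\\&|A|B|C|D|20240101||ADT^A01|123|P|2.3\rEVN|A01|20240101"
# The rule "EVN.1 is = @HL7('MSH.9.2')" passes when these two are equal:
rule_passes = hl7_field(msg, "EVN", 1) == hl7_field(msg, "MSH", 9, 2)
```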

 You can download the rule file for use in Caristix Workgroup or Test software.

Download the rule file (Field1 = Field2.cxf)

Learn more about how to import validation rules into an inbound HL7 task.

Validate Field repetition = value

How to build a Segment/Field rule validating a specific field repetition has an expected value

In this example, let’s validate the following:

  • the first repetition of PID.3.4 = R
  • the second repetition of PID.3.4 = M
  • the third repetition = L
  • also, let’s assume the third repetition is optional and might not be provided
  1. First set up your suite, scenario, and action.
  2. In the inbound HL7 task, select the Validation tab
  3. Select the Segment/Field Validation tab
  4. Click Advanced Mode.  It reveals a few new columns.  The column FIELD # is where you set which repetition the rule applies to.
  5. Add the first rule:  PID.3.4 (Field #1) is = R
  6. Add a second rule: PID.3.4 (Field #2) is = M
  7. Now, let’s add 2 rules for the 3rd repetition. Remember that we need to account for cases where the 3rd repetition is not provided.
    • Add the rule PID.3.4 (Field #3) is = L
    • Add a last rule PID.3 (Field #3) is not present
    • Either rule must be true so we add parentheses around the rules and use the OR logical operator

Download the rule file (Field repetition = value.cxf)

Learn more about how to import validation rules into an inbound HL7 task.

Validate Field = Field1 from outbound message

How to build a Segment/Field rule validating that a field in a received message has the same value as a field from the previously sent message

In this example, we’ll validate that the PID.3.1 value in the received message equals the PID.3.1 value in the previously sent message.

  1. In the inbound HL7 task, select the Validation tab
  2. Select the Segment/Field Validation tab
  3. Add the rule: PID.3.1 is = @HL7(“Scenario\Action\Send HL7 Message”, “PID.3.1”), where the @HL7(…) expression refers to the PID.3.1 value of the message sent by the previous task (Send HL7 Message)

You can download the rule file for use in Caristix Workgroup or Test software. 

Download the rule file (Field = Field1 from outbound msg.cxf)

Learn more about how to import validation rules into an inbound HL7 task.

Validate Field length

How to build a Segment/Field rule validating that a field’s value has the expected length

In this example, we’ll validate that:

  • PID.2 length >= 10
  • PID.3.1 length = 7

In the inbound HL7 task, select the Validation tab

  1. Select the Segment/Field Validation tab
  2. Add a first rule:  PID.2 is matching regex .{10,}
  3. Add a second rule: PID.3.1 is matching regex .{7}

This illustrates the power of regular expressions.

  • . (dot) matches any single character
  • {n,m} specifies how many characters are expected, where n is the minimum and m is the maximum

Other quantifiers can be used

  • * : 0 or more
  • + : 1 or more
  • ? : 0 or 1
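If you want to experiment with these quantifiers, Python's re module behaves the same way (re.fullmatch is used here to mirror a rule that must match the entire field value, assuming the product anchors its regex matching to the whole field):

```python
import re

assert re.fullmatch(r".{10,}", "0123456789")      # 10 characters: length >= 10
assert re.fullmatch(r".{10,}", "short") is None   # only 5 characters: fails
assert re.fullmatch(r".{7}", "ABC1234")           # exactly 7 characters
assert re.fullmatch(r".{7}", "ABC12345") is None  # 8 characters: fails
```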

You can download the rule file for use in Caristix Workgroup or Test software.

Download the rule file (Field length.cxf)

Learn more about how to import validation rules into an inbound HL7 task.

Validate Field containing a limited set of characters

How to build a Segment/Field rule validating that a field contains a limited set of characters

We’ll validate that PID.19 (SSN Number) is 9 digits long.

  1. In the inbound HL7 task, select the Validation tab
  2. Select the Segment/Field Validation tab
  3. Add the rule:  PID.19 is matching regex ^[0-9]{9}$
    The rule means that from the beginning of the field value (^) up to the end ($), there are exactly 9 ({9}) digits ([0-9]).

    Note: The following rule is equivalent:  PID.19 is matching regex ^[0123456789]{9}$; it simply lists the allowed characters one by one.  Feel free to change the list of characters to adapt it to your situation.
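You can try the intent of this rule (exactly nine digits) with Python's re module; the ^ and $ anchors make the pattern match the whole field value:

```python
import re

SSN = re.compile(r"^[0-9]{9}$")  # nine digits, nothing more

assert SSN.match("123456789")
assert SSN.match("12345678") is None      # too short
assert SSN.match("123-45-6789") is None   # dashes not allowed
```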

 You can download the rule file for use in Caristix Workgroup or Test software.

Download the rules file (Field contains some characters only.cxf)

Learn more about how to import validation rules into an inbound HL7 task.

Validate Field does not contain a set of characters

How to build a Segment/Field rule validating that a field does not contain a set of characters

We’ll validate that PID.19 (SSN Number) doesn’t contain any letters or dashes.

    1. In the inbound HL7 task, select the Validation tab
    2. Select the Segment/Field Validation tab
    3. Add the rule:  PID.19 is matching regex ^[^a-zA-Z-]+$

      The rule means that from the beginning of the field value (^) up to the end ($), no character may come from the following ranges (the ^ inside the brackets negates the class):

      • from a to z (lower case letters),
      • from A to Z (upper case letters) and
      • the dash (“-“) character. 
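The same negated character class works identically in Python's re module, where the + quantifier also requires the field to be non-empty:

```python
import re

NO_LETTERS_OR_DASHES = re.compile(r"^[^a-zA-Z-]+$")

assert NO_LETTERS_OR_DASHES.match("123456789")            # digits only: passes
assert NO_LETTERS_OR_DASHES.match("123-45-6789") is None  # dash rejected
assert NO_LETTERS_OR_DASHES.match("12345678A") is None    # letter rejected
```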

 You can download the rule file for use in Caristix Workgroup or Test software.

Download the rules file (Field not containing some characters.cxf)

Learn more about how to import validation rules into an inbound HL7 task.

Validate Field is a valid date

How to build a Segment/Field rule validating that a field contains a valid date.  This one is sophisticated – take a look at the logic below.

We’ll validate that MSH.7 (Date/Time of message) contains a date. 

  1. In the inbound HL7 task, select the Validation tab
  2. Select the Segment/Field Validation tab
  3. Add the rule: MSH.7 is matching regex ^20\d\d(0[1-9]|1[012])(0[1-9]|[12]\d|3[01])([01]\d|2[0-3])([0-5]\d)([0-5]\d)$

    This rule means that:

    • Year (^20\d\d): The first 2 characters must be 20 and the next 2 must be digits
    • Month (0[1-9]|1[012]): Number between 01 and 12
    • Day (0[1-9]|[12]\d|3[01]): Number between 01 and 31
    • Hour ([01]\d|2[0-3]):  Number between 00 and 23
    • Minute ([0-5]\d): Number between 00 and 59
    • Second ([0-5]\d): Number between 00 and 59

We think this is a nice one…
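With the backslashes written out (\d matches any digit), the same pattern can be exercised in Python's re module:

```python
import re

DTM = re.compile(
    r"^20\d\d(0[1-9]|1[012])(0[1-9]|[12]\d|3[01])"
    r"([01]\d|2[0-3])([0-5]\d)([0-5]\d)$"
)

assert DTM.match("20240131235959")          # valid yyyyMMddHHmmss
assert DTM.match("20241301120000") is None  # month 13 rejected
assert DTM.match("20240131126000") is None  # minute 60 rejected
```

Note that the pattern checks numeric ranges only: it would still accept a calendar-impossible value such as February 30, and it only accepts years starting with 20.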

 You can download the rule file for use in Caristix Workgroup or Test software.

Download the rules file (Field is a valid date.cxf)

Learn more about how to import validation rules into an inbound HL7 task.

Validate Field Value Mapping

This tutorial explains how to build a Segment/Field rule validating that a field transformation is based on a mapping table.

In this example, we’ll validate that PID.8 (Administrative Sex) is transformed following this mapping table:

  1. Create an Excel file containing the mapping table as illustrated above
  2. Create a new suite variable
    • Select the root node in the suite tree (suite name)
    • Select the Variables tab on the right
    • Click Add…  A new row is added to the grid
    • Set the variable NAME to GenderMapping
    • Set the variable TYPE to Mapping Table
    • Set the generator type to Excel File
    • Enter the File and the Worksheet the table is in
  3. In the inbound HL7 task, select the Validation tab
  4. Select the Segment/Field Validation tab
  5. Add the rule: PID.8 is = ${GenderMapping[${CxLastOutboundMessage[%PID.8%]}]}

     

    This rule tells the application to:

    • Get initial PID.8 value (${CxLastOutboundMessage[%PID.8%]}):  PID.8 value of the message just sent
    • Get mapping value (${GenderMapping[…]}): Get the mapping value in the mapping table

    In other words, the validation rule loads the mapping table and returns the mapping value (M) for the initial PID.8 field (1).
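In Python terms, the mapping table behaves like a dictionary keyed on the outbound PID.8 value. A sketch of the rule's logic, using the 1-to-M mapping mentioned above (the other codes are illustrative, not taken from the tutorial's Excel file):

```python
# Hypothetical mapping table: code sent in PID.8 -> code expected back.
gender_mapping = {"1": "M", "2": "F", "3": "U"}

sent_pid8 = "1"                       # ${CxLastOutboundMessage[%PID.8%]}
expected = gender_mapping[sent_pid8]  # ${GenderMapping[...]} -> "M"
```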

Download the rules ( Field value mapping.cxf )

Learn more about how to import validation rules into an inbound HL7 task.

Validate Field has no leading 0s

This tutorial explains how to build a Segment/Field rule validating that leading 0s were removed from a field.

In this example, let’s validate that PID.3.1 (Patient Identifier) has no leading zeros. 

  1. In the inbound HL7 task, select the Validation tab
  2. Select the Segment/Field Validation tab
  3. Add the rule: PID.3.1 is matching regex ^[1-9]\d*$

    This rule means that:

    • The first character must be between 1 and 9, not 0 (^[1-9])
    • The remaining characters must be digits between 0 and 9 (\d*$)
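With the backslash restored (\d), the pattern can be verified in Python's re module:

```python
import re

NO_LEADING_ZERO = re.compile(r"^[1-9]\d*$")

assert NO_LEADING_ZERO.match("70012")          # starts with 7: passes
assert NO_LEADING_ZERO.match("00712") is None  # leading zero rejected
assert NO_LEADING_ZERO.match("5")              # single non-zero digit passes
```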

Download the rules file (Field has no leading 0s.cxf)

Learn more about how to import validation rules into an inbound HL7 task.

Validate Field is empty

This tutorial explains how to build a Segment/Field rule validating that a field has no value

In this example, we’ll validate that PV1.45 (Discharge Date/Time) is not set.

  1. In the inbound HL7 task, select the Validation tab
  2. Select the Segment/Field Validation tab
  3. Add a rule: PV1.45 is empty
  4. Click Advanced Mode
  4. Add a second rule: Or PV1.45 is not present

    These rules mean:

    • First rule:  PV1.45 is not set.  Delimiters are present but there is nothing in the field.
    • Second rule: This rule covers the case where delimiters are not provided for PV1.45.  For instance, the last field is PV1.44 Admit Date/Time with no delimiter at the end.

Download the rules file (Field is empty.cxf)

Learn more about how to import validation rules into an inbound HL7 task.

Validate Field is in profile code set

This tutorial explains how to build a Segment/Field rule validating that a field value is in a predefined code set.

In this example, we’ll validate that PID.8 (Administrative Sex) is equal to one of the codes in the following table.  The table is preset in a conformance profile. 

To learn more about how to add or customize a table in a conformance profile, refer to the profile documentation

    1. In the inbound HL7 task, select the Validation tab
    2. Select the Segment/Field Validation tab
    3. Add a rule: PID.8 is in table 0001 – Administrative Sex

The rule returns a pass (success) if it can find the PID.8 field value in the conformance profile table.  If it doesn’t, the validation fails.

Download the rules file (Field value is in table.cxf)

Learn more about how to import validation rules into an inbound HL7 task.

Validate Field is in list

This tutorial explains how to build a Segment/Field rule validating that a field value is in a list of values.

In this example, we’ll validate that PID.8 (Administrative Sex) is equal to one of the codes in the provided list.  In this case, you set the list within the validation rule.  To refer to a list defined in a conformance profile, see the Validate Field is in profile code set tutorial.

  1. In the inbound HL7 task, select the Validation tab
  2. Select the Segment/Field Validation tab
  3. Add a rule: PID.8 is in M,F,U

The rule returns a pass (success) if it can find the PID.8 field value in the provided list of values. If it doesn’t, the validation fails.  Make sure each value is separated by a comma (“,”).

Download the rules file (Field is in list.cxf)

Learn more about how to import validation rules into an inbound HL7 task.

Validate Segment exists

How to build a Segment/Field rule validating that a segment exists

In this example, we’ll validate that the PV2 segment exists and IN1 doesn’t exist.

  1. First set up your suite, scenario, and action.
  2. In the inbound HL7 task, select the Validation tab
  3. Select the Segment/Field Validation tab
  4. Add a first rule:  PV2 is present
  5. Add a second rule: IN1 is not present

Download the rule file (Segment exists.cxf)

Learn more about how to import validation rules into an inbound HL7 task.

Diagram

A diagram helps you represent the architecture of your systems and the different dataflows between them.

Diagram elements

Diagram elements

Item

A diagram item represents a system or anything that interacts in the environment.

Dataflow

A dataflow represents the path taken by messages or any other type of information within the environment. It also represents the configuration needed by systems to communicate this information.

Segment

A dataflow segment is a part of a dataflow. It represents a link between two items (systems).

Segment Item

A dataflow segment item is one end of a dataflow segment. It represents one of the two end points between two items (systems).

Manually create a diagram

Add Item

Drag a system from the Drawings section on the right and drop it in the main section.

Add dataflow

Drag a dataflow from the Drawings section on the right and drop it on the first item (system) that represents the flow you want to create. Then continue clicking on items to include in the dataflow.

You can also start a dataflow by right-clicking on an item and selecting New Dataflow…

When you’re done, right-click in a blank area and click Confirm Dataflow.

Import from messages

A diagram can be created from message logs by detecting the sending and receiving application values in the MSH segment of each message.

The source can be a file in your Library, a local file, a database or an interface engine using a Caristix Connector.

In the Diagram Editor, click TOOLS -> Import Dataflow… then select From messages and click OK. Browse and select all the files needed to generate the diagram.

Import from Excel

Caristix provides an Excel Template to list your systems and dataflows and import them into a diagram.

The template is included in the application installation and located in %AllUsersProfile%\Application Data\Caristix\Common\Samples\Excel\InterfaceEngineTemplate.xlsx.

Create a  copy of the template file and edit it to represent your environment. Then in the Diagram Editor, click on TOOLS -> Import Dataflows…

In the Import Dataflows window, select From Excel file and click OK, then browse to the Excel file you just created.

Import with connectors

Caristix Connectors can be used to fetch a diagram representation directly from an interface engine.

In the Diagram Editor, click on TOOLS -> Import Dataflows… Select From interface engine and choose the connection to use. You can add or edit the connections by clicking the Connections… link.

Edit Item

Item (or system) information can be edited using the top-right section of the Diagram Editor.

Type

Choose the icon that represents the type of item in the diagram. You can use one of the icons provided or use one of your own image files by clicking the folder icon on the right.

Name

The logical name of the item.

Display Name

The name used to represent the item in the diagram under the item icon.

Vendor

The name of the system vendor.

IP Address

The IP address of the system, if applicable.

Description

A description of the item.

Edit Dataflow

Dataflow information can be edited using the top-right section of the Diagram Editor

Name

The logical name of the dataflow.

Display Name

The name used to represent the dataflow in the diagram.

Description

A description of the dataflow.

Edit dataflow path

Continue a dataflow

To continue a dataflow already created, right-click on the last segment item of the dataflow (an arrow) and click on Unlink…

Then select the other items to include. When you’re done, right-click in a blank area and click Confirm Dataflow.

Split a dataflow

You can split an already created dataflow into two parts.

Simply right-click on one of the segment items where you want to make the split (a dot or an arrow). Then  continue adding items to the current dataflow or confirm the change.

Merge two dataflows

While editing a dataflow, you can merge it with an existing dataflow to create a single entry.

When in edit mode, instead of adding new items, click on the first segment of another dataflow (on the dot in the middle of the segment).

Edit Segment

Dataflow segment information can be edited by right-clicking on the dot in the middle of the segment and clicking Edit Information…

In the Edit Segment window, you can change the type of information that the segment transfers.

You can also edit the list of related documents located in your Library or directly on your local computer.

Edit Segment Item

Dataflow segment items information can be edited by right-clicking directly on the end point (a dot or an arrow) and clicking Edit Information…

In the Edit Segment Item window, you can change the type of end point as well as its configuration properties.

You can also edit the list of related documents located in your Library or directly on your local computer.

Add sub-level details

A diagram is a multi-level representation. Each item can be expanded and can contain other items.

To expand an item, right-click on it and click Add sub-level details. This will create a new item within the selected item and navigate to it.

Items that have sub-levels will be identified with a little diagram icon in the top-right corner of their normal icon.

Command Line

Command Line

Caristix Workgroup allows you to execute common tasks from the command line, so you can automate operations such as data conversion, de-identification, and test execution. To do so, use the WorkgroupConsole executable located in the software’s installation folder (typically C:\Program Files (x86)\Caristix\Caristix Workgroup).

You can open a command prompt and type the following command to get a list of available commands:
WorkgroupConsole.exe help

To get help on a particular command, type:
WorkgroupConsole.exe help <command-name>
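These console invocations can be scripted for batch automation. Below is a minimal Python sketch of how one might wrap WorkgroupConsole calls; the install path is the default mentioned above, and `build_command`/`run_command` are hypothetical helper names for illustration, not part of the product.

```python
import subprocess
from pathlib import Path

# Default install location named in this guide; adjust for your machine.
CONSOLE = Path(r"C:\Program Files (x86)\Caristix\Caristix Workgroup") / "WorkgroupConsole.exe"

def build_command(name, *args):
    """Assemble the argument list for one WorkgroupConsole invocation."""
    return [str(CONSOLE), name, *args]

def run_command(name, *args):
    """Run the command and return its exit code (requires Workgroup installed)."""
    return subprocess.run(build_command(name, *args)).returncode

# Example: ask for help on a specific command.
cmd = build_command("help", "Convert-HL7-to-XML")
```

From a scheduler or build script, `run_command` could then chain conversion, de-identification, and test execution steps back to back.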

Available Commands

This command will convert HL7v2 messages from HL7v2-ER7 format (pipe-delimited) to the HL7v2-XML format. To get help with this command, type: WorkgroupConsole.exe help Convert-HL7-to-XML

C:\Program Files (x86)\Caristix\Caristix Workgroup>WorkgroupConsole.exe help Convert-HL7-to-XML

** Convert-HL7-To-XML **

e.g. Convert-HL7-To-XML C:\first-document.hl7 D:\second-document.hl7 [-cp  -ConformanceProfile "C:\HL7Reference\HL7 v2.5.1.cxp"] [-r  -Results "D:\results\"] [-lp  -LogsFilePath "C:\logs.txt"]

Source files : The documents to Convert (can also be folders).
-cp [required] : Conformance Profile file path. The value has to be a .cxp path.
-r [optional] : Result folder path. The value has to be a folder [default: .\Results].
-lp [optional] : Logs file path.

This command will convert HL7v2 messages from the HL7v2-XML format to the HL7v2-ER7 format (pipe-delimited). To get help with this command, type: WorkgroupConsole.exe help Convert-XML-to-HL7

C:\Program Files (x86)\Caristix\Caristix Workgroup>WorkgroupConsole.exe help Convert-XML-to-HL7

** Convert-XML-To-HL7 **

e.g. Convert-XML-To-HL7 C:\first-document.xml D:\second-document.xml [-r  -Results "D:\results\"] [-rt  -ResultType "MessageCount 100"] [-lp  -LogsFilePath "C:\logs.txt"]

Source files : The documents to Convert (can also be folders).
-r [optional] : Result file path. The value has to be a file by default [default: .\result.txt].
-rt [optional] : Result format type:
  'InitialFileStructure' to reflect the initial file structure (-r is required for InitialFileStructure; the -r value has to be a folder)
  'CustomizedSize' to split by file size, in MB, followed by the size amount (the -r value has to be a file)
  'MessageCount' to split by message count, followed by the amount (the -r value has to be a file)
  'NoSplit' to save the result to a single file (default value; the -r value has to be a file)
-lp [optional] : Logs file path.
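The -rt result types above control how converted output is grouped into files. As an illustration only (a sketch of the described behavior, not the Caristix implementation), here is how the 'MessageCount' and 'NoSplit' grouping rules behave:

```python
def split_messages(messages, result_type="NoSplit", count=100):
    """Mimic the -rt grouping rules described in the help text:
    'NoSplit' keeps everything in one output; 'MessageCount' groups
    every `count` messages into a separate output."""
    if result_type == "NoSplit":
        return [messages]
    if result_type == "MessageCount":
        return [messages[i:i + count] for i in range(0, len(messages), count)]
    raise ValueError(f"unsupported result type: {result_type}")

# Five messages split two at a time yield groups of sizes 2, 2, 1.
groups = split_messages([f"MSH|msg-{n}" for n in range(5)], "MessageCount", count=2)
```

'CustomizedSize' works the same way, except the cut-off is cumulative file size in MB rather than message count.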

This command will de-identify HL7v2-ER7 messages. To get help with this command, type: WorkgroupConsole.exe help De-Identify-HL7

C:\Program Files (x86)\Caristix\Caristix Workgroup>WorkgroupConsole.exe help De-Identify-HL7

** De-Identify-HL7 **

e.g. De-Identify-HL7 C:\first-document.hl7 D:\second-document.hl7 -de  -DeIdentificationRules "C:\My DeIdentification settings.cxd" [-cp  -ConformanceProfile "C:\HL7Reference\HL7 v2.5.1.cxp"] [-pi  -PersistentIdentities "D:\persistence-xml.dic"] [-r  -Results "D:\results.hl7"] [-rt MessageCount 100] [-opt  -Options GenerateValueOnEmptyField|IgnoreQuote] [-mbd  -MessageBeginningDelimiter "regex"] [-med  -MessageEndingDelimiter "regex"] [-sed  -SegmentEndingDelimiter "regex"] [-lp  -LogsFilePath "C:\logs.txt"]

Source files : The documents to De-Identify (can also be folders).
-de [required] : De-identification settings file path.
-cp [optional] : Conformance Profile file path. Required if your de-identification file contains data-type settings, or if any de-identification settings have a precondition.
-pi [optional] : Persisted identities file path (if the file already exists, the context will be loaded from it).
-r [optional] : Result file path [default: .\results.txt].
-rt [optional] : Result format type:
  'InitialFileStructure' to reflect the initial file structure (-r is required for InitialFileStructure; the -r value has to be a folder)
  'CustomizedSize' to split by file size, in MB, followed by the size amount (the -r value has to be a file)
  'MessageCount' to split by message count, followed by the amount (the -r value has to be a file)
  'NoSplit' to save the result to a single file (default value; the -r value has to be a file)
-opt [optional] : Set de-identification options:
  'ConsiderIdAsNumeric' to consider 001234 and 1234 as equivalent
  'GenerateValueOnEmptyField' to populate empty fields with generated values if applicable
  'IgnoreQuote' to consider '1234', "1234" and 1234 as equivalent
  remark: GenerateValueOnEmptyField|IgnoreQuote will enable both options.
-mbd [optional] : Message beginning delimiter (in regex format)
-med [optional] : Message ending delimiter (in regex format)
-sed [optional] : Segment ending delimiter (in regex format)
-lp [optional] : Logs file path.
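The -mbd, -med and -sed flags take regular expressions that tell the tool where messages and segments begin and end in the input stream. As an illustration of the kind of pattern involved (the sample data and the pattern are invented for this sketch, not defaults of the tool):

```python
import re

# Sample ER7 stream: two messages, each starting with an MSH segment,
# segments terminated by carriage returns. Data is invented for illustration.
raw = "MSH|^~\\&|A\rPID|1\rMSH|^~\\&|B\rPID|2\r"

# A beginning delimiter in the spirit of -mbd: a new message starts
# wherever "MSH|" appears (a zero-width lookahead keeps the MSH segment
# attached to the message it opens).
message_beginning = re.compile(r"(?=MSH\|)")
messages = [m for m in message_beginning.split(raw) if m]
```

In practice you would choose delimiters matching how your export was framed, e.g. blank lines between messages or MLLP control characters.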

This command will de-identify HL7v2-XML messages, HL7v3 documents, or FHIR-XML resources. To get help with this command, type: WorkgroupConsole.exe help De-Identify-XML

C:\Program Files (x86)\Caristix\Caristix Workgroup>WorkgroupConsole.exe help De-Identify-XML

** De-Identify-Xml **

e.g. De-Identify-Xml C:\first-document.xml D:\second-document.xml -de <or> -DeIdentificationRules "C:\My DeIdentification rules.cxdx" [-cp <or> -ConformanceProfile "C:\HL7Reference\CCD (Continuity of Care).cxpx"] [-pi <or> -PersistentIdentities "D:\persistence-xml.dic"] [-r <or> -Results "D:\results\"] [-lp <or> -LogsFilePath "C:\logs.txt"]

Source files : The documents to De-Identify (can also be folders).
-de [required] : De-identification rules file path.
-cp [optional] : Conformance Profile file path.
-pi [optional] : Persisted identities file path (if the file already exists, the context will be loaded from it).
-r [optional] : Result folder path. The value has to be a folder [default: .\Results].
-lp [optional] : Logs file path.

This command will execute a Caristix Scenario Suite. To get help with this command, type: WorkgroupConsole.exe help Execute-Test

C:\Program Files (x86)\Caristix\Caristix Workgroup>WorkgroupConsole.exe help Execute-Test

** Execute-Test **

e.g. Execute-Test C:\myScenarioSuite.cxs [-r <or> -ReportingEnabled y] [-rp <or> -ReportPath C:\resultingReport.xlsx] [-e <or> -LogExecutionEnabled y] [-ep <or> -LogExecutionPath "C:\ProgramData\Caristix\Caristix Test\Execution logs\"] [-run <or> -PathsToRun "scenario 1/action 1" "scenario 2/Action 1/task 1"] [-skip <or> -PathsToSkip "scenario 1/action 1" "scenario 2/Action 1/task 1"] [-lp <or> -LogsFilePath "C:\customLogPath.log"] [-var <or> -EditVariables "${MyVariable}[0].LimitationMax=5"] [-env <or> -Environments "MyEnvironment"]

Source file : The ScenarioSuite file to execute
-r [optional] : Output an Excel report file or not (y or n, default is n)
-rp [optional] : Excel report file path (default is '.\report.xlsx')
-er [optional] : Include extended report details. (y or n, default is y)
-e [optional] : Save execution result (y or n, default is n)
-ep [optional] : Execution result path (default is '.\result.xml')
-run [optional] : List of scenarios, actions and tasks to run in the scenario suite (cannot be used with -skip)
-skip [optional] : List of scenarios, actions and tasks to skip in the scenario suite (cannot be used with -run)
-skip "scenario 1" should skip the scenario 1
-skip "scenario 1/action 2" should skip the action 2 in scenario 1
-skip "scenario 1/action 2/task 1" should skip the task 1 in scenario 1/action 2
-lp [optional] : Logs file path (default is 'TestConsole.log')
-var [optional] : List of scenario suite variables to edit while running the suite.
-env [optional] : Active environment name (default is the environment active set in the scenario suite)

This command will compare two sets of HL7v2-ER7 messages and create a report listing the differences.

To get help with this command, type: WorkgroupConsole.exe help Message-Comparison-HL7

C:\Program Files (x86)\Caristix\Caristix Workgroup>WorkgroupConsole.exe help Message-Comparison-HL7

** Message-Comparison-HL7 **

e.g. Message-Comparison-HL7 C:\first-document.hl7 C:\second-document.hl7 [-cfg -Configuration "C:\Message Comparison Configuration.xml"] [-r -Report "C:\report.pdf"] [-rc -ReportComments ""] [-or -OpenReport] [-lp -LogsFilePath "C:\logs.txt"]

Source files : The documents to Compare (can also be folders).
-cfg [optional] : Message Comparison Configuration file path
-r [optional] : Report file path (.pdf or .xlsx)
-rc [optional] : Report comments
-or [optional] : Open the report after the generation is completed
-lp [optional] : Logs file path.

This command will extract a subset of HL7v2-ER7 messages, according to the provided filter rules.

To get help with this command, type: WorkgroupConsole.exe help Search-And-Filter-HL7

C:\Program Files (x86)\Caristix\Caristix Workgroup>WorkgroupConsole.exe help Search-And-Filter-HL7

** Search-And-Filter-HL7 **

e.g. Search-And-Filter-HL7 C:\first-document.hl7 D:\second-document.hl7 -sfr  -SearchAndFilterRules "C:\MySearchAndFilterRules.cxf" [-cp -ConformanceProfile "C:\HL7Reference\HL7 v2.5.1.cxp"] [-r -Results "D:\results.hl7"] [-rt -ResultType "MessageCount 100"] [-lp -LogsFilePath "C:\logs.txt"]

Source files : The documents to Search And Filter (can also be folders).
-sfr [required] : Search-and-filter rules file path.
-cp [optional] : Conformance Profile file path. Required if the search-and-filter rules reference the spec.
-r [optional] : Result file path [default: .\result.txt].
-rt [optional] : Result format type:
  'InitialFileStructure' to reflect the initial file structure (-r is required for InitialFileStructure; the -r value has to be a folder)
  'CustomizedSize' to split by file size, in MB, followed by the size amount (the -r value has to be a file)
  'MessageCount' to split by message count, followed by the amount (the -r value has to be a file)
  'NoSplit' to save the result to a single file (default value; the -r value has to be a file)
-lp [optional] : Logs file path.
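As an illustration of what a search-and-filter extraction does (a sketch of the concept, not the actual rules engine; .cxf rule semantics are not shown), here is a Python sketch selecting the subset of messages whose MSH-9 message type matches a predicate:

```python
def filter_messages(messages, predicate):
    """Keep only the messages for which the predicate holds (illustrative)."""
    return [m for m in messages if predicate(m)]

def message_type(message):
    """Extract MSH-9 (message type) from a pipe-delimited ER7 message.
    After splitting on '|', MSH-9 sits at index 8 (index 0 is 'MSH')."""
    msh = message.split("\r")[0].split("|")
    return msh[8] if len(msh) > 8 else ""

# Invented sample messages: one ADT admit, one ORU result.
msgs = [
    "MSH|^~\\&|A|B|C|D|20240101||ADT^A01|1|P|2.5\rPID|1\r",
    "MSH|^~\\&|A|B|C|D|20240101||ORU^R01|2|P|2.5\rPID|2\r",
]
adt_only = filter_messages(msgs, lambda m: message_type(m).startswith("ADT"))
```

Real filter rules can of course combine criteria across any segment and field, not just the message type.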

Execute Test

The Execute-Test command executes Caristix Test Scenario suites. The syntax is as follows:

$ Execute-Test C:\myScenarioSuite.cxs

The command takes one argument, which is the full path to the Scenario Suite source file.

Flags

You can also provide the following optional flags:

-ReportingEnabled

Abbreviated as -r. Output an Excel report file or not. Accepted values are y (yes) or n (no). Default is n.

$ Execute-Test C:\myScenarioSuite.cxs -r y

-ReportPath

Abbreviated as -rp. Excel report file path. Default is ‘.\report.xlsx’.

$ Execute-Test C:\myScenarioSuite.cxs -r y -rp C:\resultingReport.xlsx

-LogExecutionEnabled

Abbreviated as -e. Save execution result or not. Accepted values are y (yes) or n (no). Default is n.

$ Execute-Test C:\myScenarioSuite.cxs -e y

-LogExecutionPath

Abbreviated as -ep. Execution result path. Default is ‘.\result.xml’.

$ Execute-Test C:\myScenarioSuite.cxs -e y -ep "C:\ProgramData\Caristix\Caristix Test\Execution logs\"

-PathsToRun

Abbreviated as -run. List of scenarios, actions and tasks to run in the scenario suite. Accepted values are the paths to those scenarios, actions or tasks within the Scenario Suite. Cannot be used with -skip.

$ Execute-Test C:\myScenarioSuite.cxs -run "scenario 1/action 1" "scenario 2/Action 1/task 1"

-PathsToSkip

Abbreviated as -skip. List of scenarios, actions and tasks to skip in the scenario suite. Accepted values are the paths to those scenarios, actions or tasks within the Scenario Suite. Cannot be used with -run.

$ Execute-Test C:\myScenarioSuite.cxs -skip "scenario 1/action 1" "scenario 2/Action 1/task 1"
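The -run and -skip arguments act as an allow-list and a deny-list over "scenario/action/task" paths. The helper below is our own sketch of one plausible reading of that matching (assuming a path is covered when it equals a listed path or falls under it), not the Test engine itself:

```python
def should_execute(path, run=None, skip=None):
    """Decide whether a 'scenario/action/task' path executes, given an
    allow-list (-run) or a deny-list (-skip) of path prefixes. Illustrative."""
    if run is not None and skip is not None:
        raise ValueError("-run and -skip cannot be combined")

    def covered(prefixes):
        # A path is covered if it matches a listed path or sits beneath one.
        return any(path == p or path.startswith(p + "/") for p in prefixes)

    if run is not None:
        return covered(run)
    if skip is not None:
        return not covered(skip)
    return True  # no filter: everything runs

# Skipping "scenario 1/action 2" also skips the tasks under it.
skipped = should_execute("scenario 1/action 2/task 1", skip=["scenario 1/action 2"])
```

This mirrors the examples in the help text: skipping "scenario 1" skips everything inside scenario 1, while skipping "scenario 1/action 2" leaves the rest of scenario 1 running.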

-LogsFilePath

Abbreviated as -lp. Logs file path. Default is ‘TestConsole.log’.

$ Execute-Test C:\myScenarioSuite.cxs -lp "C:\customLogPath.log"

-EditVariables

Abbreviated as -var. List of scenario suite variables to edit while running the suite. See the EditVariables section below for more information on this flag.

$ Execute-Test C:\myScenarioSuite.cxs -var "${MyVariable}[0].LimitationMax=5" "${MyVariable}.LimitationMin=2"

EditVariables

The EditVariables flag allows you to manually change the properties of variables’ value generators. Its syntax contains four elements.

Variable Name

The scenario suite variable’s full name.

Variable Index

Optional. If the variable’s value generator has multiple sub-variables, you can specify the index of the sub-variable you want to edit. By default, the index is 0.

Property name

The property of the variable’s value generator that you want to edit.

Value

The value you want to assign to the modified property.

$ Execute-Test C:\myScenarioSuite.cxs -var "${MyNumeric}[2].IncrementSequenceValue=0.6"
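The four elements can be pulled apart mechanically. Below is a small Python sketch of a parser for the -var expression syntax described above (our own illustration; validation of property names and values is omitted):

```python
import re

# The four elements: ${Name}[index].Property=value, where the
# index is optional and defaults to 0, as described above.
EDIT = re.compile(
    r"^\$\{(?P<name>[^}]+)\}"      # variable name
    r"(?:\[(?P<index>\d+)\])?"     # optional sub-variable index
    r"\.(?P<prop>\w+)"             # property of the value generator
    r"=(?P<value>.+)$"             # value to assign
)

def parse_edit(expr):
    """Split a -var expression into (name, index, property, value)."""
    m = EDIT.match(expr)
    if not m:
        raise ValueError(f"not a valid -var expression: {expr}")
    return (m["name"], int(m["index"] or 0), m["prop"], m["value"])

parsed = parse_edit("${MyNumeric}[2].IncrementSequenceValue=0.6")
```

Multiple -var expressions can be supplied in one Execute-Test call, each parsed and applied independently.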


Generator Types

The following value generator types are available in Test:

Options

Options in Workgroup

From the Main Menu, click Tools, then Options in the drop-down menu that appears.

A new Options window opens.

Check for updates on startup.

  • Every time you start Workgroup, the software will check for available updates. You can manually check for updates by going to Help → Check for Updates

Show tips

  • Displays information boxes that provide guidance on Workgroup features. To hide a tip permanently, click the “X” button.


Use the “Reset hidden tips” link to restore all hidden tips.

Show Did You Know

  • Every time you start Workgroup, the software will show a ‘Did you know’ article.

Always ask for delete confirmation

  • If checked, each time you perform a delete operation, you will be asked to confirm your action.

Always ask for confirmation when a change might affect other elements

  • If checked, each time you edit a property in a profile that might affect other elements, you will be asked to confirm your action.

Collaboration

The built-in collaboration back-end allows customers, vendors, and 3rd parties to work as a team on one or more interfacing projects, with appropriate permission levels set by the account owners.  Teams can now collaborate on creating, sharing, and tracking interface profiles and associated tasks.

Set access rights based on user role within the repository. The roles and their access rights are:
Guest
  • Read-only access to folder contents.
  • Guests cannot modify files.
  • Role usually assigned to users who need to view a file as a reference document.
Contributor
  • Read-write access to folder content.
  • Can modify and create profiles or add files within this folder.
  • Role usually assigned to users who are actively working on an interfacing project.
Manager
  • Read-write access to folder content.
  • Can modify and create both folders and files.
  • Can also invite new project community members (users).
  • Role usually assigned to users managing the interfacing project.
Owner
  • This is a special role. This access right is set up during Caristix implementation for the customer’s billing contact. The role cannot be re-assigned without contacting Caristix.

Further role-related tasks:

Join a Library with an Email Invitation

If you’ve received an email invitation to join a Library, do the following:

  • Install Caristix software. Contact the Library manager for your installation files.
  • Log in to the Library using the email and password provided in the email invitation.

Access a Library

To access a shared library, follow these steps:

    • On the right-hand side of the Main Menu, click the LOGIN button.
    • Enter the following information:
  • Server URL: http://central.caristix.com (default value). If Caristix Workgroup is deployed within your organization, ask the person who invited you for the Server URL.
  • Email: your email address. The email address must be the one you used to register to the service. Please refer to the invite email for more details.
  • Password: your password. The initial password was provided in the invite email. We recommend you change it the first time you log in to the system.
  • Click Login
  • If you can access more than one Library, a dialog will appear:
    • Select the library you want to connect to
    • Click Select

Once logged in, the Library is accessible.

The HL7 Reference folder contains standard HL7 International profiles. They are the official profiles as defined by the standardization organization. They are read-only and are used as reference only. However, you can create copies in a new folder for further customization.

The other folders contain profiles created and shared by you and your team members. Feel free to take a look at them.


User Account

To change your user information:

  • Log in to a Caristix application.
  • In the Main Menu, click on the white arrow to the right of your username
  • Select User Account
  • Change your information
  • Click OK

Change My Password

To change your password:

  • Log in to a Caristix application.
  • In the Main Menu, click on the white arrow to the right of your username
  • Select User Account
  • In the User account dialog box, select Change Password
  • Enter your current password and your new password (twice).
  • Click OK

Passwords are case-sensitive, must be 8 characters long, and cannot contain spaces. Make sure your password is strong enough to protect any sensitive information the Library might contain.

Contributor Tasks

As a Contributor, you will be able to perform tasks related to integration content creation and editing, such as:

Add New Documents to the Library

To share documents with the rest of the group, you need to add them to the Library.  You can do so using one of the following ways:

Import document

  • Navigate to the Documents view
  • Right-click the folder you want to add the document to
    Note: You can also create a new folder by right-clicking the parent folder and select New –> Folder
  • Click Import Document…
  • Select the document(s) you want to share.

Documents will be uploaded to the library and made available.

Drag document to the library

  • Navigate to the Documents view
  • Select the document(s) or folder(s) and drag them to the destination folder

Documents and folders will be uploaded to the library and made available.

Once documents are shared…

Once documents are shared, you can manage sharing and privileges and/or manage notifications when documents are modified.

Work With Previous Document Versions

As you work through an interfacing project, you may need to consult older versions of a document. The internal storage structure of Caristix Workgroup makes it possible to view and retrieve previous versions of documents.  Each version is stored and can be accessed as needed.

To view the list of previous versions:

  • Log in to a Library in Caristix Workgroup.
  • Right-click a document
  • Select Version History

From here, you can:

If you’ve selected a Profile, you will also be able to 

View a Previous Version

To view a different document version:

  • Navigate to the list of previous versions of your document.
  • Select the version you want to view.
  • Click View. The version of the document will be shown. 
  • You can modify it and save it as a new document into your library.

Promote a Previous Version

You may need to undo several changes and revert to a previous version of a document.  Or you may want to promote a previous version as the working version.  Promoting a previous version will replace the current and latest version with the version you select.

To restore a previous version:

A dialog appears, stating that the previous version of the document will replace the current one.

  • Click OK

The promoted version is now the current document.

Compare a version with the working copy

Compare an older version with the working copy

You can compare an older version with the current version of your Profile. To do so:

  1. Right-Click any version item from the Version History window. 
  2. Click Compare With Working Copy…

The Gap Analysis Workbench will open, showing you differences between the current version (left-side) and the selected version (right side).

Compare two versions

Compare Profile versions

You can compare Profile versions. To do so:

  1. Select the two versions of the items you want to compare.
  2. Right-click any of these items.
  3. Click Compare versions…

The Gap Analysis Workbench will open, showing you differences between the selected versions.

Manager Tasks

Invite Other Users

You can invite others to join your Library so you can all work on the same documents and artifacts when needed. This avoids having multiple versions of the same document in circulation.

To invite a new user to join your Library:

  • Log in to a Library in Caristix Workgroup.
  • In the menu bar, on the right, click the arrow beside your username
  • Click Manage Library. The Manage Library screen appears
  • Select the Users tab and click the Add… button.
  • Check the Administrator checkbox if the user is an administrator of the Library. Administrator rights cover management of the Library. Administrators are automatically assigned the Manager role for all folders. We recommend giving administrator privileges to your core team members.
  • To configure group membership, see the Managing Groups section
  • To configure folder access for new users,  see the Managing Sharing and Privileges section
  • To configure user notifications, see the Managing Notifications section

An email is sent to new users notifying them of their new accounts. Users also get an automatically generated password.

Manage Sharing

Sharing permissions are folder-based.  Manage folder access as follows:

Add New User

  • Log in to a Library in Caristix Workgroup.
  • Navigate to a folder, and right-click.
  • Select Properties and select the Sharing tab
  • Click the Add… button. A new row is added to the grid
  • In the first cell (member column), select the user or group you want to add, and select a role.
  • If the user you’re looking for is not on the list, the user is not part of the Library community.  Invite him or her to join. Once invited, the user will appear on the list.

Change User Roles and Access Rights

  • Log in to a Library in Caristix Workgroup.
  • Navigate to a folder, and right-click.
  • Select Properties and select the Sharing tab
  • Select a user and select a new role.

Update Global Sharing through Library Manager Window

This is useful when you want to change sharing permissions for an entire group.

  • Log in to a Library in Caristix Workgroup.
  • In the Main Menu, click the arrow beside your username
  • Click Manage Library.
  • Select the Users tab
  • Select a user
  • To update role:
    • Check/Uncheck the Administrator check box
    • A library administrator can manage users, groups and notifications
  • In the tab panel at the bottom, select the Sharing tab
  • To update sharing permissions:
    • Select the folder you want to update
    • Select the new role
  • To add a sharing permission:
    • Click the Add... button. A new sharing row appears
    • In the first cell (folder column), select the library folder  you want to add
    • In the second column (role), select a role for the user
  • Click OK

Inherited Membership, Sharing and Notifications

When a user (or a group) is a member of another group (see Manage Groups), the group’s settings will be applied. These settings are shown in the user’s membership, sharing permissions and notifications as read-only. Settings specific to the current user or group remain editable.

For example, suppose a group (Group A) has a sharing permission on the folder “HL7 References”. If you make a user (John Doe) a member of Group A and then add a new sharing permission for John Doe, you will see two permissions. The first row (grayed) represents the permission inherited from membership in Group A. The second row is a permission specifically set for John Doe.

Manage Groups

Groups can be very useful when you have several users with similar sharing permissions accessing a Library. Placing users in groups simplifies access management, since you can apply across-the-board changes easily.

For instance, if you work for an HIT vendor or consulting firm – and need to provide guest access to hospital or provider users, you might want to manage all hospital users as a single group.  This will be easier to manage than setting permissions individually, and you’ll ensure that everyone in the group has the same privileges.

Manage these groups and their sharing permissions from the Manage Library section.

Note:  To create groups, you need Administrator rights. Refer to Manage Sharing to learn how to assign Administrator rights to a user.

Create Groups

  • Log in to a Library in Caristix Workgroup.
  • In the Main Menu, click the arrow beside your username
  • Click Manage Library. The Manage Library screen appears
  • Select the Groups tab
  • Click the Add… button
  • Provide a group name and a description
  • Click OK

Add Users/Groups to a Group

  • Log in to a Library in Caristix Workgroup.
  • In the Main Menu, click the arrow beside your username
  • Click Manage Library. The Manage Library screen appears
  • Select the Groups tab
  • In the tab panel at the bottom, select the Group Members tab
  • Click the Add… button. A new row appears
  • Select the user or group to add. (Note that you can add one group to another.)
  • Click OK

Manage Notifications

Notifications are quick email updates that are automatically sent to users when Library content is changed.

Notifications are set on folders, not individual documents. There are two notification types:

  • Creation: sent when a document is created.
  • Modification: sent when a document is modified or edited.

To add a notification, you need Manager privileges for the folder you are configuring.  Refer to section Manage sharing and privileges to learn how to provide Manager privileges to a user.

Add a Notification

  • Log in to a Library in Caristix Workgroup
  • In the Main Menu, click the arrow beside your username
  • Click Manage Library
  • Select the Users tab and pick a user
  • In the tab panel at the bottom, select the Notifications tab
  • Click the Add… button. A new notification row appears
  • In the first cell (folder column), select the library folder you want to add
  • In the second column (alert), select the alert you want to set to the user on this folder
  • Click OK

Modify a Notification

  • Log in to a Library in Caristix Workgroup
  • In the Main Menu, click the arrow beside your username
  • Click Manage Library
  • Select the Users tab and select a user
  • In the tab panel at the bottom, select the Notifications tab
  • Select the notification you want to update
  • Select a new notification
  • Click OK

How To / Tutorial

This section provides step-by-step guides and practical tutorials designed to help users understand and implement features efficiently. Each tutorial breaks down complex processes into clear, actionable steps, making it easy to follow along and achieve the desired results. Whether you are a beginner or an advanced user, these guides offer structured instructions, helpful tips, and best practices to ensure smooth execution.

De-Identifying HL7 messages

De-Identifying HL7 messages

To help you understand how to use CaristixTM Workgroup, see it in action in this video. The procedure is similar to CaristixTM Cloak. Only one step is added at the beginning. A transcript is below to help you follow the steps.

Transcript

The application replaces PHI with new patient data generated at run time, keeping patient history but removing any link to the actual patients.
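Conceptually, field-level de-identification rewrites identifying fields while leaving the message structure intact. A toy Python sketch of that idea on a pipe-delimited message (not the Caristix rule engine; the sample message and the helper are invented for illustration):

```python
def replace_field(message, segment_id, field_index, new_value):
    """Replace one field in every matching segment of an ER7 message,
    leaving the rest of the message untouched. Illustrative only."""
    out = []
    for seg in message.split("\r"):
        fields = seg.split("|")
        if fields[0] == segment_id and len(fields) > field_index:
            fields[field_index] = new_value
        out.append("|".join(fields))
    return "\r".join(out)

# Invented sample: de-identify the patient name (PID-5).
msg = "MSH|^~\\&|LAB\rPID|1||12345||DOE^JOHN\r"
deid = replace_field(msg, "PID", 5, "GENERATED^NAME")
```

The real rules engine goes further: generated values stay internally consistent across messages (via the dictionary described below), and many fields beyond the name are covered.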

Open the Caristix Workgroup application.

Click on Messaging v2 → De-Identify…

Click Yes to load the default de-identification rules. They are in line with the HIPAA rules for HL7 standard compliant messages.

Click No to create or load your rules.

To get started, let’s open the de-identification module and load a file containing HL7 messages. Messages can also be loaded from a database or directly from your interface engine if you have the connector installed.

Open HL7 v2.x messages you want to de-identify:

Click FILE → Open → Messages… → +Add…

Choose the files containing the messages. If they are saved on your computer, click Browse My Computer.

The chosen file will be added to the file list.

Click Next > to load the file content.

Your message will appear in the Original section, and a de-identified example of your message will appear in the De-identified section.

(0:35) All de-identified data in messages is in red so you can see the actual message and the result.

(0:41) The application comes with a set of de-identification rules. It covers all standard HL7 fields that HIPAA identifies as containing sensitive data. If messages contain customized fields or Z-segments, go ahead and customize the rules.

If needed, you can modify the de-identification rules. Look at this video if you need help.

Once all rules are configured as desired, click View Example. You can see an example of the result in the De-identified section. If anything is not as expected in the result, continue customizing the rules.

Set the dictionary:

Click TOOLS → Option… → Settings → Enable Re-apply rules and replacement data across multiple files.

You can create as many dictionaries as needed. For this tutorial, let’s create a new dictionary called HL7Deid. Replace the file name with: C:\ProgramData\Caristix\Caristix Workgroup\Temp\HL7Deid.dic

(0:58) Once the de-identification rules are set, it’s time to launch the process so all messages are de-identified and stored in files. At the end of processing, an audit PDF file can also be created if needed, documenting all the settings used for de-identification.

Click OK → De-identify… → Choose where to save the result (click Browse My Computer to save it onto your computer) → OK → Yes if you want to create a De-identify Process Report in PDF.

(1:14) This ends the “De-Identifying HL7 Messages” introduction tutorial. If you have any questions, feel free to contact us. We love questions and feedback!

Thanks for watching

De-Identifying CCD and XML files

De-identifying CCD and XML files

To help you understand how to use CaristixTM Workgroup to de-identify CCD and XML files, see it in action in this video. The procedure is similar to de-identifying HL7 files with CaristixTM Cloak; only one step is added at the beginning. There are slight differences, but the video and the adapted transcript will help you understand the steps.

Transcript

The application replaces PHI with new patient data generated at run time, keeping the patient history intact while removing any link to the actual patients.

Open the Caristix Workgroup application.

Click on Messaging v3 → De-Identify…

Click Yes to load the default de-identification rules. They align with the HIPAA rules for HL7 standard-compliant messages.

Click No to create or load your own rules.

To get started, let’s open the de-identification module and load a CCD or XML file. Messages can also be loaded from a database, or directly from your interface engine if you have the connector installed.

Open the CCD or XML files you want to de-identify:

Click FILE → Open → Messages…

Choose the files containing the messages. If they are saved on your computer, click Browse My Computer.

Your message will appear in the Original section, and a de-identified example of your message will appear in the De-identified section.

(0:35) All de-identified data is shown in red, so you can compare the actual message with the result.

(0:41) The application comes with a set of de-identification rules. It covers all the standard CCD fields identified as containing sensitive data. If your files contain customized fields or sections, go ahead and customize the rules.

If needed, you can modify the de-identification rules. Look at this video if you need help. It explains how to modify HL7 rules, but the process is the same.

Once the rules are configured as desired, click View Example to preview the result in the De-identified section. If anything is not as expected, continue customizing the rules.
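For CCD and XML files, a rule targets an element (or attribute) rather than a segment and field position. Here is a hedged sketch of the same idea in plain Python using the standard library; the element names and the `deidentify_elements` helper are illustrative assumptions, not the actual CCD schema paths Workgroup uses:

```python
import xml.etree.ElementTree as ET

def deidentify_elements(xml_text: str, tag: str, replacement: str) -> str:
    """Replace the text of every element with the given tag name --
    the XML analogue of an HL7 field-level de-identification rule."""
    root = ET.fromstring(xml_text)
    for elem in root.iter(tag):
        elem.text = replacement
    return ET.tostring(root, encoding="unicode")

# Hypothetical mini-CCD fragment: replace the patient name element.
ccd = "<patient><name>John Doe</name><birthTime>19700101</birthTime></patient>"
print(deidentify_elements(ccd, "name", "Jane Smith"))
```

Real CCD documents use namespaces and nested name components, so the actual rules are path-based rather than a flat tag match, but the replacement principle is the same.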

Set the dictionary:

Click TOOLS → Options… → Settings → Enable Re-apply rules and replacement data across multiple files.

You can create as many dictionaries as needed. For this tutorial, let’s create a new dictionary called XMLDeid. Replace the file name with: C:\ProgramData\Caristix\Caristix Workgroup\Temp\XMLDeid.dic

(0:58) Once the de-identification rules are set, it’s time to run the process so all messages are de-identified and stored in files. At the end of processing, an audit PDF file can also be created, documenting the settings used for de-identification.

Click OK → De-identify… → Choose where to save the result (click Browse My Computer to save it onto your computer) → OK → Yes if you want to create a De-identify Process Report in PDF.

Configure Ensemble/Caché connection

Follow this procedure to connect to an Ensemble/Caché database:

Disable the timeout on the Caché connection.
The InterSystems support team proposed a workaround: the ODBC DSN can be configured to disable the query timeout.

Create an ODBC DSN:

1. Click the Configure… button
2. Check the Disable Query Timeout check box
3. Click OK
4. Start Pinpoint → FILE → Open log files… → Database
5. Click Sources… and pick the source for Ensemble
6. Click Connections…
7. In Database Type, pick the ODBC DSN you just configured
8. Validate that the DSN is correct by clicking Test
9. Click OK → OK

At this point, you should have the list of services and operations.