Thursday, August 20, 2015

From Allscripts to OnBase: A Migration Story

In healthcare, it’s about the patient’s chart, tests, and results, and it all starts with the quality of the information extracted from Allscripts. Of course, when you move from one solution to another, people tend to get in the way. What I mean by this is that there usually needs to be a consultant to broker between the two sides of the transfer.

Information

Requirements gathering is only as good as the experience of the person managing it. A third party may boast many years of experience; however, chances are good that there are a few key internal architects who need to be involved and listened to from the get-go. The mapping of patient data, doc types, and workflows is fundamental to the success of the migration. The sizing parameters, such as the number of files and average file size, will have to be detailed.

Execution logistics

As the “what” questions are answered, new ones inevitably come up, such as “What transfer batch size should we use?”, “How long will it take to transfer each batch?” and “When will we shut down access to Allscripts?” A “who/what/when/where/how” matrix will help sort out the details. The transfer batch size will depend on how many document pages are extracted and will entail multiple folders and a file naming convention, as the sketch below illustrates.
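To make the sizing concrete, here is a minimal back-of-the-envelope batch planner in Python. Every figure in it is a hypothetical placeholder; plug in your own counts from the requirements work above.

# Rough batch planning; all numbers are hypothetical placeholders.
TOTAL_PAGES = 4_500_000              # from the sizing exercise
AVG_FILE_KB = 60                     # average extracted page size
BATCH_PAGES = 50_000                 # candidate transfer batch size
PAGES_PER_HOUR = 20_000              # throughput measured in a test run

batches = -(-TOTAL_PAGES // BATCH_PAGES)         # ceiling division
hours_per_batch = BATCH_PAGES / PAGES_PER_HOUR
total_gb = TOTAL_PAGES * AVG_FILE_KB / 1024 / 1024

print(f"{batches} batches, ~{hours_per_batch:.1f} h per batch, "
      f"~{total_gb:.0f} GB total")

Even a crude calculation like this makes the “how long will it take” and “when do we shut down Allscripts” conversations far easier to have.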

Testing

Testing and its organization are a litmus test of how well the project is managed. Hopefully all of the application’s SMEs are involved and designated users are scheduled to test. Testing ADT scenarios and OnBase scanning and indexing is essential. Also, the backfill of accounts and demographics has to be tested at least twice to get all of the pieces validated.
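ADT scenario testing ultimately comes down to checking that patient identifiers survive the trip. As a small illustration, here is a sketch that pulls the MRN and name out of an HL7 v2 ADT message; the field positions (PID-3 for the identifier, PID-5 for the name) follow the HL7 v2 conventions, but the sample message is invented and real feeds vary, so treat it as illustrative only.

# Extract patient identifiers from an HL7 v2 ADT message for validation.
SAMPLE_ADT = (
    "MSH|^~\\&|REG|HOSP|ONBASE|HOSP|20150820||ADT^A08|123|P|2.3\r"
    "PID|1||000123456^^^HOSP^MR||DOE^JANE||19700101|F\r"
)

def pid_fields(message):
    """Return the fields of the PID segment (index 0 is 'PID')."""
    for segment in message.split("\r"):
        if segment.startswith("PID"):
            return segment.split("|")
    raise ValueError("no PID segment found")

fields = pid_fields(SAMPLE_ADT)
mrn = fields[3].split("^")[0]    # PID-3: patient identifier list
name = fields[5]                 # PID-5: patient name
print(mrn, name)                 # 000123456 DOE^JANE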

Extract

The Allscripts extract index will be delimited and should have all of the metadata needed to import into OnBase. It should have a pointer to the scanned files. These files will most likely be extracted pages, so a unique document number will be needed to rebuild each document before it is imported into OnBase (see the parsing sketch after this list). You’ll need the following:
Patient Demographics – enough information to validate and tie back to the ADT patient record.
Master Patient Index – the corporate medical record number to identify the extract as it is imported into OnBase.
Document types – each type must go through HIM for validation. If Allscripts was used for ambulatory sites, chances are good that the naming of doc types will reflect this. You may want to keep all the source names and prefix them with a value that allows for easy recognition. These should also be set up in a new doc type group if there are a lot of them. The “go forward” scanning strategy should include only the unique doc types that were not already in the OnBase system.
Document formats – there may be some formats that surprise you. Import formats have to be identified and set in the OnBase import process.
List of file pointers – this list tells OnBase each page’s file system location. If there are thousands of pages, the pointer will most likely include a folder that changes as the batches are imported. The file naming convention will have to be unique and could be derived from Allscripts’ own unique file naming.
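To show how these pieces fit together, here is a minimal Python sketch that reads a pipe-delimited extract index and groups the page-level rows back into documents. The file name and column names (doc_id, page_num, file_pointer) are hypothetical stand-ins for whatever your extract actually emits.

# Group page-level index rows into importable documents.
import csv
from collections import defaultdict

def group_pages(index_path):
    """Group index rows by document id, ordered by page number."""
    docs = defaultdict(list)
    with open(index_path, newline="") as f:
        for row in csv.DictReader(f, delimiter="|"):
            docs[row["doc_id"]].append(row)
    for pages in docs.values():
        pages.sort(key=lambda r: int(r["page_num"]))
    return docs

docs = group_pages("extract_index.txt")    # hypothetical index file
for doc_id, pages in docs.items():
    files = [p["file_pointer"] for p in pages]   # page image locations
    # ...assemble the pages into one document, then hand it to the importer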

Data Manipulation

Between any two focused information management systems there are bound to be idiosyncrasies. For example, the patient medical record numbering schemes will differ, as will the rules around metadata naming and many other conventions specific to the infrastructure of the source system.
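As a hypothetical example of smoothing over one such idiosyncrasy, here is a small normalization routine; the rules in it (strip an ambulatory-site prefix, zero-pad to ten digits) are invented for illustration, not taken from either product.

# Normalize a source MRN into the target system's scheme.
def normalize_mrn(source_mrn):
    mrn = source_mrn.strip().upper()
    if mrn.startswith("AMB-"):       # hypothetical ambulatory-site prefix
        mrn = mrn[len("AMB-"):]
    return mrn.zfill(10)             # target expects ten digits

assert normalize_mrn("amb-123456") == "0000123456"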

Registration, EMR and interface

With different naming conventions for patient metadata between the source and target systems, data integrity issues can be compounded. During testing, all sorts of issues can come up, from ADT messages not carrying correct patient data to duplicate patient corporate numbers. Quality control and validation as a separate step can help fix issues before they reach the target system.
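A separate QC pass can be as simple as scanning the extract for duplicate corporate numbers and incomplete demographics before anything moves. A minimal sketch, with a made-up record layout standing in for your extract:

# Flag duplicate corporate MRNs and missing demographics pre-import.
from collections import Counter

records = [
    {"mrn": "0000123456", "name": "DOE^JANE", "dob": "19700101"},
    {"mrn": "0000123456", "name": "DOE^JANE E", "dob": "19700101"},
    {"mrn": "0000654321", "name": "", "dob": "19650415"},
]

dupes = {m for m, n in Counter(r["mrn"] for r in records).items() if n > 1}
for r in records:
    if r["mrn"] in dupes:
        print("duplicate corporate number:", r["mrn"])
    if not r["name"] or not r["dob"]:
        print("incomplete demographics:", r["mrn"])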

Import

As we know, the quality of the import starts with the quality of the extract. As the patient data issues get mapped and fixed, the actual importing will have to be done through a third-party tool or customization. Every difference not accounted for in the design or development will show up as an error during testing of this process. Looking up values in the target system based on source values should be part of the tool’s requirements. The tool should also handle errors gracefully by logging them and queuing them for re-import.
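The error-handling shape of that import loop matters more than the specific tool. Here is a minimal sketch of the pattern; lookup_doc_type and import_document are hypothetical stand-ins for whatever tool or API you end up using, since actual OnBase imports go through licensed tooling.

# Import loop: look up target values, log failures, queue for re-import.
import json
import logging

logging.basicConfig(filename="import_errors.log", level=logging.ERROR)

def import_batch(docs, lookup_doc_type, import_document):
    retry_queue = []
    for doc in docs:
        try:
            doc["target_doc_type"] = lookup_doc_type(doc["source_doc_type"])
            import_document(doc)
        except Exception as exc:     # log and queue; never halt the batch
            logging.error("doc %s failed: %s", doc.get("doc_id"), exc)
            retry_queue.append(doc)
    return retry_queue

# Persist failures so the batch can be re-run after fixes, e.g.:
# failed = import_batch(docs, lookup_doc_type, import_document)
# with open("retry_queue.json", "w") as f:
#     json.dump(failed, f)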

People

The human factor cannot be overlooked in any migration. Both sides will have feelings, though they may be hidden behind professional facades: some were comfortable with the old system, and others are not happy about the responsibilities that come with the new one. These feelings show themselves as delays in development or show-stopper issues during testing. If everyone is complaining about the project manager, then you know the culture is old school: the walls of knowledge continue to be fortified, and sharing only occurs when show-stoppers force everyone to open up and help each other, if only for that moment…


Tuesday, August 11, 2015

ECM Matures Beyond the Models

Gartner’s ECM Maturity Model from 2012 shows the simplified journey of an ECM implementation through a company over time. The trail it follows is well worn into the trained minds of solution providers. What typically happens with this methodology mantra is that it infiltrates and pervades; then, once the company is fully dependent on the software solution that touted it, the company buys more modules, more licenses, more storage, and so on.

How mature is mature?

ECM implementations reach the goals put forth by the company’s director in charge of operations. Each time a “nice to have” is overlooked or pushed to a later phase, it may reach a dead end. These dead ends accumulate but are not factored into the overall implementation; they fester and show up again when the next cycle of solutions/consolidations/open-source evangelists sweeps in.

It’s easy to win with a Model

I’ve seen original solution documentation show this maturity model from the beginning. The instructions are outlined, budgeted, and milestones are set. All you have to do is follow them and you will succeed. Any movement forward is seen as a win. Plus, you executed on the plan; never mind that dead ends were left along the way. You can’t please everyone!

The model is the model

The model’s steps of “Initial, Opportunistic, Organized, Enterprise, and Transformative” are exactly what happened to the model itself. It is an artifact of solution execution, but in the end it is just another way to sell product suites and Gartner products. This model has matured. A new one is coming.

Implementation’s long tail


If you are looking at models, make sure the last half of the curve looks like a thick long tail. This shows the correct long-term implementation of ECM. It takes a mature team of people to fully realize its potential beyond initial expectations and bouts of disillusionment. Over time, if you are lucky, there will be enough focus on the quality of information and its benefit to productivity.