Sunday, January 27, 2013

ECM: Point vs. Enterprise Solution


The industry is healthcare (so a lower budget by default for IT projects). The project is a document control system that needs to manage the publishing, approval, security, and readership of policies and procedures.


The team has seen other point solutions which are exactly what they want. In their world, these solutions are perfect; however, in the larger world of trying to make connections across the morass of information within the entire hospital system, the ECM solution needs to be used, or at least integrated.

Failure in the Past

Program and portfolio management was not part of the initial ECM rollout. There was no information architecture from a content management perspective. The informatics and integration architects have no idea what using the correct metadata for index/search means at the enterprise level. They know what to do with patient account data, paper forms, and clinical notes, but when it comes to weaving this information together, not so much.

Finding a solution

So put aside the lack of a PMO and let's focus on how to blend the desired point solution with ECM. Paper is still used for reviews and approvals of the SOPs. The groups responsible for writing the procedures are not very familiar with metadata; they barely scratch the surface of the MS Office products. Trying to demo an ECM workflow with versioning, PDF conversion, and electronic signatures did not go over well. The group needed to drive with a permit and an instructor first.


  • Use the ECM workflow solution, dump files into the existing repository as a pilot, and keep revision and approval outside the system.
  • Scan existing SOPs into the system and use the scans for reader acknowledgement and approval.
  • Use SharePoint as a bridge between MS Office and the existing ECM solution. The problem with this is that SharePoint could easily take over the whole process.
  • Build on the status quo of the paper process, but use MS Office for revisions, Acrobat for signatures, email for workflow, and folders for storage and presentation to the readers.
  • Pilot the point solution and integrate it with ECM at the reader acknowledgement and presentation stages.

Politics and Operations

Who has the money? Without a PMO, the money for projects magically appears. So, the doc control group seems to think they have the money for the point solution. Politically, does IT dictate the ECM solution, having implemented it on the assumption that there would be ROI only with penetration and scale?


  • Level set the group's understanding of the publishing tools, the revision and approval process, common metadata (e.g., Dublin Core), records retention, complying with the rules (HIPAA, CAP), etc.
  • Find bridges between the old way and the new way of doing things, like using MS Word for revisions, isolating certain workflow steps like approvals and using a point solution for it, or using Acrobat for electronic signatures. 
  • Push for using the ECM solution as the metadata and content layer which will at least assure that the "big data" guys will be able to find the content and try to use it in their big data cubes. 
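
To make the level-setting concrete, a policy document's common metadata could be expressed with Dublin Core elements along these lines. This is a hypothetical record; the element set and namespace are standard Dublin Core, but the values are purely illustrative:

```xml
<!-- Hypothetical Dublin Core record for a hospital SOP; values are illustrative. -->
<metadata xmlns:dc="http://purl.org/dc/elements/1.1/">
  <dc:title>Hand Hygiene Policy and Procedure</dc:title>
  <dc:creator>Infection Control Committee</dc:creator>
  <dc:subject>policy; infection control</dc:subject>
  <dc:date>2013-01-27</dc:date>
  <dc:type>Text</dc:type>
  <dc:format>application/pdf</dc:format>
  <dc:identifier>SOP-IC-0042</dc:identifier>
  <dc:language>en</dc:language>
</metadata>
```

Even a minimal set like this gives the "big data" guys consistent fields to index and search on across the enterprise.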

Wednesday, January 23, 2013

Designing an Export Tool for Livelink


For those of you who have not created an export tool, here's a high-level write-up of one.


The Export Tool was developed to handle multiple ways to query and export content, with metadata derived from system and category attributes.

Key Features

  • Multiple Query Methods
  • Export content and metadata
  • Recovery from failed exports
  • Duplicate detection
  • File hash reporting for validation
  • Exports generations, shortcuts, all versions, and compound documents
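
The file hash reporting feature can be sketched roughly like this. A minimal illustration, not the tool's actual code; the class and method names are hypothetical, and only the idea (hash each exported file's bytes so the CSV can carry a checksum for validation and duplicate detection) is taken from the feature list above:

```java
import java.security.MessageDigest;

// Hypothetical sketch of the hash reporting step: each exported file's
// bytes are hashed so the export CSV can carry a checksum for
// validation and duplicate detection.
public class HashReport {

    // Return the SHA-1 digest of the content as a lowercase hex string.
    static String sha1Hex(byte[] content) throws Exception {
        MessageDigest md = MessageDigest.getInstance("SHA-1");
        StringBuilder sb = new StringBuilder();
        for (byte b : md.digest(content)) {
            sb.append(String.format("%02x", b));
        }
        return sb.toString();
    }

    public static void main(String[] args) throws Exception {
        // Two identical byte streams produce the same hash, which is
        // what makes duplicate detection possible.
        System.out.println(sha1Hex("test".getBytes("UTF-8")));
    }
}
```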

High Level Flow of Export Methods

Input Parameters

    Parameter                    Sample Value / Description
    Server = args[0];            OT Content Server host name
    Port = args[1];              Port of the service
    DFT = args[2];               Not used
    User = args[3];              Administrator user name
    Pass = args[4];              Administrator password
    dbURL = args[5];             Database connection (JDBC URL)
    dbUser = args[6];            Database user
    dbPwd = args[7];             Database password
    ExportFilePath = args[8];    File system path to export content files to
    CSVFilePath = args[9];       File system path to write CSV files to
    TrackFilePath = args[10];    File system path to write the tracking file to
    QueryString1 = args[11];     Query string to pass into the export tool (e.g., an ID to search on)
    TrackFileValue = args[12];   File path to the tracked IDs that have already been exported

Sample Export Tool command line:

java LLExportTool "docadm" "2099" "" "Admin" "password" "jdbc:oracle:thin:@databasename:1521:repname" "username" "userpassword" "c:\\RepExport\\" "c:\\RepExportCSV\\" "c:\\RepExportTrack\\" "0000000" "C:\\RepExportTrack\\ExportTrackerXXX.txt"
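
A minimal sketch of how the tool's main() might map these positional arguments. The argument order comes from the table above; the class and field names here are hypothetical, and the real tool's internals may differ:

```java
// Hypothetical sketch of the positional-argument mapping; only the
// argument order is taken from the parameter table above.
public class LLExportArgs {
    String server, port, dft, user, pass;
    String dbURL, dbUser, dbPwd;
    String exportFilePath, csvFilePath, trackFilePath;
    String queryString1, trackFileValue;

    LLExportArgs(String[] args) {
        if (args.length < 13) {
            throw new IllegalArgumentException("expected 13 arguments, got " + args.length);
        }
        server = args[0];
        port = args[1];
        dft = args[2];           // not used
        user = args[3];
        pass = args[4];
        dbURL = args[5];
        dbUser = args[6];
        dbPwd = args[7];
        exportFilePath = args[8];
        csvFilePath = args[9];
        trackFilePath = args[10];
        queryString1 = args[11];
        trackFileValue = args[12];
    }

    public static void main(String[] args) {
        // Mirror the sample command line from the post.
        LLExportArgs a = new LLExportArgs(new String[] {
            "docadm", "2099", "", "Admin", "password",
            "jdbc:oracle:thin:@databasename:1521:repname", "username", "userpassword",
            "c:\\RepExport\\", "c:\\RepExportCSV\\", "c:\\RepExportTrack\\",
            "0000000", "C:\\RepExportTrack\\ExportTrackerXXX.txt"
        });
        System.out.println(a.server + ":" + a.port);
    }
}
```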

Export Search Query Types and Methods


Export by SQL query, where QueryString1 holds the SQL statement:

QueryString1 = "select dataid, parentid, name, versionnum from dtree where name like '%Test%' ";

runExportBySQLQuery(session, doc, conn, ExportFilePath, CSVFilePath, QueryString1, QueryString2, LLExportLog);


Export by folder ID, where QueryString1 is a predetermined parent folder ID:

String QueryType = "Folder";
boolean DryRun = false;
int WaitTime = 0;

runExportByFolderId(session, doc, conn, ExportFilePath, CSVFilePath, QueryString1, QueryString2, LLExportLog,
QueryType, DryRun, WaitTime, TrackDataIds, LLExportTracker);
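
Recovery from failed exports (the TrackDataIds / LLExportTracker parameters above) can be sketched like this: the tracking file holds one dataid per line, and already-exported ids are skipped on a re-run. A hypothetical illustration, not the tool's actual code; the one-id-per-line file format is an assumption:

```java
import java.util.HashSet;
import java.util.Set;

// Hypothetical sketch of tracking-file-based recovery: ids already
// recorded in the tracking file are skipped when the export is re-run.
public class ExportTracker {
    private final Set<Long> exported = new HashSet<>();

    // Load ids from the tracking file's contents (assumed: one dataid per line).
    void load(String trackFileContents) {
        for (String line : trackFileContents.split("\\r?\\n")) {
            line = line.trim();
            if (!line.isEmpty()) {
                exported.add(Long.parseLong(line));
            }
        }
    }

    boolean alreadyExported(long dataId) {
        return exported.contains(dataId);
    }

    public static void main(String[] args) {
        ExportTracker t = new ExportTracker();
        // 2001-2003 were written to the tracking file before the failure.
        t.load("2001\n2002\n2003\n");
        System.out.println(t.alreadyExported(2002) + " " + t.alreadyExported(2004));
    }
}
```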


Export by keyword search against the index:

String QueryStringKeyWords = "test";
int ResultNumber = 100;       

runSearch(session, doc, conn, ExportFilePath, CSVFilePath, QueryStringKeyWords, QueryString2, ResultNumber, LLExportLog);


Export by collection, where the collection id is predetermined:

String QueryStringCollection = "select dataid from collections where COLLECTIONSID = 1 ";

runExportByCollection(session, doc, conn, ExportFilePath, CSVFilePath, QueryStringCollection, QueryString2, LLExportLog);



Validation

  • Build test case folders, files, shortcuts, generations, etc.
  • Produce results using Pathbuilder or XML export
  • Verify the Export Tool results against those produced by the validated tool
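
That last verification step could be automated by comparing the hash columns of the two tools' CSV outputs. A hypothetical sketch; the dataid,hash column layout is an assumption, not either tool's documented format:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of the verification step: compare dataid -> hash
// maps produced by the Export Tool and by the validated tool.
// The "dataid,hash" CSV layout is an assumption.
public class ExportVerifier {

    static Map<String, String> parse(String csv) {
        Map<String, String> m = new HashMap<>();
        for (String line : csv.split("\\r?\\n")) {
            if (line.isEmpty()) continue;
            String[] parts = line.split(",");
            m.put(parts[0], parts[1]);
        }
        return m;
    }

    // Count ids whose hashes differ from, or are missing in, the validated set.
    static int mismatches(String exportCsv, String validatedCsv) {
        Map<String, String> a = parse(exportCsv);
        Map<String, String> b = parse(validatedCsv);
        int bad = 0;
        for (Map.Entry<String, String> e : a.entrySet()) {
            if (!e.getValue().equals(b.get(e.getKey()))) bad++;
        }
        return bad;
    }

    public static void main(String[] args) {
        String export    = "2001,aa11\n2002,bb22\n";
        String validated = "2001,aa11\n2002,ffff\n";
        System.out.println(mismatches(export, validated));
    }
}
```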

Saturday, January 12, 2013

Quality and Integrity of Information in Healthcare

Quality and integrity of information are only as good as the solution's documentation and resource accountability. The best way to measure information quality is to detail (Zachman style) a failure of the system, figure out accountability, and share the understanding of what went wrong and how to fix it. For example, in a healthcare system, when the registration system goes down, presumably it fails over to a hot-swap database which is current with account numbering. If the account numbers are not in sync, the risk is high that duplicate encounter numbers will be given to patients.

The day of the failure and duplicates goes by, and patients are admitted. The next day, when their bracelets are scanned for medication, some patients' account numbers are rejected because the patient names don't match. The doctor or nurse looks up a CT scan by the account number and sees another patient with a different name in the search result. An admin scans a consent form, enters the account number, and scans in the form under a different name.

So, what happened here? Why were duplicate account numbers permitted to enter the Electronic Medical Record system in the first place? Well, this is apparently okay according to the information architecture: when merging two patients, this could be a scenario. What? ECM solutions with systems like Documentum would call this a relation, but would never "merge", thus erasing the history (and integrity) of the information.

After discharge the patient goes home, and the hospital forms are scanned and indexed. When the QA person looks at the actual scanned form, she sees a different patient name. How will this get fixed? The Hospital Information Management department needs to process these patients through coding so the information can get sent to the insurance company and the hospital can get paid for services.

First, what happened? Was the registration system's faulty failover to blame? It introduced the duplicates, and the team did not even know it. The EMR team did not know what happened. The integration team did know about it, but didn't sound the alarms. The issue was exposed on the front line by nurses giving medications and admins trying to scan forms.

Second, what was done to fix it? Amid quiet desperation, a slow resolution was worked on. No accountability, no consequences. The issue with healthcare information management is that it is patched together by longtime friends who watch each other’s back – this is not isolated to healthcare, obviously. Personal knowledge over sharing is still the preferred mode of operation in most IT shops.

Knowledge Sharing

Documentation, sharing knowledge, is the first hurdle to get over. It is not easy to revamp inventories of applications, what they do and why. Departments like to hold onto the applications that they "own" even if efficiencies and consolidations at the enterprise level make sense. So one way to force it is to point to an industry regulation and say, "we must do this to comply with the law, or risk heavy fines." Or, "the security and data integrity risks are too great, and HIPAA regulations will nail us."


Accountability, having consequences, is paramount to achieving the minimum quality standards of information needed to function without careless mistakes. For example, if the backup and restore process had been tested and signed off on by directors, do you think the duplicates would have happened? Accountability helps the morale of the team(s) in IT as it exposes what happened, who did it, and why they did it, and it should offer up the best ways to fix the issues. If the director or manager was negligent, figure out why and fix it. If the architect did not follow protocol, call it out for everyone to learn from. This is not a playground; this is people's lives, more serious than systems like hedge fund transaction services, and yet financial and pharmaceutical systems run more smoothly and with higher quality.


Quality, doing the right thing, is not achievable without knowledge sharing and accountability. Of course, the strategy and goals filtered down through a strong program management structure are vital as well, but I'm focusing on just a few concepts and an outcome that healthcare IT desperately needs to get right, every time.


There is a gap between personal integrity of workers in a hospital and information integrity of the patient's medical record. Hospitals have to balance the costs of both types of integrity. However, it's time to respect and do no harm to information as well as a person's health. It's time to innovate how we