Monday, March 21, 2016

Invaluable Individual Contributors

"These people are the highly professional individual contributors. In many cases they have deliberately chosen not to pursue a managerial career, preferring technical work or wanting to avoid the duties associated with being a manager, including budgets, reports, endless meetings and the never-ending people issues... Nearly everything they accomplish they do through influence, because they usually lack any formal 'role power'." Jack Zenger, Forbes *
We all know the individual contributors in our department or organization who are invaluable. That's the problem: when they leave or retire, they take a huge amount of knowledge with them.
Now is the time to shadow them, to write everything down, and to fully understand how they think and how they troubleshoot. If you don't, you might as well budget for two or three more positions to compensate. And that service level agreement that worked so well? Forget about it.
This individual is more than herself: her connections, and the accumulated trust others have in her, are part of the position she holds. The invisible dotted lines that run to her need to be understood.

Documentation standards

OK, your organization has good documentation standards, but what about the assumed or taken-for-granted activities that actually get things done? How do you document respect and trust?

The underlying climate

At the doer level, there are always gripes people have with the process of getting work done. What are these issues? If you have invaluable workers, then you have issues with this process.

Mandatory shadowing / agile techniques

You could cross-train your team and shadow the invaluable with the newbies. This gets you part of the way there, but it does not negate the need for formal training. Each individual will still thrive at what they do best, not at what was done, or how it was done, previously.

Leaving a void

The invaluable compensate for broken processes by frequently "saving" the day. They mask any issues with project management by patching problems as soon as they occur. They serve as backup when other folks can't figure out what to do. They are victims of their own success, in that they enable a less structured approach to documenting requirements, specifications, schedules, and processes. And when they leave, good luck filling the void.

Monday, March 14, 2016

Browsers Ride the Upgrade Wave

Many times after implementing or upgrading an ECM solution, Users come out of the woodwork complaining about issues with the system. Wait: the testing phase took a month and was meticulous. Did you fully inventory the User base? Did the Windows group really give you all of the web browser applications and versions that access the site?

Detection Tools

Webserver Logs

Analyzing the current web server logs should reveal the web browser spectrum. The logging configuration may need to be changed to capture more detail (for example, the User-Agent header), then left to collect traffic for some time before analysis.
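As a sketch of that analysis, the snippet below tallies browser families from access log lines in the common Apache/Nginx "combined" format, where the user agent is the final quoted field. The sample lines and the coarse classification rules are illustrative; a real inventory would use a full user-agent parsing library.

```python
import re
from collections import Counter

# Hypothetical sample lines in the "combined" log format; the user agent
# is the last quoted field on each line.
SAMPLE_LOG = [
    '10.0.0.1 - - [14/Mar/2016:09:12:01 -0500] "GET /ecm/login HTTP/1.1" 200 512 "-" "Mozilla/5.0 (Windows NT 6.1; Trident/7.0; rv:11.0) like Gecko"',
    '10.0.0.2 - - [14/Mar/2016:09:12:05 -0500] "GET /ecm/doc/42 HTTP/1.1" 200 2048 "-" "Mozilla/5.0 (Windows NT 10.0) AppleWebKit/537.36 Chrome/49.0.2623.87 Safari/537.36"',
    '10.0.0.3 - - [14/Mar/2016:09:12:09 -0500] "GET /ecm/doc/42 HTTP/1.1" 200 2048 "-" "Mozilla/5.0 (Windows NT 6.1; Trident/7.0; rv:11.0) like Gecko"',
]

UA_PATTERN = re.compile(r'"([^"]*)"$')  # last quoted field = user agent


def browser_family(user_agent):
    """Very coarse classification -- enough to show the shape of the report."""
    if "Trident" in user_agent or "MSIE" in user_agent:
        return "Internet Explorer"
    if "Chrome" in user_agent:
        return "Chrome"
    if "Firefox" in user_agent:
        return "Firefox"
    return "Other"


def browser_spectrum(lines):
    """Count requests per browser family across the log lines."""
    counts = Counter()
    for line in lines:
        match = UA_PATTERN.search(line)
        if match:
            counts[browser_family(match.group(1))] += 1
    return counts


print(browser_spectrum(SAMPLE_LOG))
# Counter({'Internet Explorer': 2, 'Chrome': 1})
```

Run against a month of real logs, a report like this is often the first hard evidence of which browser versions actually hit the site.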

Traffic sniffers

Wireshark or Fiddler can be used to capture HTTP traffic into the web server. Browser details can be gleaned from the captured requests.

Full disclosure beyond CYA

There are always pockets of non-compliance in your organization. Even once you have identified them, they may push back with reasons why they can't upgrade their web browser, usually because they work with a system that needs to be upgraded as well.

Upgrade Wave

Chances are good that your IT review board does not orchestrate all system upgrades by web browser type and version. They typically pay attention to the most expensive and complex systems, leaving the ancillary applications to fend for themselves. Wouldn't it make sense to orchestrate all applications at the User level first, that is, to list out all systems by browser compatibility?
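Such a browser-compatibility list can be as simple as a matrix of applications versus certified browser versions. The sketch below, with entirely hypothetical application names and versions, shows how that matrix answers two orchestration questions: which browser everyone could safely run today, and which applications block a proposed upgrade target.

```python
# Hypothetical inventory: each application mapped to the browser versions
# it has been certified against.
app_support = {
    "ECM portal":     {"IE 11", "Chrome 49"},
    "Time tracking":  {"IE 9", "IE 11"},
    "Legacy reports": {"IE 9"},
}


def common_browsers(support_matrix):
    """Browsers every application supports -- the safe upgrade targets."""
    versions = iter(support_matrix.values())
    common = set(next(versions))
    for supported in versions:
        common &= supported
    return common


def blockers(support_matrix, target):
    """Applications that would break if everyone moved to `target`."""
    return [app for app, supported in support_matrix.items()
            if target not in supported]


print(common_browsers(app_support))    # set() -- no single safe version
print(blockers(app_support, "IE 11"))  # ['Legacy reports']
```

An empty intersection is itself useful information: it tells the review board exactly which ancillary application has to be upgraded before the browser wave can roll.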

Before Interoperability

Of course, one of the key aspirations of "interoperability" between large systems is mapping and synchronizing data. This can't happen, however, without seamless coordination of web browser types and versions to assure User access to the information.

Thursday, March 3, 2016

Using BPM to Assure Information Quality

I know, there is no such thing as 100% quality, but we need to challenge ourselves to get there. To catch potential data quality issues, you'll need to create a set of validations and add them to a process that will identify and fix problems. These validations can be automated within a workflow by defining rules and the actions to perform under certain conditions. Finding and fixing issues is an ongoing task: it requires a balance of vigilance and curiosity, as well as caring. Usually, issues come about because there is spotty accountability somewhere in the flow of information from the source to the downstream systems.


Close any data integrity gaps by applying validation checks and fixes.
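A validation set can start very small. The sketch below, assuming hypothetical record fields `account_id`, `mrn`, and `dob`, shows the basic shape: each rule either passes a record or names an issue, and a record's issue list decides whether it flows on or gets routed for fixing.

```python
import re

# A minimal rule set over hypothetical fields. Each rule returns an
# issue string, or None when the record passes that check.
RULES = [
    lambda r: None if re.fullmatch(r"\d{8}", r.get("account_id", "")) else "bad account_id",
    lambda r: None if r.get("mrn") else "missing MRN",
    lambda r: None if re.fullmatch(r"\d{4}-\d{2}-\d{2}", r.get("dob", "")) else "bad dob format",
]


def validate(record):
    """Run every rule; return the list of issues (empty means clean)."""
    return [issue for issue in (rule(record) for rule in RULES) if issue]


clean = {"account_id": "12345678", "mrn": "MRN-9", "dob": "1970-01-01"}
dirty = {"account_id": "12A45678", "dob": "01/01/1970"}

print(validate(clean))  # []
print(validate(dirty))  # ['bad account_id', 'missing MRN', 'bad dob format']
```

The payoff of keeping rules this declarative is that new checks can be added as new inconsistencies are discovered, without touching the workflow around them.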

How does this happen?

It can happen very gradually and surreptitiously. Within a company, unless there is a strong information quality department, there will inevitably be data inconsistencies, because each department has different priorities and validation requirements. All it takes is one form in one application with lax data input requirements, or a missing validation check during data input. Lastly, when software is upgraded or data is merged from one system to another, we wrongly assume that the source data is fully vetted.

Examples of how this happens


Manual account entry during an outage

Let’s say your system has an account lookup feature, but that feature is down, so you have to enter the account information manually. The feature is fixed in a few hours, but by then you’ve entered 100 accounts. Does this data get validated later? If it doesn’t, a downstream application could have quality issues.
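One way to close that gap is to replay the manually keyed accounts against the lookup once it is restored, flagging anything that doesn't reconcile. The snippet below is a sketch under hypothetical data: a dict stands in for the restored lookup service, and the account fields are illustrative.

```python
# Stand-in for the restored account lookup service.
lookup_service = {
    "A100": {"name": "Acme Corp", "region": "East"},
    "A101": {"name": "Bolt Ltd",  "region": "West"},
}

# Accounts keyed by hand while the lookup was down.
manual_entries = [
    {"account": "A100", "name": "Acme Corp", "region": "East"},
    {"account": "A101", "name": "Bolt Ltd",  "region": "East"},   # typo in region
    {"account": "A999", "name": "Ghost Inc", "region": "North"},  # unknown account
]


def reconcile(entries, service):
    """Compare each manual entry to the source of record; list discrepancies."""
    issues = []
    for entry in entries:
        record = service.get(entry["account"])
        if record is None:
            issues.append((entry["account"], "not found in lookup"))
        else:
            diffs = [k for k in record if record[k] != entry.get(k)]
            if diffs:
                issues.append((entry["account"], "mismatch: " + ", ".join(diffs)))
    return issues


print(reconcile(manual_entries, lookup_service))
# [('A101', 'mismatch: region'), ('A999', 'not found in lookup')]
```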

Patient information merges and updates

Let’s say you work at a hospital. There’s a patient referral with the same medical record number (MRN) as an existing patient with the same birth date. The referral is entered under the existing patient. The error is caught later and the patient information is fixed, but did the patient already get treated?
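A simple guard for this case is to check each incoming referral against existing patients before it is filed, holding any MRN-plus-birth-date collision for manual identity confirmation. The patient records below are, of course, hypothetical.

```python
# Hypothetical existing patient records.
existing_patients = [
    {"mrn": "100234", "dob": "1958-06-02", "name": "J. Doe"},
    {"mrn": "100777", "dob": "1971-11-15", "name": "K. Lee"},
]


def possible_duplicate(referral, patients):
    """Return the matching patient record, or None when the referral is new."""
    for patient in patients:
        if (patient["mrn"] == referral["mrn"]
                and patient["dob"] == referral["dob"]):
            return patient
    return None


referral = {"mrn": "100234", "dob": "1958-06-02", "name": "J. Doe Jr."}
match = possible_duplicate(referral, existing_patients)
print(match is not None)  # True -- hold for manual review before treatment
```

The point is the timing: catching the collision before the referral is filed is cheap; unwinding a treatment recorded under the wrong patient is not.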

Towards Quality with Process Automation

By inserting workflows into the process, specific types of data inconsistencies can be identified, investigated and resolved. Below are some general design components for building a quality validation workflow:
·         Figure out how to funnel all data/content through the validation workflow. By doc type or input source, the information can be collected and filtered as appropriate.

·         Create the rules to route issues into buckets. Here are some typical queues:
o   Routing: this queue has validation checks that compare metadata values against source-of-record values.
o   Issues queues: these correspond to the common issues that get identified.
o   Routing issues: this queue holds any doc that doesn’t match the issues queues.

·         These buckets can be evaluated during the initial manual fixes, both for potential automated solutions and to identify upstream, root-cause data issues.
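The queue design above can be sketched in a few lines. The rules, document fields, and queue names below are hypothetical; the shape is what matters: each doc either lands in the first issues queue whose condition it matches, or falls through to the catch-all "routing issues" queue.

```python
from collections import defaultdict

# Each rule maps an issues queue to the condition that routes a doc there.
RULES = [
    ("missing metadata", lambda doc: not doc.get("doc_type")),
    ("source mismatch",  lambda doc: doc.get("account") != doc.get("source_account")),
]


def route(docs):
    """Place each doc id into the first matching issues queue,
    or into the catch-all 'routing issues' queue."""
    queues = defaultdict(list)
    for doc in docs:
        for queue_name, condition in RULES:
            if condition(doc):
                queues[queue_name].append(doc["id"])
                break
        else:  # no rule matched
            queues["routing issues"].append(doc["id"])
    return dict(queues)


docs = [
    {"id": 1, "doc_type": "invoice", "account": "A1", "source_account": "A1"},
    {"id": 2, "doc_type": "",        "account": "A2", "source_account": "A2"},
    {"id": 3, "doc_type": "claim",   "account": "A3", "source_account": "A9"},
]
print(route(docs))
# {'routing issues': [1], 'missing metadata': [2], 'source mismatch': [3]}
```

Reviewing what accumulates in each queue, especially the catch-all, is exactly the evaluation step described in the last bullet: recurring patterns there suggest both new automated fixes and upstream root causes.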