Sunday, May 22, 2016

ECM Naming Convention Checklist

Overview

In general, the naming convention should entail enough descriptive qualities to make it obvious to which group the content belongs. Being able to quickly identify content context is important. The basic building blocks should follow how the organization’s security model is structured. For example, if the company uses Active Directory and assuming it is representative of the security structure, the names usually follow a convention:
(Area)(Department)(Category/Function)((Sub Function))
Not all of the above keys have to have a value. This is a guideline which helps focus the naming of the objects and structures that follow. Keep in mind that exceptions are always part of a naming convention.
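As a rough illustration, the convention above can be composed programmatically. The function name, separator, and sample values below are assumptions for the sketch, not part of any standard:

```python
# Illustrative sketch: compose an object name from the convention
# (Area)(Department)(Category/Function)((Sub Function)).
# The separator and key order are assumptions; empty keys are simply skipped.

def build_name(area="", department="", function="", sub_function="", sep="-"):
    """Join the non-empty keys in convention order."""
    parts = [area, department, function, sub_function]
    return sep.join(p for p in parts if p)

print(build_name("Corp", "HR", "Benefits"))          # Corp-HR-Benefits
print(build_name("Corp", "HR", "Benefits", "401k"))  # Corp-HR-Benefits-401k
```

Keeping the composition in one helper like this makes the convention enforceable rather than aspirational.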

Goals

Naming conventions give content a location and relevance tag. They tell the User what the content is and where it belongs. By consistently following a convention, the system will be able to scale and still be coherent.

Scope

Each department has its quirks as to how it works. It is important to home in on how they describe their content with the enterprise in mind. Try not to spend an inordinate amount of time splitting hairs over details. For multipurpose departments, move the structure out to a general level which encompasses all of their responsibilities, then work on describing each path, but keep it simple. Think about how the names will be understood by Users who are not privy to your insider abbreviations.

Abbreviations

Every organization uses common and idiosyncratic abbreviations depending on its industry. The challenge is to be concise, yet clear. Some abbreviations can be too short and cause confusion. For all naming conventions it is critical to be as concise as possible.

Exceptions

Focus on the core convention and structure and note the exceptions. The exceptions should be handled by metadata or other means.

Assumptions

The main assumption is that the naming convention will be followed and enforced. Each department should be given some control over how they want to describe their content, however, there will be some common structures that will be imposed.

Common Structures

Before creating your own convention, check with other sources of content to make sure there is no convention in place. If there is, compare yours with theirs to see if adopting theirs makes sense. Maybe the first two levels should be incorporated, with the third being what you focus on. The values should be abbreviations if possible and obvious in meaning.

Example

(Area)-(Department)-(Control Number)-(Doc Type Identifier)

Taxonomy

This can be thought of as a way to get to your content. It could be a folder structure, or cascading categories. Be careful to not go too deep; at some point metadata will take over in describing the nuances of content.

Example

(Area)/(Department)/
                (Specific Level 1)/(Specific Level 2)…/
                                (Doc type)
A “Specific Level” could be a functional or category/doc type pair.
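A hypothetical sketch of assembling such a taxonomy path; the area, department, and level values are invented for illustration:

```python
# Sketch: build a taxonomy path following
# (Area)/(Department)/(Specific Level 1)/.../(Doc type).
from pathlib import PurePosixPath

def taxonomy_path(area, department, levels, doc_type):
    """Assemble the folder path; 'levels' holds the specific levels in order."""
    return PurePosixPath(area, department, *levels, doc_type)

path = taxonomy_path("Corp", "Finance", ["AP", "Invoices"], "Invoice")
print(path)  # Corp/Finance/AP/Invoices/Invoice
```

Capping the length of `levels` in a helper like this is one way to enforce the "don't go too deep" rule.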

Content

Document Types

The naming of doc types should be clear and concise. It should be obvious what department and function they belong to.

Example

(Department)-(Functional/Category Name)-(Doc type name)

File Names

File names become important during normal file exporting and migrations. When content is used outside of the system there should be identifiers that help place the content in context. There could also be a reference back to the system’s numbering system.

Example

(Department)-(Doc type name)-(Content Relevant Identifier (for example Title or Patient MRN))(System Number).(Format)
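For illustration only, a file name following this pattern could be assembled as below; the sanitizing rule, separators, and sample values are assumptions:

```python
# Illustrative file-name builder for
# (Department)-(Doc type name)-(Content Relevant Identifier)(System Number).(Format)
import re

def export_filename(department, doc_type, identifier, system_number, fmt):
    # Strip characters that are unsafe in file names (a simplistic rule,
    # chosen for the sketch).
    safe_id = re.sub(r"[^\w\- ]", "", identifier).strip()
    return f"{department}-{doc_type}-{safe_id}_{system_number}.{fmt}"

print(export_filename("HIM", "Referral", "Doe, John", 10234, "pdf"))
# HIM-Referral-Doe John_10234.pdf
```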

Title and Headings within Document

The title should be concise, especially if it will be in the filename. It should also reflect any metadata values associated with the document.

Document Information Block within Document

A block of information as a header or footer to a document is a feature of paper-based controlled documents. In the block you’d have the Title, Doc Number, Doc Date, Doc Effective Date, and so on. This is metadata for the printed page. Printing a footer with this information may still make sense, but the content stored within the document should not include it; it should be applied only during printing or when saving to a file outside of the system. The properties of the exported files could also be used for metadata population.

Metadata

System

System metadata already follow a naming convention and are usually prefixed with “SYS” to denote them. The corresponding database tables follow functional naming conventions which are sometimes cryptic, but logical.

Dublin Core

The Dublin Core is a common set of metadata elements found in all ECM systems. These include author, doc date, description, name, etc. Don’t duplicate these unless there are naming rules that are different.

Department specific

Keeping track of which metadata is used for which purpose or application can be challenging as the system grows. Naming metadata specific to its purpose and project is advised.

Example

(department)(project name)(metadata name)

References or Relations

If a cross reference is needed, make sure the object naming is consistent with the purpose of the link.

Processing

Business process or workflow naming needs to follow the same conventions as all of the preceding objects. It should incorporate the common elements as well as the specific ones.

Publishing

When the ultimate goal of the ECM workflow is to publish to a portal, the naming conventions should follow the same conventions as the portal. Having to map or lookup values may not scale when added to a portal’s level of use.

Security

As mentioned above, the security hierarchy and its naming convention are a first indication of how well organized the company’s structure is. The naming of the groups should be considered when thinking about the naming of projects, folders, and doc types.

Compliance

Many systems have to comply with regulations like 21 CFR Part 11. This type of scrutiny applies to the information architecture and the system’s content.

Regulations

Outside regulation bodies could impose certain naming conventions which need to be followed.

Audits

Auditors need to be able to ask for information in general terms with an understanding that you will know what they need. It’s vital to search only for what they are focusing on, and this requires a good naming convention and robust metadata.

I18n

Introducing a foreign language to the system multiplies the complexity of the solution. Not only are the metadata values multiplied, the naming conventions are multiplied as well. Most ECM systems can accommodate this; however, folder names and metadata might have to be duplicated with the foreign values.

Search

Search is only as good as the metadata value quality and full text indexing comprehensiveness.

Change Management

As organizations change, areas and departments get moved around and new names are designated.

Regression Mapping

Keeping track of structural changes can be challenging. Depending on the scale of the change, creating a map to previous taxonomies or department names can be helpful.


Thursday, May 5, 2016

Obscured by ECM Clouds

If your CTO says it is “impossible” for the hyperconverged cloud to go down, you know you and everyone else will be in for a long night at some point during the cloud’s stabilizing period. Nothing is infallible, not even the cloud. If you are pushing the technology edge, then you need to own up to the inevitability of a confluence of issues. So you have to ask yourself, “What steps would have to be skipped, or overlooked, during the design, development, and implementation of a cloud system to get to the point of an emergency downtime of your foolproof network?”

Hypothetically, let’s say one bug in the software could blue screen all of the domain controllers in every redundant location at the same time. There are a few points to consider when reviewing this type of failure:

The inexperience of those in control, at both the technical level and the blind-faith managerial level

With new technology even the experts make mistakes. When the outage happens, are the persons caught in the headlights fully trained and part of the initial design and development, or are they the “B support team”? This is a critical mistake made over and over again by IT leadership and financial stewards, where it is deemed okay to bring in experienced consultants to design and implement a new technology solution and then leave it to the less experienced support team to maintain and upgrade, without proper training and onsite support.

Lack of resources to provide an acceptable factor of safety

In the rush to curtail costs, the system suffers. The “secure and agile IT services” cloud is not a one-off capital expense. Cutting operational costs too drastically will show its shortcomings in emergency outages and other incidents over time. As with any system, change must be methodical, with a factor of safety that is understood by all business partners. It’s no excuse to cut corners because there’s “no budget.” Try saying that to a surgeon.

Make sure someone is always accountable

In many cases, the business is cajoled into taking what IT says for granted, but when the system goes down they might be surprised to find out that no one is ultimately held accountable. “Virtualizing and hyperconverging its data center” also could end up virtualizing the accountability of the system, which in turn means that a Root Cause Analysis will never fully explain what really happened, if it ever gets sent out…


Lack of decoupled, identical Test environment

If your company cannot afford a decoupled test environment that mimics the cloud set up, it is adding risk to the implementation. The vendor should at least provide a comparable test environment to test bug fixes and service packs. If you had this and the outage still occurred, this points to the infrastructure team, their manager, their director, and ultimately their CIO.

Cognitive Bias toward “If it runs, don’t upgrade”

There can be a bias with some CTOs to only fix bugs with bug patches, and to never upgrade the virtualization system software unless the infrastructure requires it. In the end, “hyperconvergence” is a term that is meant for theoretical analysis, not ROI, because the hidden costs of implementing this new technology are everywhere; you just have to know where to look. Also, the risks of implementing an internal cloud are greater than going with the established, large cloud services.


Monday, April 25, 2016

ECM Maturity Markers

The Bell curve graph above is meant as a rough visual representation of the maturity of ECM. On the Y-Axis is ROI, on the X-Axis is Time. Some companies have more of a Long Tail graph. Some have a straight line that never achieves ROI.

Marker #1 (year 1)

Discovery and Roadmap Development

The business is interviewed and given the heads-up that change is coming.

Driving Force

Sets the stage for how the whole rollout will progress
Ex: Finance vs. HIM: the first project usually gets the most attention, the onsite vendor team, the full budget request, the flagship presence.

Scoping

The project with the most bang for the buck is chosen and developed. The focus is on this project, regardless of the impact on other teams and systems.

ROI measure

It’s absolutely critical to get baseline measurements of the current processes. Measurements that highlight the FTEs involved with pushing paper, referencing multiple systems, etc.

First project Implementation

Usually the ECM vendor or associated consulting group installs, configures and deploys the first implementation with a lot of fanfare. It might even get mentioned by the CIO.

Turnover

Within a year, the initial consultant(s)/employee(s) will leave the project or the company. They are there to feed off the initial expense budget, then leave to get on another budget train.

Marker #2 (year 1-5)

Execute Roadmap

This is when the ECM manager attempts to execute as much of the ECM Roadmap as possible. As the ROI is realized, there are second and third waves of getting the most out of the investment. Licensing costs are scrutinized. Upgrades are delayed.

Propagation

In some cases, the roadmap is rewritten to expand the initial vision of ECM, in others it is reduced. If the ROI goal proves hard to reach, more projects may get started to reach it.

Stop Gaps

ECM suites are capable of serving as stop gap solutions to many different areas. As other systems labor on with obsolete technology, ECM can sweep in and save the day. As older systems are replaced, the documents and images need a relatively inexpensive place to be stored for retention. This is perfect for ECM.

Integration

All possible integrations are added to the solution that streamline the indexing and processing of the incoming content.

Marker #3 (year 5-8)

Commodity

ECM is fully mature; its expansion is over. The original team is smaller. Most of the maintenance is routine.

Maintenance

The biggest issues are scaling for storage and performance. In large ECM systems this can be more and more of an issue. The original Roadmap usually doesn’t include plans for splitting up the repository that is now huge.

Cloud

At some point, a director will make the case to move the system to the cloud. Because it is huge, the argument to move it might be a good one.

Marker #4 (year 8 to XX)

Superseded by Better Technology

There will be a time when the shiny, new system becomes old and obsolete. The need to simplify and break apart the system becomes a necessity for survival. Technical advancements will become too glaring to ignore or work around. Migration projects might start up to gradually dismantle the solution, piece by piece, to move it into more modern systems.

Life Support


As new technology pressures increase, it may be an option to put the system on life support and leave it for retention only.

Friday, April 8, 2016

9 Ways to Mitigate Risks of ECM Upgrades

We all know ECM upgrades can be challenging; we also know that in a perfect world there would be no hiccups. When does this ever happen? Some of the worst issues can explode when an upgrade is “supposed to have no impact” on other systems.

In Mike Bourn’s blog article, ECM System Production Deployments – What Could Go Wrong?, he describes ways to mitigate the “busted weekend”, covering the infrastructure and expertise requirements for reducing risk. I’d like to add a few others:



1. Frequency
There’s a reason that cloud services are supplanting onsite installations beyond cost: maintenance. It’s a way to have others do the upgrades every year without the hassle of begging for resources. If you still have your ECM system in-house, great! Keeping up with the service pack releases of each version is ideal for mitigating the various infrastructure and database risks.
2. Number of environments
The number of times you practice the steps involved with an upgrade is directly related to its success. 
3. Using Prod data
I can’t emphasize enough the importance of refreshing your environments with production data. The majority of content files don’t necessarily have to be copied. The concern is mainly around data quality and accurate upgrade-duration measurements.
4. Type of Documentation
Write documentation as if you have been up for 24 hours and are stressed out. Create lists and take screenshots. Make believe you are dumb and need step-by-step instructions; don’t cut corners. Also, don’t blow off any install messages: these can blow up upgrades.
5. Reliance on outside services
It’s fine to have tech support on a retainer for the cutover weekend, but I would try to do all of the actual pre-work, testing, and troubleshooting yourself. This is one of the best chances to learn more about your system, to gain an in-depth understanding beyond what the manuals or basic maintenance can offer.
6. Empowering the right individuals
Try to include as many individuals as possible in the upgrade procedures. Distribute the troubleshooting tasks among them. 
7. Testing Efficiently
Regression testing can get so bogged down in the weeds that larger issues are missed entirely. Try to design a multifaceted strategy that balances the number of tests with the likelihood that each area would be affected by the upgrade.
8. Diversity of User’s OS and Browsers
Please don’t forget that the User base is not like you. Their PCs may not have been included in the last round of OS upgrades or patches. Their web browsers or Java versions may be configured with different options. Their PDF viewer may be old. 
9. Ready to Go
If you have a multiple person team, keep one home and rested, ready to come in on the following day to deal with the post upgrade issues because there will always be some.

There are infinite ways an upgrade weekend can go bust. By drilling into each of the above areas, you should mitigate many of the potential hiccups. If you are lucky, the issues will be minimal. You will be able to rest on Sunday before going back in for more on Monday. 

Monday, March 21, 2016

Invaluable Individual Contributors

"These people are the highly professional individual contributors.  In many cases they have deliberately chosen not to pursue a managerial career, preferring technical work or wanting to avoid the duties associated with being a manager, including budgets, reports, endless meetings and the never-ending people issues... Nearly everything they accomplish they do through influence, because they usually lack any formal 'role power'.” Jack Zenger, Forbes *
We all know the individual contributors in our departments or organization who are invaluable. That's the problem: when they leave or retire, they will take a huge amount of knowledge with them. 
Now is the time to shadow them, to write everything down, to fully understand how their minds work, how they troubleshoot. If you don't, you might as well budget for 2-3 more positions to compensate. Plus, that service level agreement that worked so well? Forget about it.
This individual is more than herself; her connections and the accumulated trust others have in her are part of the position she held. The invisible dotted lines to her need to be understood.

Documentation standards

Ok, your organization has good documentation standards, but what about the assumed or taken-for-granted activities that are done to get things done? How do you document respect and trust? 

The underlying climate

At the doer level there are always gripes people have with the process of getting work done. What are these issues? If you have invaluable workers, then you have issues with this process.

Mandatory shadowing / agile techniques

You could cross-train your team, shadowing the invaluable with the newbies. This gets you part of the way there. It does not negate the need for formal training. Each individual will still thrive at what they do best, not at what was done, or how it was done, previously.

Leaving a void

The invaluable compensate for broken processes by frequently "saving" the day. They mask any issues with project management by patching the issues as soon as they occur. They serve as backup when other folks can't figure out what to do. They are victims of their own success in that they enable a less structured approach to documenting requirements, specifications, schedules, and processing. And when they leave, good luck filling their void.

Monday, March 14, 2016

Browsers Ride the Upgrade Wave

Many times after implementing an ECM solution, or upgrading, Users come out of the woodwork and complain about issues using the system. Wait, the testing phase took a month and was meticulous. Did you fully inventory the User base? Did the Windows group really give you all of the possible Web browser apps and versions which access the site?

Detection Tools

Webserver Logs

Analyzing the current web server logs should reveal the web browser spectrum. For a certain amount of time before analysis, the logging configuration may have to be changed to increase the level of detail.
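As a sketch, assuming the web server writes NCSA combined-format logs (user agent as the last quoted field), the browser spectrum can be tallied like this; the family detection below is deliberately crude, and a real inventory would use a full user-agent parser:

```python
# Tally browser families from web server access logs (combined log format,
# where the user agent is the final quoted field on each line).
import re
from collections import Counter

UA_RE = re.compile(r'"([^"]*)"\s*$')  # last quoted field = user agent

def browser_family(ua):
    # Check Chrome before Safari: Chrome UAs also contain "Safari".
    for name in ("Edge", "Firefox", "Chrome", "Safari", "MSIE", "Trident"):
        if name in ua:
            return name
    return "Other"

def tally(lines):
    counts = Counter()
    for line in lines:
        m = UA_RE.search(line)
        if m:
            counts[browser_family(m.group(1))] += 1
    return counts

sample = ['1.2.3.4 - - [14/Mar/2016:10:00:00] "GET / HTTP/1.1" 200 512 '
          '"-" "Mozilla/5.0 (Windows NT 6.1; Trident/7.0; rv:11.0) like Gecko"']
print(tally(sample))  # Counter({'Trident': 1})
```

Running this over a few weeks of logs gives a defensible browser inventory to hand the review board.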

Traffic sniffers

Wireshark or Fiddler can be used to detect HTTP traffic into the web server. Details of web browsers can be gleaned from these captures.

Full disclosure beyond CYA

There are always pockets of non-compliance in your organization. Even if you have identified them, they may push back with reasons why they can't upgrade their web browser, usually because they work with a system that needs to be upgraded as well.

Upgrade Wave

Chances are good that your IT review board does not orchestrate all system upgrades by web browser type and version. They typically pay attention to the most expensive and complex systems, leaving the ancillary applications to fend for themselves. Wouldn't it make sense for all applications to be orchestrated at the User level first? That is, to list out all systems based on browser compatibility?

Before Interoperability

Of course, one of the key aspirations of "interoperability" of large systems is mapping and synchronizing data; however, this can't happen without seamless coordination of web browser types and versions to assure User access to the information.

Thursday, March 3, 2016

Using BPM to Assure Information Quality

I know, there is no such thing as 100% quality, but we need to challenge ourselves to get there. To catch potential data quality issues, you’ll need to create a set of validations, added to a process that will identify and fix them.  This set of validations can be automated within a workflow, identifying rules and actions to perform given certain conditions. Finding and fixing issues is an ongoing task: it requires a balance of vigilance and curiosity, as well as caring. Usually issues come about because there is spotty accountability somewhere in the flow of information from the source to the downstream systems.

Goal

Close any data integrity gaps by applying validation checks and fixes.

How does this happen?

It can happen very gradually and surreptitiously. Within a company, unless there’s a strong information quality department, there will inevitably be data inconsistencies because each department has different priorities and validation requirements. All it takes is one form in one application with lax data input requirements. It could also be a lack of validation checks during data input. Lastly, when software is upgraded or data is merged from one system to another, we wrongly assume that the source data is fully vetted.

Examples of how this happens

Downtime

Let’s say your system has an account lookup feature, but that feature is down, so you have to enter in the account information manually. The feature is fixed in a few hours, but by then you’ve entered 100 accounts. Does this data get validated later? If it doesn’t, a downstream application could have quality issues.
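A minimal sketch of catching those manually entered accounts, assuming the downtime window is known; the dates and function name are invented:

```python
# Flag accounts entered manually during a known downtime window so they can
# be re-validated against the lookup service once it is back online.
from datetime import datetime

DOWNTIME_START = datetime(2016, 3, 3, 9, 0)
DOWNTIME_END = datetime(2016, 3, 3, 12, 0)

def needs_revalidation(entry_time):
    """True if the record was keyed in while the lookup feature was down."""
    return DOWNTIME_START <= entry_time <= DOWNTIME_END

print(needs_revalidation(datetime(2016, 3, 3, 10, 30)))  # True
print(needs_revalidation(datetime(2016, 3, 3, 13, 0)))   # False
```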

Patient information merges and updates

Let’s say you work at a hospital. There’s a patient referral with the same medical record number (MRN) as an existing patient with the same birth date. The referral is entered under the existing patient. This error is caught later and the patient information is fixed, but did the patient already get treated?

Towards Quality with Process Automation

By inserting workflows into the process, specific types of data inconsistencies can be identified, investigated and resolved. Below are some general design components for building a quality validation workflow:
·         Figure out how to funnel all data/content through the validation workflow. Whether by doc type or by input source, the information can be collected and filtered as appropriate.

·         Create the rules to route issues into buckets. Here are some typical queues:
o   Routing: this queue has validation checks which compare metadata values against source of record values.
o   Issues queues: these correspond to the common issues that get identified.
o   Routing issues: this queue holds any doc that doesn’t match the issues queues.


·         These buckets can be evaluated during their initial manual fixes for potential automated solutions, and for identifying upstream, root-cause data issues.
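The bucket routing above can be sketched as an ordered list of predicate rules with a catch-all queue; the queue names and metadata fields below are hypothetical:

```python
# Minimal sketch of a validation-workflow router: each rule is a predicate
# over a document's metadata; docs that match no rule land in the catch-all
# "routing-issues" queue for manual review.
def route(doc, rules):
    """Return the first queue whose rule matches, else 'routing-issues'."""
    for queue, predicate in rules:
        if predicate(doc):
            return queue
    return "routing-issues"

rules = [
    ("missing-mrn", lambda d: not d.get("mrn")),
    ("bad-doc-date", lambda d: not d.get("doc_date")),
]

doc = {"doc_type": "Referral", "mrn": "", "doc_date": "2016-03-03"}
print(route(doc, rules))  # missing-mrn
```

Because the rules are plain data, new issue queues can be added as manual fixes reveal recurring root causes.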