Tuesday, September 20, 2016

OpenText and Documentum and Content

Who would have thunk that these two ECM giants would be under the same roof? OpenText is obviously paying a premium for Documentum for its clients and solution services. At the dashboard level this deal must look excellent; however, the details of the next five years of execution will be interesting. The media's favorite go-to pundit for reactions to Documentum events, John Newton (cofounder of Documentum), says "the product lines will wither away" amid neglect from OpenText in favor of its own ECM solutions. This sounds about right, but maybe OpenText has higher goals beyond the software and solutions, toward the content and process.

It’s not about loyalty

Some folks characterize Documentum's customers as being loyal. I don't buy it. Most of the original proponents of Documentum in these companies have moved on to different positions. This is legacy software now in many large companies that have had it installed for over a decade. It's a matter of risk analysis now, not loyalty. Point solutions, or more integrated ones, are stepping in and sweeping up the new projects.

Depends on the Industry

Of course, Documentum has its tentacles more into certain industries than others. For example, pharmaceutical companies have relied on custom solutions from Documentum for a while now. With 21 CFR regulations controlling content, the move to any other solution will be extremely painful. On the other hand, OpenText has its share of pharma clients as well. Maybe the strategy is to home in on the huge money-making industries only and dump the rest?

Storage

I'm not sure either platform will supersede the other. For years EMC threw in Documentum as a bonus to its storage solutions. What does this mean? Well, at the metadata architecture and storage level it means that not only OOTB applications, but also custom apps and integrations, are dependent on the storage addresses. Migrating this to OpenText will be no small feat. I believe Documentum was slowly sinking into EMC's storage hole anyway. My experience migrating AppExtender files to OnBase is a good example: the content was stored in blobs, with links in Cerner pointing to them. We had to leave the links in place and move the underlying content pointers. We lucked out with this approach, but it left me wondering how other companies deal with a more comprehensive migration.

Older, Large Documentum installs

I have seen multi-tiered, multi-year upgrade and migration plans for large installs of Documentum. The scale of internal Documentum knowledge needed to pull this off, year after year, is daunting. To ask these resources to then switch to a different ECM solution requires years of preparation and training. I worked for EMC as they were implementing Documentum solutions across their enterprise, and it was painful even though EMC owned the product!

Upgrade pain level

Having worked on both Documentum and OpenText solutions, I can tell you that both of these platforms are difficult to upgrade for different reasons:

OpenText patches and steps and bugs
At a pharma company, I worked on upgrading OpenText for a year, and the project was postponed because our upgrade process was not complete and it still took too long to fit into the downtime outage window. This type of story happens with all large software installs; however, the number of kludgy patches that had to be applied during the upgrade was disconcerting. The upgrade steps, for both the code base and the database changes, were far more complicated than those of other packages. An upgrade process like this makes you understand why some CTOs do not advocate upgrading anything unless it is faltering.

Documentum customizations and changes
Most Documentum installs have anywhere from minor to extensive customizations built on WDK and DFC development. These APIs were excellent at the time. Over time, methods get deprecated and maintenance gets more expensive. Then, after a few years, you have to upgrade. This can be a daunting process, even when you know the software inside and out. Documentation specific to the custom development and changes is probably buried inside the code, which means you have to understand both the older and newer APIs. Adding a migration to OpenText on top of this is a nightmare.

Content is King


Coming full circle on what is most important in the ECM game: maybe the unstructured and structured content itself (and mining it) is the most valuable asset after all. Over the next decade, we'll see.

Tuesday, September 13, 2016

Big Data Delays the Inevitable

Just because companies have cheaper storage hardware and more computing power doesn't mean they have to spend the savings on Big Data solutions. Most IT shops know their content better than their CIO thinks they do; still, it's sexy to boast of a Big Data search engine applying algorithms to discover more revenue opportunities, for example, finding potentially billable ICD-10 work in doctors' notes at a hospital.
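To make that pitch concrete, here is a deliberately naive sketch of the kind of mining being sold: scanning free-text notes for phrases that hint at codeable, un-billed work. The phrases, codes, and notes below are illustrative only; a real solution would involve NLP and coder review.

```python
# Naive sketch: flag notes containing phrases that hint at billable work.
# The phrase-to-code mapping and the notes are made-up examples.
BILLABLE_HINTS = {
    "smoking cessation counseling": "Z71.6",
    "depression screening": "Z13.31",
}

def flag_notes(notes):
    hits = []
    for note_id, text in notes.items():
        lowered = text.lower()
        for phrase, icd10 in BILLABLE_HINTS.items():
            if phrase in lowered:
                hits.append((note_id, phrase, icd10))
    return hits

notes = {"N-1001": "Provided smoking cessation counseling for 10 minutes.",
         "N-1002": "Routine follow-up, no new complaints."}
print(flag_notes(notes))   # [('N-1001', 'smoking cessation counseling', 'Z71.6')]
```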

But aren't most Big Data solutions masking poor information management of the past? Are they really trying to help create better tools and quality control for potentially negligent information processing?

In his CIO article "Is enterprise content management becoming obsolete and irrelevant?" (March 16, 2016), Mitch De Felice builds on the common lament of AI and Big Data marketing that most data is unstructured. Unstructured by whose definition? I've seen search algorithms applied to very structured databases to mine for gold. Some algorithms are benevolent, others malignant. Some search for patterns and breakthroughs, others for revenue opportunities.

Mitch says, "ECM vendors need to shift their view from data storage to knowledge management." This has been happening for 30 years. It's not easy to squeeze knowledge out of "just good enough" data entry. ECM vendors have from day one been offering workflow, indexing, relationships between content and metadata, etc. It's more fruitful to ask why "knowledge workers" continue to drag their feet and purchase only the bare minimum of ECM modules.

Inevitably, multiple waves of information management process overhaul will happen. These waves will force much more structure underneath the software tools. Big Data is the option of the moment because most companies are stuck with poor information QC on top of applications that are expensive and difficult to change quickly, let alone the cost of retraining Users to provide more knowledge.

Thursday, August 25, 2016

What is Your Company’s ECM Narrative?

Does your ECM system have a story title like "Doc Repo" or "Approval Center", or is it called by its vendor name? Even a generic name is better than the vendor's name. A unique name gives the solution an identity. This identity could help expand the solution to other areas of the company. Some companies adopt an IT pet name for their systems and solutions, but many others just call their HR system "PeopleSoft", or their EMR system "AllScripts", etc. These names just promote the vendor, not the unique solution.

IT Management

Is management creative with IT applications? Is your Director a storyteller? Although ECM can survive on the ROI story, it could thrive on stories of larger scale and transformation. If the core driver of the solution's interests and energy is apparent, why not tell a story around that?

Mission Statement

These statements are created and posted on the wall. They can be inspirational like Google's, or more down-to-earth. Regardless, the naming of the IT solutions should reflect the tenets of the company's mission.

Try to come up with a story for your ECM solution

First, is there a story of the solution which everyone can relate to? If the majority of the content is based around patient information at a hospital, shouldn't the story of the system reflect that? For example, "Patient Care Depot." I know, this isn't very good, but you get the idea. With a solution name come the reasons why the solution exists and its benefits, not only for IT workers, but for Users and the population or problems it solves.

Second, you'll need to "balance the flexibility required for innovation with the routinization needed for ongoing operations" (Tushman and O'Reilly 1996). You can't create a completely new story which makes no sense or is too obtuse. The purpose of the system name and stories is to bring disparate, silo-oriented groups together with a common understanding of the solution.

The role of translation

The translation of an ECM vision to the specific company's unique processes and culture can be challenging. A translation implies "displacement, drive, invention, mediation—the creation of a link that did not exist before and that to some degree modifies two elements or agents" (Latour 1994, p. 32). With a story, the translation of the system's benefits can be made more relevant to the varied groups: Accounting has different perspectives than Marketing, but they both need to manage their content.

Transformation

A narrative needs to be transformative:  “Transformational innovation requires offering or doing something fundamentally different; a metamorphosis most organizations don’t excel at. Such innovation is disruptive because it introduces products and services that change the business landscape by providing a dramatically different value proposition. And championing transformational innovation involves going to war with all the elements inside an organization that benefit from the status quo.” (Stephen Denning)

Stories always have a beginning, a middle, and an end. The ECM solution is the middle of an ongoing story. It doesn't end; it just morphs into another solution and narrative, like Star Wars.

Wednesday, July 27, 2016

Where Large Company CMS and ECM Solutions Intersect

Wouldn't it be great if your company could combine the parallel efforts involved with developing and deploying internet/intranet/portal websites and document management/workflow systems? Below are some intersections that might help with synergies.

Expectations

Have your integration requirements clearly stated, understanding that there will be performance, security, and functional limitations.

Intersections

Easy

Links to document management application
·         This involves publishing links to documents (with some descriptive metadata) from a backend ECM solution to the Portal for presentation. This link serves up the document in the User’s client, usually in its native format and application.

New browser window from portal link
·         A link from the Portal to the ECM web solution (assuming SSO is set up) that opens in a new browser window. This would allow full access to the functionality and breadth of the underlying application.

Moderate

CMIS
·         If this is offered, it could be a way to perform basic import/export operations with documents (a small sketch follows).
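As a rough illustration, here is a minimal sketch using the Apache Chemistry cmislib Python client, assuming the ECM platform exposes a CMIS AtomPub endpoint. The URL, credentials, folder paths, and file names are placeholders.

```python
# Minimal CMIS import/export sketch with Apache Chemistry's cmislib
# (pip install cmislib). Endpoint, credentials, and paths are placeholders.
from cmislib import CmisClient

client = CmisClient('https://ecm.example.com/cmis/atom', 'svc_portal', 'secret')
repo = client.defaultRepository

# Import: create a document in a target folder.
folder = repo.getObjectByPath('/Portal/Drop')
with open('invoice-1234.pdf', 'rb') as f:
    folder.createDocument('invoice-1234.pdf',
                          properties={'cmis:objectTypeId': 'cmis:document'},
                          contentFile=f)

# Export: fetch the document back and write its content stream to disk.
doc = repo.getObjectByPath('/Portal/Drop/invoice-1234.pdf')
with open('invoice-1234-export.pdf', 'wb') as out:
    out.write(doc.getContentStream().read())
```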

Web services
·         Larger ECM solutions will offer web services access. The question is to what extent. Short of building your own custom services using the native API, this could provide enough integration.

Difficult

Search
·         The scoping of the search criteria, or limiting the search by doc type/metadata values across solutions (a scoped query sketch follows this list).
·         Complex searches, such as those below, are difficult to integrate at the portal level unless there's a solid integration between the systems.
o   Fuzzy
o   Sounds like
o   Term proximity
·         Searching indexes and presenting results within a single portal can pose many issues around performance and access control.
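For contrast, the simpler end of the spectrum, scoping by doc type and metadata, can often be expressed as a CMIS query, assuming the repository supports the CMIS query language; fuzzy, sounds-like, and proximity searches would still need the native search engine. The type and property names below are hypothetical.

```python
# Sketch: scoping a search with a CMIS query via cmislib, assuming the
# repository supports CMIS QL. 'acme:invoice' and 'acme:department' are
# illustrative placeholders, not real type or property names.
from cmislib import CmisClient

client = CmisClient('https://ecm.example.com/cmis/atom', 'svc_portal', 'secret')
repo = client.defaultRepository

results = repo.query(
    "SELECT cmis:objectId, cmis:name FROM acme:invoice "
    "WHERE acme:department = 'AP' AND CONTAINS('past due')"
)
for result in results:
    props = result.getProperties()
    print(props.get('cmis:name'), props.get('cmis:objectId'))
```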

Round About

Through a two-step publishing process
Maybe your CMS has out-of-the-box connectors to the most common business ECM solutions, like SharePoint. This could open an opportunity to use SharePoint as a surrogate repository, where both solutions connect to it and offer up common functionality.

“The Vendor Said”


A vendor's hyped-up solution offering of five years ago might be almost forgotten now. This could mean that, although your product offers a module for CMIS, its implementation and support have waned over time.

Wednesday, June 15, 2016

ECM Upgrade Best Practice

Frequency

Because most ECM systems have a regular yearly release, it is ideal to realize the benefits of upgrading every year. This will help with keeping up with OS and database upgrades as well.

SP2

In general, upgrades to enterprise software should wait for Service Pack 2 of the current release. This allows for initial release bugs to be fixed and stabilized. For OnBase, this usually puts the upgrade window in the third quarter of any given year.

Budget

If an upgrade is planned for every year, the budget line item will be smaller and more digestible for management.

Schedule

An upgrade schedule should be regular and expected. The more regular it is, the easier and less expensive it becomes. The resources involved won't be trying to remember all the steps from scratch each time, and the steps themselves will be more up to date.

IE, OS and Database

At the enterprise level, you don't want OnBase holding back IE releases, OS patches, or database upgrades. It makes sense to keep up.

Environments

With VMs and multiple environments, upgrade steps can be written and tested many times before the actual upgrade. This should help minimize risk for each upgrade cycle.

Enterprise system upgrade and implementation schedules

It’s important to get on the upgrade train each year (or two) as early as possible. There will always be larger, more important initiatives that will bump OnBase upgrades, but at least you’ll be on the list.
 
When new solutions are being developed, the requirement to be using the latest possible version of software is much more important. Even one year can make a huge difference between solution offerings.

Sample Upgrade Matrix

There are concerns and potential issues when it comes to planning and executing upgrades. As the scale and complexity of the solution increases, so does the upgrade matrix.

Concern                       Solution Details
Impact on Solution
Impact on Users
Environment Setup
Integration
Complexity
Cutover Approach
Downtime requirements
Testing
Upgrade Risk
Risk of Waiting
Resources

Sunday, June 5, 2016

"The future is already here..."

“The future is already here — it's just not very evenly distributed.”
-- William Gibson

This quote, when applied to ECM, opens up doors of understanding. To know how solutions start and propagate throughout an organization, you must first start with the group most interested in the future, and why they are pushing IT in that direction.

Sometimes, it’s the promise of ROI, or getting rid of paper, or access to information outside of the secure network. Whatever the vision, the solution (the future) is not distributed all at once. ECM is a message; a movement that is planted, then it spawns to other departments. The success of the distribution is dependent on many factors.

Beginnings

The big bang vs. gradual implementation is always a discussion point when any expensive software is considered. Scoping the implementation just right is essential: the delivery date will slip if the scope of the project is too broad; however, the project's influence will suffer if the scope is too small. First implementations are politically charged. There are managers who feel slighted, disagreements in hallways, new alliances that strain old ones.

Follow Ups

It's tempting to copy the first implementation with the same formula, the same business requirement steps, the same functional specs, and so on. Be cognizant of this. Each business process is different enough to warrant different approaches. Follow-ups should not be delayed; they need to progress until the scope of the initial vision is complete. For example, all of the incoming orders that were on paper are now scanned and indexed in every branch, and automation of that piece is complete.

Realizing the next phase


Every future has another one on its tail. By the time the final scanner is in place, a new vision is hatching. For example, a director wants to fix a broken process where invoices are getting lost; or there's an information quality issue with the way the scanned orders are getting indexed. All solutions introduce new issues and therefore new solutions. Innovation never ends, only the sales pitch does.

Sunday, May 29, 2016

The Role of Licensing in ECM

Too many times the potential of your ECM rollout gets sidelined or delayed because of budgetary issues. The right people have the right skills, the infrastructure is in place, but the needed module license was not in the budget. If you are lucky, management is on board and willing to take the flak for asking for extra money; however, this is unlikely.

Initial rollout 

ECM systems have a lifecycle just like all other software solutions. It could be that a new director is hired and she wants Hyland OnBase instead of Documentum for scanning. If you are in this type of situation, where a new system is in the budget plan, then get to work on that five-year plan.

ROI

Any project that includes converting paper or processes into a software solution has plenty of ROI; you just have to know how to calculate it. This is essential to justify funding the solution's full lifecycle of licenses. Get as many diverse measurements of costs and savings as possible. Too little attention is given to this. All you have to do is talk to the people who are pushing the paper: how much time is spent trying to find stuff, what kind of decisions wait for this, what is this time worth? And so on…
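As a back-of-the-envelope illustration, every number below is a placeholder to be replaced with figures gathered from those conversations:

```python
# Back-of-the-envelope ROI sketch; all figures are hypothetical placeholders.
hours_searching_per_week = 6        # per clerk, time spent finding documents
clerks = 12
loaded_hourly_rate = 38.0           # salary + benefits, rough estimate
weeks_per_year = 48

annual_search_cost = hours_searching_per_week * clerks * loaded_hourly_rate * weeks_per_year

license_and_services = 250_000.0    # year-one ECM spend (hypothetical)
annual_maintenance = 45_000.0
expected_reduction = 0.60           # assume 60% of search time is eliminated

net_annual_savings = annual_search_cost * expected_reduction - annual_maintenance
print(f"Annual search cost: ${annual_search_cost:,.0f}")
print(f"Net annual savings: ${net_annual_savings:,.0f}")
print(f"Simple payback:     {license_and_services / net_annual_savings:.1f} years")
```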

Get it all now or forever hold your peace

It's easier to buy all the modules up front as one large budget than it is to piecemeal them in later. A module for $15k is nothing in an overall expense of $500k; however, two years later, $15k may not get approved as a standalone budget item. The shine of the new ECM solution wanes as the years go by; once the big ROI has passed, it's just harder to justify. If possible, negotiating all of the enterprise licensing up front is a better strategy than waiting for buy-in a few years later.

Multiyear change in direction of a strategy

If the nature of the solution changes, it's easier to exchange modules than it is to buy new ones. Let's say that one module was never implemented, but a new mobile module is part of what your CIO wants. Exchanging licenses could be easier and quicker than making an extra, emergency budget request.

API and integration modules


Purchase the API licenses all at once. They are usually less expensive than the packaged solutions, but can be very useful as the solution matures and integration is revisited. Build in flexibility wherever possible; this means having the option to create minor customizations when needed.

Sunday, May 22, 2016

ECM Naming Convention Checklist

Overview

In general, the naming convention should entail enough descriptive qualities to make it obvious to which group the content belongs. Being able to quickly identify content context is important. The basic building blocks should follow how the organization’s security model is structured. For example, if the company uses Active Directory and assuming it is representative of the security structure, the names usually follow a convention:
(Area)(Department)(Category/Function)((Sub Function))
Not all of the above keys have to have a value. This is a guideline which helps focus the naming of the objects and structures that follow. Keep in mind that exceptions are always part of a naming convention.
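As a rough sketch of how such a convention can be checked programmatically, assuming dash-delimited segments and hypothetical controlled vocabularies for Area and Department:

```python
# Sketch: build and validate object names against the
# (Area)-(Department)-(Category/Function)-(Sub Function) convention.
# The vocabularies and segment lengths are hypothetical examples.
import re

AREAS = {"CORP", "CLIN", "MFG"}
DEPARTMENTS = {"HR", "FIN", "HIM", "QA"}

NAME_PATTERN = re.compile(
    r"^(?P<area>[A-Z]{2,4})-(?P<dept>[A-Z]{2,4})"
    r"-(?P<func>[A-Z0-9]{2,12})(?:-(?P<subfunc>[A-Z0-9]{2,12}))?$")

def build_name(area, dept, func, subfunc=None):
    parts = [area, dept, func] + ([subfunc] if subfunc else [])
    return "-".join(p.upper() for p in parts)

def validate_name(name):
    m = NAME_PATTERN.match(name)
    if not m:
        return False, "does not match (Area)-(Dept)-(Function)[-(SubFunction)]"
    if m.group("area") not in AREAS:
        return False, "unknown area '{}'".format(m.group("area"))
    if m.group("dept") not in DEPARTMENTS:
        return False, "unknown department '{}'".format(m.group("dept"))
    return True, "ok"

print(build_name("clin", "him", "ROI"))   # CLIN-HIM-ROI
print(validate_name("CLIN-HIM-ROI"))      # (True, 'ok')
print(validate_name("CLIN-XYZ-ROI"))      # (False, "unknown department 'XYZ'")
```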

Goals

Naming conventions give content a location and relevance tag. They tell the User what it is and where it belongs. By consistently following a convention, the system will be able to scale and still be coherent.

Scope

Each department has its quirks as to how it works. It is important to home in on how it describes its content, with the enterprise in mind. Try not to spend an inordinate amount of time splitting hairs over details. For multipurpose departments, move the structure out to a general level which encompasses all of their responsibilities, then work on describing each path, but keep it simple. Think about how the names will be understood by Users who are not privy to your department's abbreviations.

Abbreviations

Every organization uses common and idiosyncratic abbreviations depending on its industry. The challenge is to be concise, yet clear. Some abbreviations can be too short and cause confusion. For all naming conventions it is critical to be as concise as possible.

Exceptions

Focus on the core convention and structure and note the exceptions. The exceptions should be handled by metadata or other means.

Assumptions

The main assumption is that the naming convention will be followed and enforced. Each department should be given some control over how it wants to describe its content; however, some common structures will be imposed.

Common Structures

Before creating your own convention, check with other sources of content to make sure there is no convention in place. If there is, compare yours with theirs to see if adopting theirs makes sense. Maybe the first two levels should be incorporated, with the third being what you focus on. The values should be abbreviations if possible and obvious in meaning.

Example

(Area)-(Department)-(Control Number)-(Doc Type Identifier)

Taxonomy

This can be thought of as a way to get to your content. It could be a folder structure, or cascading categories. Be careful not to go too deep; at some point metadata will take over in describing the nuances of content.

Example

(Area)/(Department)/
                (Specific Level 1)/(Special Level 2)…/
                                (Doc type)
A “Specific Level” could be a functional or category/doc type pair.

Content

Document Types

The naming of doc types should be clear and concise. It should be obvious what department and function they belong to.

Example

(Department)-(Functional/Category Name)-(Doc type name)

File Names

File names become important during normal file exporting and migrations. When content is used outside of the system there should be identifiers that help place the content in context. There could also be a reference back to the system’s numbering system.

Example

(Department)-(Doc type name)-(Content Relevant Identifier (for example Title or Patient MRN))(System Number).(Format)
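A small sketch of composing export file names along these lines; the field choices and the sanitizing rules are assumptions, not a vendor standard:

```python
# Sketch: compose export file names per
# (Department)-(Doc type name)-(Content Identifier)(System Number).(Format)
import re

def export_filename(department, doc_type, identifier, system_number, fmt):
    def clean(value):
        # Strip characters that are unsafe in file names, including spaces.
        return re.sub(r"[^A-Za-z0-9]+", "", str(value))
    return "{}-{}-{}{}.{}".format(
        clean(department), clean(doc_type), clean(identifier),
        clean(system_number), fmt.lower().lstrip("."))

print(export_filename("HIM", "Discharge Summary", "MRN00123456", "D-778812", "pdf"))
# HIM-DischargeSummary-MRN00123456D778812.pdf
```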

Title and Headings within Document

The title should be concise, especially if it will be in the filename. It should also reflect any metadata values associated with the document.

Document Information Block within Document

A block of information as a header or footer to a document is a feature of paper-based controlled documents. In the block you'd have the Title, Doc Number, Doc Date, Doc Effective Date, and so on. This is metadata for the printed page. If there is a need, printing a footer with this information still makes sense, but the content within the document should not carry it; it should be applied only during printing or saving to a file outside of the system. The properties of the exported files could also be used for metadata population.

Metadata

System

System metadata already follow a naming convention and are usually prefixed with "SYS" to denote them. The corresponding database tables follow functional naming conventions which are sometimes cryptic, but logical.

Dublin Core

The Dublin Core is a common set of metadata across ECM systems. It includes author, doc date, description, name, etc. Don't duplicate these unless your naming rules are different.

Department specific

Keeping track of which metadata is used for which purpose or application can be challenging as the system grows. Naming metadata specific to its purpose and project is advised.

Example

(department)(project name)(metadata name)

References or Relations

If a cross reference is needed, make sure the object naming is consistent with the purpose of the link.

Processing

The naming of business processes or workflows needs to follow the same conventions as all the other preceding objects. It should incorporate the common elements as well as the specific ones.

Publishing

When the ultimate goal of the ECM workflow is to publish to a portal, the naming conventions should follow those of the portal. Having to map or look up values may not scale at a portal's level of use.

Security

As mentioned above, the security hierarchy and its naming convention are a first indication of how well organized the company’s structure is. The naming of the groups should be considered when thinking about the naming of projects, folders, and doc types.

Compliance

Many systems have to comply with regulations like 21 CFR Part 11. This type of scrutiny applies to the information architecture and the system's content.

Regulations

Outside regulatory bodies could impose certain naming conventions which need to be followed.

Audits

Auditors need to be able to ask for information in general terms with an understanding that you will know what they need. It's vital to search only for what they are focusing on, and this requires a good naming convention and robust metadata.

I18n

Introducing a foreign language to the system multiplies the complexity of the solution. Not only are the metadata values multiplied, but the naming conventions are multiplied as well. Most ECM systems can accommodate this; however, folder names and metadata might have to be duplicated with the foreign-language values.

Search

Search is only as good as the quality of the metadata values and the comprehensiveness of full-text indexing.

Change Management

As organizations change, areas and departments get moved around and new names are designated.

Regression Mapping

Keeping track of structural changes can be challenging. Depending on the scale of the change, creating a map to previous taxonomies or department names can be helpful.
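A minimal sketch of such a regression map, with hypothetical paths, so that old links and saved searches can be translated to the current taxonomy:

```python
# Sketch: map retired taxonomy paths to their current equivalents.
# The paths below are hypothetical examples.
REGRESSION_MAP = {
    "CORP/Purchasing/Invoices": "FIN/AP/Invoices",
    "CORP/Personnel":           "CORP/HR",
}

def translate(old_path):
    # Longest-prefix match so children of a renamed branch follow it.
    for old, new in sorted(REGRESSION_MAP.items(), key=lambda kv: -len(kv[0])):
        if old_path == old or old_path.startswith(old + "/"):
            return new + old_path[len(old):]
    return old_path   # branch unchanged

print(translate("CORP/Purchasing/Invoices/2015"))   # FIN/AP/Invoices/2015
```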


Thursday, May 5, 2016

Obscured by ECM Clouds

If your CTO says it is "impossible" for the hyperconverged cloud to go down, you know you and everyone else will be in for a long night at some point during the cloud's stabilizing period. Nothing is infallible, not even the cloud. If you are pushing the technology edge, then you need to own up to the inevitability of a confluence of issues. So you have to ask yourself, "What steps would have to be skipped, or overlooked, during the design, development, and implementation of a cloud system to get to the point of an emergency downtime of your foolproof network?"

Hypothetically, let’s say one bug in the software could blue screen all of the domain controllers in every redundant location at the same time. There are a few points to consider when reviewing this type of failure:

The inexperience of those in control, at both the technical level and the blind-faith managerial level

With new technology even the experts make mistakes. When the outage happens, are the persons caught in the headlights fully trained and part of the initial design and development, or are they the "B support team"? This is a critical mistake made over and over again by IT leadership and financial stewards, where it is deemed okay to bring in experienced consultants to design and implement a new technology solution and then leave it to the less experienced support team to maintain and upgrade, without proper training and onsite support.

Lack of resources to provide an acceptable factor of safety

In the rush to curtail costs, the system suffers. The "secure and agile IT services" cloud is not a one-off capital expense. Cutting operational costs too drastically will show up as emergency outages and other incidents over time. As with any system, the change must be methodical, with a factor of safety that is understood by all business partners. It's no excuse to cut corners because there's "no budget." Try saying that to a surgeon.

Make sure someone is always accountable

In many cases, the business is cajoled into taking what IT says for granted, but when the system goes down it might be surprised to find out that no one is ultimately held accountable. "Virtualizing and hyperconverging its data center" could also end up virtualizing the accountability of the system, which in turn means that a Root Cause Analysis will never fully explain what really happened, if it ever gets sent out…


Lack of decoupled, identical Test environment

If your company cannot afford a decoupled test environment that mimics the cloud setup, it is adding risk to the implementation. The vendor should at least provide a comparable test environment to test bug fixes and service packs. If you had this and the outage still occurred, the failure points to the infrastructure team, its manager, its director, and ultimately the CIO.

Cognitive Bias toward “If it runs, don’t upgrade”

There can be a bias with some CTOs to only fix bugs with bug patches, and to never upgrade the virtualization system software unless the infrastructure requires it. In the end, "hyperconvergence" is a term that is meant for theoretical analysis, not ROI, because the hidden costs of implementing this new technology are everywhere; you just have to know where to look. Also, the risks of implementing an internal cloud are greater than those of going with the established, large cloud services.


Monday, April 25, 2016

ECM Maturity Markers

The bell curve graph above is meant as a rough visual representation of the maturity of ECM. On the Y-axis is ROI, on the X-axis is Time. Some companies have more of a long-tail graph. Some have a straight line that never achieves ROI.

Marker #1 (year 1)

Discovery and Roadmap Development

The business is interviewed and given the heads-up that change is coming.

Driving Force

Sets the stage for how the whole rollout will progress
Ex: Finance vs. HIM: the first project usually gets the most attention, the onsite vendor team, the full budget request, the flagship presence.

Scoping

The project with the most bang for the buck is chosen and developed. The focus is on this project, regardless of the impact on other teams and systems.

ROI measure

It's absolutely critical to get baseline measurements of the current processes: measurements that highlight the FTEs involved with pushing paper, referencing multiple systems, etc.

First project Implementation

Usually the ECM vendor or associated consulting group installs, configures and deploys the first implementation with a lot of fanfare. It might even get mentioned by the CIO.

Turnover

Within a year, the initial consultant(s)/employee(s) will leave the project or company. They are there to feed off the initial expense budget, then leave to get on another budget train.

Marker #2 (year 1-5)

Execute Roadmap

This is when the ECM manager attempts to execute as much of the ECM Roadmap as possible. As the ROI is realized, there are second and third waves of getting the most out of the investment. Licensing costs are scrutinized. Upgrades are delayed.

Propagation

In some cases, the roadmap is rewritten to expand the initial vision of ECM; in others, it is reduced. If the ROI goal proves hard to reach, more projects may get started to reach it.

Stop Gaps

ECM suites are capable of serving as stopgap solutions in many different areas. As other systems labor on with obsolete technology, ECM can sweep in and save the day. As older systems are replaced, the documents and images need a relatively inexpensive place to be stored for retention. This is perfect for ECM.

Integration

All possible integrations that streamline the indexing and processing of incoming content are added to the solution.

Marker #3 (year 5-8)

Commodity

ECM is fully mature; its expansion is over. The original team is smaller. Most of the maintenance is routine.

Maintenance

The biggest issues are scaling for storage and performance. In large ECM systems this becomes more and more of an issue. The original Roadmap usually doesn't include plans for splitting up a repository that is now huge.

Cloud

At some point, a director will make the case to move the system to the cloud. Because it is huge, the argument to move it might be a good one.

Marker #4 (year 8 to XX)

Superseded by Better Technology

There will be a time when the shiny, new system becomes old and obsolete. The need to simplify and break apart the system becomes a necessity for survival. Technical advancements will become too glaring to ignore or work around. Migration projects might start up to gradually dismantle the solution, piece by piece, and move it into more modern systems.

Life Support


As new technology pressures increase, it may be an option to put the system on life support and leave it for retention only.

Friday, April 8, 2016

9 Ways to Mitigate Risks of ECM Upgrades

We all know ECM upgrades can be challenging; we also know that in a perfect world there would be no hiccups. When does that ever happen? Some of the worst issues can explode when an upgrade is "supposed to have no impact" on other systems.

In his blog article, ECM System Production Deployments – What Could Go Wrong?, Mike Bourn describes ways to mitigate the "busted weekend," covering the infrastructure and expertise requirements needed to reduce risk. I'd like to add a few others:



1. Frequency
There's a reason, beyond cost, that cloud services are supplanting onsite installations: maintenance. It's a way to have others do the upgrades every year without the hassle of begging for resources. If you still have your ECM system in-house, great! Keeping up with each version's service pack 2 release is ideal for mitigating the various infrastructure and database risks.
2. Number of environments
The number of times you practice the steps involved with an upgrade is directly related to its success. 
3. Using Prod data
I can't emphasize enough the importance of refreshing your environments with production data. The majority of content files don't necessarily have to be copied; the concern is mainly data quality and accurate upgrade-duration measurements.
4. Type of Documentation
Write documentation as if you have been up for 24 hours and are stressed out. Create lists and take screenshots. Make believe you are dumb and need step-by-step instructions; don't cut corners. Also, don't blow off any install messages: these can blow up upgrades.
5. Reliance on outside services
It's fine to have tech support on a retainer for the cutover weekend, but I would try to do all of the actual prep work, testing, and troubleshooting yourself. This is one of the best chances to learn more about your system, to gain an in-depth understanding beyond what the manuals or basic maintenance can offer.
6. Empowering the right individuals
Try to include as many individuals as possible in the upgrade procedures. Distribute the troubleshooting tasks among them.
7. Testing Efficiently
Regression testing can get so bogged down in the weeds that larger issues are missed entirely. Try to design a multifaceted strategy that balances the number of tests against the likelihood that each area would be affected by the upgrade.
8. Diversity of Users' OS and Browsers
Please don't forget that the User base is not like you. Their PCs may not have been included in the last round of OS upgrades or patches. Their web browsers or Java versions may be configured with different options. Their PDF viewer may be old.
9. Ready to Go
If you have a multi-person team, keep one person home and rested, ready to come in the following day to deal with the post-upgrade issues, because there will always be some.

There are infinite ways an upgrade weekend can go bust. By drilling into each of the above areas, you should mitigate many of the potential hiccups. If you are lucky, the issues will be minimal. You will be able to rest on Sunday before going back in for more on Monday.

Monday, March 21, 2016

Invaluable Individual Contributors

"These people are the highly professional individual contributors.  In many cases they have deliberately chosen not to pursue a managerial career, preferring technical work or wanting to avoid the duties associated with being a manager, including budgets, reports, endless meetings and the never-ending people issues... Nearly everything they accomplish they do through influence, because they usually lack any formal 'role power'.” Jack Zenger, Forbes *
We all know the individual contributors in our departments or organization who are invaluable. That's the problem: when they leave or retire, they will take a huge amount of knowledge with them.
Now is the time to shadow them, to write everything down, to fully understand how their minds work and how they troubleshoot. If you don't, you might as well budget for 2-3 more positions to compensate. Plus, that service level agreement that worked so well? Forget about it.
This individual is more than herself; her connections and the accumulated trust others have in her are part of the position she held. The invisible dotted lines to her need to be understood.

Documentation standards

Ok, your organization has good documentation standards, but what about the assumed or taken-for-granted activities that get things done? How do you document respect and trust?

The underlying climate

At the doer level there are always gripes people have with the process of getting work done. What are these issues? If you have invaluable workers, then you have issues with this process.

Mandatory shadowing / agile techniques

You could cross-train your team, shadowing the invaluable with the newbies. This gets you part of the way there. It does not negate the need for formal training. Each individual will still thrive at what they do best, not at what was done, or how it was done, previously.

Leaving a void

The invaluable compensate for broken processes by frequently "saving" the day. They mask any issues with project management by patching the issues as soon as they occur. They serve as backup when other folks can't figure out what to do. They are victims of their own success in that they enable a less structured approach to documenting requirements, specifications, schedules, and processing. And when they leave, good luck filling their void.

Monday, March 14, 2016

Browsers Ride the Upgrade Wave

Many times after implementing an ECM solution, or upgrading, Users come out of the woodwork and complain about issues using the system. Wait, the testing phase took a month and was meticulous. Did you fully inventory the User base? Did the Windows group really give you all of the possible Web browser apps and versions which access the site?

Detection Tools

Webserver Logs

Analyzing the current web server logs should reveal the web browser spectrum. For a certain amount of time before the analysis, the logging configuration may have to be changed to capture more detail.
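As a rough sketch, assuming a combined-format access log where the User-Agent is the last quoted field, a short script can tally browser families; a real inventory would use a proper user-agent parsing library. The log path and family rules below are placeholders.

```python
# Sketch: tally browser families from a combined-format access log.
# "access.log" and the naive family markers are placeholders.
from collections import Counter

FAMILIES = [("Trident", "Internet Explorer"), ("MSIE", "Internet Explorer"),
            ("Edge", "Edge"), ("Chrome", "Chrome"),
            ("Firefox", "Firefox"), ("Safari", "Safari")]

def family(user_agent):
    # Order matters: Edge UAs also contain "Chrome", Chrome UAs contain "Safari".
    for marker, name in FAMILIES:
        if marker in user_agent:
            return name
    return "Other"

counts = Counter()
with open("access.log", encoding="utf-8", errors="replace") as log:
    for line in log:
        # Assume the user-agent is the last double-quoted field.
        parts = line.rsplit('"', 2)
        if len(parts) == 3:
            counts[family(parts[1])] += 1

for name, n in counts.most_common():
    print(f"{name:20s} {n}")
```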

Traffic sniffers

Wireshark or Fiddler can be used to capture HTTP traffic into the web server. Details of the web browsers can be gleaned from these captures.

Full disclosure beyond CYA

There are always pockets of non-compliance in your organization. Even if you have identified them, they may push back with reasons why they can't upgrade their web browser, usually because they work with a system that needs to be upgraded as well.

Upgrade Wave

Chances are good that your IT review board does not orchestrate all system upgrades by web browser type and version. It typically pays attention to the most expensive and complex systems, leaving the ancillary applications to fend for themselves. Wouldn't it make sense for all applications to be orchestrated at the User level first, that is, to list out all systems based on browser compatibility?

Before Interoperability

Of course, one of the key aspirations of "interoperability" of large systems is mapping and synchronizing data; however, this can't happen without seamless coordination of web browser types and versions to assure User access to the information.

Thursday, March 3, 2016

Using BPM to Assure Information Quality

I know, there is no such thing as 100% quality, but we need to challenge ourselves to get there. To catch potential data quality issues, you’ll need to create a set of validations, added to a process that will identify and fix them.  This set of validations can be automated within a workflow, identifying rules and actions to perform given certain conditions. Finding and fixing issues is an ongoing task: it requires a balance of vigilance and curiosity, as well as caring. Usually issues come about because there is spotty accountability somewhere in the flow of information from the source to the downstream systems.

Goal

Close any data integrity gaps by applying validation checks and fixes.

How does this happen?

It can happen very gradually and surreptitiously. Within a company, unless there's a strong information quality department, there will inevitably be data inconsistencies because each department has different priorities and validation requirements. All it takes is one form in one application with lax data input requirements. It could also be a missing validation check during data input. Lastly, when software is upgraded or data is merged from one system to another, we wrongly assume that the source data is fully vetted.

Examples of how this happens

Downtime

Let’s say your system has an account lookup feature, but that feature is down, so you have to enter in the account information manually. The feature is fixed in a few hours, but by then you’ve entered 100 accounts. Does this data get validated later? If it doesn’t, a downstream application could have quality issues.

Patient information merges and updates

Let's say you work at a hospital. There's a patient referral with the same medical record number (MRN) as an existing patient with the same birth date. The referral is entered under the existing patient. This error is caught later and the patient information is fixed, but did the patient already get treated?

Towards Quality with Process Automation

By inserting workflows into the process, specific types of data inconsistencies can be identified, investigated, and resolved. Below are some general design components for building a quality validation workflow (a small routing sketch follows the list):
·         Figure out how to funnel all data/content through the validation workflow. Using doc type or input sources, the information can be collected and filtered as appropriate.

·         Create the rules to route issues into buckets. Here are some typical queues:
o   Routing: this queue has validation checks which compare metadata values against source of record values.
o   Issues queues: these correspond to the common issues that get identified.
o   Routing issues: this queue holds any doc that doesn’t match the issues queues.


·         These buckets can be evaluated during their initial manual fixes for potential automated solutions and for identifying upstream, root-cause data issues.
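A minimal sketch of the routing logic described above; the metadata fields, the source-of-record lookup, and the queue names are hypothetical placeholders.

```python
# Minimal routing sketch. Fields, lookup, and queue names are hypothetical.
def lookup_source_of_record(account_number):
    # Stand-in for a call to the system of record (ERP, EMR, etc.).
    return {"12345": {"department": "AP"}}.get(account_number)

def route(doc):
    """Return the work queue a document should land in."""
    meta = doc.get("metadata", {})
    record = lookup_source_of_record(meta.get("account_number", ""))

    if record is None:
        return "ISSUE_UNKNOWN_ACCOUNT"        # no match in the source of record
    if record["department"] != meta.get("department"):
        return "ISSUE_DEPARTMENT_MISMATCH"    # metadata disagrees with source
    if not meta.get("doc_date"):
        return "ISSUE_MISSING_DATE"           # common data-entry gap
    return "ROUTING_OK"                       # passes validation, continue workflow

doc = {"doc_type": "Invoice",
       "metadata": {"account_number": "12345", "department": "AP", "doc_date": ""}}
print(route(doc))   # ISSUE_MISSING_DATE
```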