Tuesday, September 20, 2016
Who would have thunk that these two ECM giants would end up under the same roof? OpenText is obviously paying a premium for Documentum's client base and solution services. At the dashboard level this deal must look excellent, but the details of the next five years of execution will be interesting. The media's favorite go-to pundit for reactions to Documentum events, John Newton (cofounder of Documentum), says "the product lines will wither away" amid neglect from OpenText in favor of its own ECM solutions. That sounds about right, but maybe OpenText has higher goals beyond the software and solutions, toward the content and process themselves.
Some folks characterize Documentum's customers as loyal. I don't buy it. Most of the original proponents of Documentum at these companies have moved on to different positions. In many large companies that have had it installed for over a decade, this is legacy software now. It's a matter of risk analysis, not loyalty. Point solutions, or more integrated suites, are stepping in and sweeping up the new projects.
Of course, Documentum has its tentacles deeper into certain industries than others. For example, pharmaceutical companies have relied on custom Documentum solutions for a while now. With 21 CFR regulations controlling content, the move to any other solution will be extremely painful. On the other hand, OpenText has its share of pharma clients as well. Maybe the strategy is to home in on the huge money-maker industries and dump the rest?
I’m not sure either platform will supersede the other. For years EMC threw in Documentum as a bonus to its storage solutions. What does this mean? At the metadata-architecture and storage level, it means that not only OOTB applications but also custom apps and integrations are dependent on the storage addresses. Migrating those to OpenText will be no small feat. I believe Documentum was slowly sinking into EMC’s storage hole anyway. My experience migrating AppExtender files to OnBase is a good example: the content sat in blobs, with links from Cerner pointing to it. We had to leave the links in place and move the underlying content pointers. We lucked out with this approach, but it left me wondering how other companies deal with a more comprehensive migration.
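The "leave the links, move the pointers" approach can be sketched roughly as follows. This is a minimal illustration only: the table and column names, blob IDs, and OnBase paths are all invented for the example, not the real AppExtender, OnBase, or Cerner schemas.

```python
# Hypothetical sketch of a pointer-remapping migration: external links
# (e.g. from Cerner) keep their stable link IDs; only the underlying
# content pointer is repointed at the new storage location.
import sqlite3

def remap_content_pointers(conn, id_map):
    """Repoint each document at its new storage location.

    id_map maps old blob pointers to new paths. The link_id column is
    never touched, so anything referencing the links keeps working.
    """
    cur = conn.cursor()
    for old_ptr, new_ptr in id_map.items():
        cur.execute(
            "UPDATE doc_links SET content_pointer = ? WHERE content_pointer = ?",
            (new_ptr, old_ptr),
        )
    conn.commit()

# Tiny in-memory demonstration with made-up identifiers.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE doc_links (link_id TEXT, content_pointer TEXT)")
conn.execute("INSERT INTO doc_links VALUES ('CERNER-001', 'blob:axdb:42')")
remap_content_pointers(conn, {"blob:axdb:42": "onbase://docs/42.pdf"})
row = conn.execute("SELECT link_id, content_pointer FROM doc_links").fetchone()
# link_id is unchanged; only the pointer moved
```

The design point is that the stable identifier and the physical location live in separate columns, so the migration only ever rewrites the location side.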
I have seen multi-tiered, multi-year upgrade and migration plans for large Documentum installs. The scale of internal Documentum knowledge needed to pull this off, year after year, is daunting. Asking those resources to then switch to a different ECM solution requires years of preparation and training. I worked for EMC while they were implementing Documentum solutions across their enterprise, and it was painful even though EMC owned them!
Having worked on both Documentum and OpenText solutions, I can tell you that both of these platforms are difficult to upgrade for different reasons:
OpenText: patches, steps, and bugs
At a pharma company, I spent a year on an OpenText upgrade, and the project was postponed because our upgrade process was still incomplete and took too long to fit into the downtime outage window. This kind of story happens with all large software installs, but the number of kludgy patches that had to be applied during the upgrade was disconcerting. The upgrade steps, for both the code base and the database changes, were far more complicated than for other packages. An upgrade process like this makes you understand why some CTOs refuse to upgrade anything unless it is faltering.
Documentum: customizations and changes
Most Documentum installs have some customizations, or extensive ones, built on WDK and DFC development. These APIs were excellent at the time. Over time, methods get deprecated and maintenance gets more expensive. Then, after a few years, you have to upgrade. This can be a daunting process even for the person who knows the software inside and out. Documentation specific to the custom development is probably buried inside the code itself, which means you have to understand both the older and the newer APIs. Adding a migration to OpenText on top of this is a nightmare.
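A first step in that kind of upgrade is usually just taking inventory of where the deprecated calls live. A rough sketch of such an inventory scan is below; the method names in the deprecated list are invented placeholders, not real WDK or DFC signatures.

```python
# Hypothetical sketch: scan source text for calls to known-deprecated
# methods before an upgrade, so the customizations can be triaged.
import re

# Placeholder names for illustration only -- not actual DFC/WDK methods.
DEPRECATED = {"getOldSession", "launchComponentLegacy"}

def find_deprecated_calls(source: str):
    """Return (line_number, method_name) pairs for each deprecated call."""
    hits = []
    pattern = re.compile(r"\.(\w+)\s*\(")  # matches ".methodName("
    for lineno, line in enumerate(source.splitlines(), start=1):
        for match in pattern.finditer(line):
            if match.group(1) in DEPRECATED:
                hits.append((lineno, match.group(1)))
    return hits

sample = "sess = mgr.getOldSession()\nobj = sess.fetch(doc_id)\n"
hits = find_deprecated_calls(sample)
# → [(1, 'getOldSession')]
```

A real triage would use the vendor's deprecation lists and a proper parser, but even a crude scan like this turns "buried in the code" into a worklist.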
Coming full circle on what matters most in the ECM game: maybe the unstructured and structured content itself (and mining it) is the most valuable asset after all. In the next decade, we'll see.
Tuesday, September 13, 2016
But aren't most Big Data solutions masking the poor information management of the past? Are they really trying to create better tools and quality control for potentially negligent information processing?
In his CIO article "Is enterprise content management becoming obsolete and irrelevant?" (March 16, 2016), Mitch De Felice builds on the common lament of AI and Big Data marketing that most data is unstructured. Unstructured by whose definition? I've seen search algorithms applied to very structured databases to mine for gold. Some algorithms are benevolent, others malignant. Some search for patterns and breakthroughs, others for revenue opportunities.
Mitch says, "ECM vendors need to shift their view from data storage to knowledge management." That has been happening for 30 years. It's not easy to squeeze knowledge out of "just good enough" data entry. ECM vendors have offered workflow, indexing, relationships between content and metadata, and more from day one. It's more fruitful to ask why "knowledge workers" continue to drag their feet and purchase only the bare minimum of ECM modules.
Inevitably, multiple waves of information-management process overhaul will happen. These waves will force much more structure underneath the software tools. Big Data is the option now because most companies are stuck with poor information QC on top of applications that are expensive and difficult to change quickly, let alone the cost of retraining users to provide more knowledge.