Wednesday, December 22, 2010

Fear and Loathing in Content Management

You know an atmosphere of fear and loathing has a grip on an ECM team when no content gets deleted, all customization is outsourced, communication is restricted to your immediate manager, and there are no formal business requirements. Boiko’s “Laughing at the CIO” comes to mind. It could be my present to the CIO this year…

Nothing Gets Deleted
“Oh no, we can’t delete anything. The boss is afraid to delete anything, he came from quality, and we really wouldn’t know what to delete anyway.” Have you heard this before? Are there too many issues to tackle before even thinking about records management and retention schedules? Fear of unknown reprisals, should a vital record be deleted, is very real when there are no principles and no understood requirements around disposition.

All Customization is Outsourced
This is a cover-your-ass maneuver under the guise of saving money. If you fear being responsible for actually understanding and building solutions to company-specific issues like integrations and metadata management, why not outsource? The long-term effect is a skeleton crew of implementation and support workers who are bored and loathe the lack of process and career potential. When most technical design decisions are made outside of the ECM group, there’s usually one manager who knows most of the details and hoards the knowledge for fear of losing his job. His employees loathe his shortsighted design decisions.

Communication Restrictions
“There’s a certain protocol here. If you have an idea on how to improve something, you talk to your direct boss, who talks to her direct boss, who talks to his direct boss, and so on.” This is a symptom of the fear of being made obsolete by contributors who are smarter than you are. Ideas and complaints should both be shared freely. Otherwise, communication is a trickle in the desert at best.

Business Requirements
Are there requirements for the ECM system? Have they been updated? Have the business users been doing whatever they wanted, without any standards for metadata, templates, workflows, etc.? You’d be surprised how many ECM systems are still dumping grounds for whatever the business wants to throw in there. If users are complaining about performance and search, then chances are very good that they should be self-loathing, because they brought these issues on themselves.

Friday, December 17, 2010

An Approach to Classification Change

Ease Change to Assure Adoption
Make using the new classification on the homepage of the portal or ECM system an option rather than a mandate; that is, provide the old homepage in parallel with the new look and let the users decide which is better. Users will either adopt it or not. Either way you’ve reduced the risks of a wholesale change. Chances are good that the new classification will be faster and more amenable to the way users think about the company’s information.

Align Classification Schemes and Labels
Research all of the classification schemes in your company and attempt to conform them to a matrix where comparisons can be made. Look at common values. Look at classification labels which repeat themselves. Weigh the priority of following the lead of initiatives that have momentum, for example, if SharePoint is being adopted as a platform, look at how the tabs are labeled and try to conform to those. The goal is to work together for a common label structure for many reasons beyond the portal or ECM system. The goal is to create a reference for integration of search, records management, security, etc. This is one way to slowly achieve continuity of classification.
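One way to sketch that comparison matrix (the scheme names and labels below are purely hypothetical) is to normalize each scheme’s labels and tally which schemes share them; labels used by every scheme are the safest candidates for the common label structure:

```python
from collections import defaultdict

# Hypothetical top-level labels harvested from three classification schemes.
schemes = {
    "portal":     ["HR", "Finance", "Projects", "Quality"],
    "sharepoint": ["Human Resources", "Finance", "Projects", "IT"],
    "records":    ["Finance", "Quality", "Legal", "Projects"],
}

# Normalize labels so near-duplicates line up (e.g. "HR" vs "Human Resources").
aliases = {"hr": "human resources"}

def norm(label):
    key = label.strip().lower()
    return aliases.get(key, key)

# Build the matrix: normalized label -> set of schemes that use it.
matrix = defaultdict(set)
for scheme, labels in schemes.items():
    for label in labels:
        matrix[norm(label)].add(scheme)

# Labels shared by every scheme are candidates for the common structure.
common = sorted(l for l, s in matrix.items() if len(s) == len(schemes))
print(common)  # ['finance', 'projects']
```

Everything that falls outside `common` is where the real alignment work (and the weighing of momentum, e.g. deferring to SharePoint’s tab labels) begins.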

Mapped Metadata to Metadata Repository
In large organizations there are many reports, spreadsheets, databases, websites, etc., which proliferate different ways of describing content and information. These cause confusion and turf wars among the groups responsible for applications. The integration groups are caught in between, trying to map one value to another in attempts to patch and process content flows. Mapping metadata is a stopgap approach, but it does not deal with the larger issue of working toward a central metadata repository.
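That stopgap mapping can be sketched as a simple crosswalk lookup; every application name, field, and value below is hypothetical:

```python
# Hypothetical crosswalk: each source application's (field, value) pair mapped
# to the canonical field and value of an eventual central metadata repository.
CROSSWALK = {
    ("invoicing", "doc_type", "INV"):     ("documentType", "Invoice"),
    ("invoicing", "doc_type", "PO"):      ("documentType", "Purchase Order"),
    ("intranet",  "category", "invoice"): ("documentType", "Invoice"),
}

def to_canonical(app, field, value):
    """Map one application's metadata value to the canonical vocabulary.

    Returns None when no mapping exists -- those gaps are exactly the
    turf-war cases that need a human decision, not more patching."""
    return CROSSWALK.get((app, field, value))

print(to_canonical("invoicing", "doc_type", "INV"))  # ('documentType', 'Invoice')
print(to_canonical("intranet", "category", "memo"))  # None -> unmapped
```

The limitation the paragraph describes is visible here: the table only patches known value pairs one at a time, and grows without bound until a central repository makes it unnecessary.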

Taking the Long-Term Approach
  • Design and develop an enterprise metadata and classification model based on industry standards and integration requirements.
  • Pick the project that has the most traction and momentum and make sure it classifies and describes its data according to the enterprise model.
  • Phase adoption and changes into other applications while the latest "killer app" matures.
  • Integrate all applications at the metadata and classification levels, assuring bi-directional change interfaces.
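A minimal sketch of the second bullet's "classify according to the enterprise model" check, with an invented model and fields:

```python
# Hypothetical enterprise model: required fields and their allowed values.
ENTERPRISE_MODEL = {
    "documentType": {"Invoice", "Purchase Order", "Report"},
    "recordsClass": {"FIN-100", "FIN-200", "ENG-300"},
}

def conformance_issues(doc_metadata):
    """Return a list of ways a document's metadata violates the model."""
    issues = []
    for field, allowed in ENTERPRISE_MODEL.items():
        if field not in doc_metadata:
            issues.append(f"missing {field}")
        elif doc_metadata[field] not in allowed:
            issues.append(f"bad value for {field}: {doc_metadata[field]!r}")
    return issues

print(conformance_issues({"documentType": "Invoice"}))
# ['missing recordsClass']
```

Running a check like this against the "killer app" project first, then against each application as it is brought into the fold, is one concrete way to work the adoption in gradually.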


Thursday, December 16, 2010

Records Management Comparison of SharePoint vs. Open Text vs. Documentum

Here's a rough records management functional and design comparison matrix, based on SharePoint 2010's out-of-the-box offering:



Feature | SharePoint 2010 | Open Text Content Server | Documentum 6.x
Document IDs | New in 2010 | Docid in dversdata | r_object_id, i_chronicle_id
Document Sets | Tab in UI, custom page, properties of set and content list | Folders, Projects, Collections, Virtual Folders, Reports, Web Reports, Custom View | Folders, Change Sets, Virtual Docs, and WDK customizations
Auto Tag hierarchy | Library > Content Types | Folder > RM Classifications | Folders > Object types
Term configuration | Configurable, multilingual, terms easily changed | Classifications and categories/metadata | Categories and translations
Content Organizer | A “transparent process” which routes content based on attribute values | Folder Provisioning, RM Classifications | Customization, or a DCM, Taskspace, or xCP solution; Smart Folders; File Plan in RM
Tag doc properties during creation | Transparent while writing in MS Word, term suggestions | Picks up basic MS Word properties | Picks up basic MS Word properties
In-place Records Management | Declare from list | Declare from properties on standard UI | Declare from attributes or dropdown on standard UI
File Plan | Site level, content number, policy, content types | RM Classification on folders, categories, and content | Configuration in RPS or RM
Multi-Stage Policy | Review cycles; apply retention to folders (libraries?) or content types | Record Series Identifiers are configured to handle actions by certain criteria in doc properties | Lifecycle actions, folder inheritance, content type, TBO/SBO
Applying Holds | Search and apply; in place or move | Part of RM module | —
Clustering/failover | Clustered, sticky session | Session is lost when a node goes down | Clustered, sticky session

Wednesday, December 1, 2010

Groupon and How We Work

I’m going to take Daniel Lyons’s Newsweek article “Click and Save,” an introduction to Groupon, to the next logical level. Groupon has subscribers who sign up for a service or merchandise and save according to how many subscribers are interested and the deal that the merchandiser makes. This type of give and take could be applied as a new way of doing work within IT.

So let’s apply this type of methodology to our “agile” or “waterfall” projects in a corporation. The project would start out as an idea or pain point which would be described by the person who has a budget and needs work done. Its details would feed into a sourcing cycle which matches the pain point with subscribers who know how to build/deliver the solution. There would have to be a deal making component which could resolve the processes around the sponsor deciding on who should do the work, the scope, for how much money and time, and for the subscriber/vendor to answer the requests and seal the deal.

So you ask, where’s the manager in this process? There would be minimal need for one. The project-sourcing app is directly focused on the needs of the business and the solutions of the vendors or in-house developers. There’s a certain amount of critical mass that needs to be involved with the whole application for it to work, but this is the portfolio distribution process directly connecting the dots -- no middle person influencing decisions, playing politics, backstabbing, protecting their jobs, etc. Man, I’m not jaded, am I?

So, Gartner, where does this fit into your ECM quadrants? Transactional content management? Could be. There’s a real need within companies to streamline the whole process of sorting out content creation and building applications that resource and process it. Content infrastructure? Maybe. Portfolio management built into workflows that push knowledge from the creative brains to the consumers. We’ve heard all of this before, but what is new is that a small company like Groupon can blend a need for efficiency with the collective motivation of the crowd, cutting away the profits of the middlemen. If this could be applied to the many reasons IT projects fail, wouldn’t that be a step forward?

Friday, November 19, 2010

The IT Cycle of Optimism and Hubris

Once you have reached the twenty-year mark in your IT career, look back, and chances are likely that you’ve experienced a few cycles of optimism from your leadership regarding using the latest software tools to become more competitive.

However, there’s a big difference when the optimism comes from the CEO vs. the CIO. If it’s coming from the top, it’s in the form of high-level goals which are translated “logically” down to the tools to implement them. If the optimism comes from the CIO, the chances of it working are much lower, and that’s because the CIO is throwing tools at a systemic problem that will be eased a bit, but not solved, by them.

We’re at least at the second IT hype cycle with “Knowledge Management” and Enterprise 2.0. As these cycles of optimism hit wide-eyed leaders who were not present during the last cycle, enthusiasm breaks out on the presentation circuit with promises of curing the culture gap of knowledge sharing within companies.

Will SharePoint solve the communication issues between different groups during the lifecycle of ideas, memes, and products? Did eRoom solve the issues in the late 90’s? Haven’t ECM suites tried for the past 10 years to make it easy for groups to share information? Wasn’t email, or isn’t Twitter, going to help us? The point is that we need to go through these cycles to fail and get better at automating certain menial aspects of processing the information in our heads. The governance of engagement will prove to be a challenge for any attempt to fully explore electronic sharing of ideas within a company.

Dr. Siddhartha Mukherjee, talking about optimistic cancer “cures” that prove to be momentary hubris in the long battle to find cures:
“They said well, how can we possibly run a trial on something that we know has got to work? This story carries the memory of the kind of optimism that very quickly tips into hubris, which is so much part of the story of cancer.”

Tom Davenport, blogging a few years ago on Ent 2.0:
”I admit to a mild hostility to the hype around Enterprise 2.0 in the past. I have reacted in a curmudgeonly fashion to what smelled like old wine in new bottles. But I realized after hearing Andy talk that he was an ally, not a competitor. If E2.0 can give KM a mid-life kicker, so much the better. If a new set of technologies can bring about a knowledge-sharing culture, more power to them. Knowledge management was getting a little tired anyway.”

David Weinberger on Davenport’s 2007 blog above:
“But it's reasonable to think that the technology, when taken up and used, will affect enterprises directly and indirectly…”

Friday, November 5, 2010

Flip this ECM Stack

Mike Alsup’s SharePoint and Records Management presentation got me thinking: why not take the traditional ECM application stack and flip it upside down? I don’t mean to be flip, but it’s time to morph ECM into another set of solutions altogether. You’d have storage, records management, and archiving controlled through rules on top. Now think of these applications as goals, requirements, and metadata which make a blueprint for the enterprise. The principle of this flip would be to drive consistency and efficiencies down through the enterprise of applications based on rules.

There’s no such thing as an enterprise application; an enterprise of apps is governed by new mechanisms like Boiko’s entities, which have metadata repositories, identity, and apps which run models of rules, all of which are mutually exclusive, yet related and agile. These islands of entities are related to each other and connected by services at varying levels of complexity. CMIS is focused on interoperability, but is an add-on standard to the existing stacks of ECM. It’s time to unwind the stacks even more and create flexible models for governance, retention schedules, rules like 21 CFR Part 11, etc.

Tuesday, November 2, 2010

ARMA and Enterprise 2.0

Ron Miller is right on when he reports on the ARMA and Enterprise 2.0 shows and how their relationship to each other is getting closer. I agree with most of his analysis, but as far as the responsibility and control of this relationship happening at the content management application level, I disagree. ECM vendors have been trying to rein in this relationship for a number of years now and do not seem to have the agility or will to invest in the fast-moving Ent. 2.0 software realm.

Take, for example, Documentum’s records management offering. Their federated records management (FRM) solution is interesting and heading in the right direction, but falls short of commitment from EMC. The technology behind the solution was created by a third party. Name one storage management company that owns an ECM software suite and has a strong records management solution backed by top-quality professional services. It is in such a company’s interest to make half-hearted attempts at solving one of the biggest issues that legal and information management teams have: how to find content and get rid of it in a rules-based, systematic way. Do oil companies really want to provide alternative energy solutions?

Thursday, October 21, 2010

ECM: The Perfect Storm for Business and IT

The blizzard of ’78 conjures feelings of helplessness and anticipation of the thaw. Everyone knew it was coming, and yet they went to work anyway, which meant lots of folks were caught on highways and at airports. The lack of communication as to the severity of the storm was obvious in hindsight. Likewise, with ECM, the lack of governance and communication usually culminates in a perfect storm of control and blame between the business and IT.

With so much marketing information on “best practice” solutions to legal holds, records management, learning management, business process management, etc., it’s easy for individual business units to fall in love with a solution before IT even knows about it. Good governance would never allow this wild-west atmosphere, but growing companies are usually governance-challenged. A chronic lack of IT resources over time creates easy ways for business units to push their own tools to innovate in areas that IT does not “understand”.

All this back and forth with whining on both the business and IT sides leaves an easy scapegoat: the ECM solution. It becomes easy for the business to exclaim, “The UI is so 1990’s”, or “I can’t find anything”, or “We don’t know what to delete so we keep everything”. Then IT chimes in, “It will take us over a year to upgrade!”, and “Why can’t we just use SharePoint, it’s free!”, or “We don’t have any functional specs, so what does the business expect?”.

The middle ground where requirements meet software functionality and architecture is where the weak link sometimes is. This link is where projects are vetted, resourced, and funded. The problem is that this middle ground is stuck between being not very experienced in the business and not very technical. The middle ground is constantly changing, but can be called portfolio management or business relations. If governance is not at the enterprise level for content management, then project priorities and resources are constantly stressed and projects fail.

Storm Formation

Sales professionals with hidden agendas sell the business on slick demos and solution X.0 capabilities. The business goes to conferences and comes back psyched to use the latest tools, only to find the dull, boring ECM solution in place. The business writes up requirements and gives them to their portfolio manager. The project doesn’t get approved. A year later the business pleads with their boss’s boss. The boss’s boss talks to upper management in IT and a pilot project is born.

History vs. Change
ECM directors have hidden agendas too. They have relationships with vendors, and they know the software functionality. They are comfortable being experts with the system. They want to ride it out as long as possible. Change is too risky and fraught with functional gaps which the business won’t like, but meanwhile the business is secretly hoping for a new system. The license agreement has not expired and the cost to migrate is huge. There are always many reasons not to change if your vision is one to two years out.

Storm Aftermath
The period after the big storm affords some down time; hindsight to revisit any failures or damages sustained. Standards and rules have a chance to be implemented, because there should be opportunists who understand architecture and governance who can exclaim, “I have a way out of this mess”. A new CIO is usually hired and new models are sold to IT. The new models promote ties between requirements and functions, budgets and accomplishments, and business-to-IT communication. The ECM system will most likely morph into something more suitable to the requirements, maybe a new application server (SharePoint) or a narrowing of the “E” of ECM to a system which archives and manages retention.

Tuesday, October 12, 2010

Open Text 2010 Sets Up Defensive Services

Open Text’s marketing of their new 2010 ECM suite and shared services points to four distinct areas of focus: Lifecycle, Process/Transaction, Engagement, and Shared Services. Looking at these stacks, it is clear that OT has set them up to deal with SharePoint’s strengths and weaknesses. The Library Services layer would have been revolutionary 10 years ago; now it’s just another ECM suite trying to be the archive and retention layer in an enterprise.

Also the initial release of ECM 2010 is for Windows 2008 R2 64bit until the 32bit is available. If this isn’t a concession to Microsoft, I’m not sure what is. Oracle gets a boost too with the 11g requirement.

“Managing content and metadata in a consistent manner” is such a cliché at this point.

Lifecycle (“strong” hold): these are all set to be broken out as services.

  • Document Management – lots of potential here, but each of these need to be services…
  • Records Management – classification layer on top of document classes. One of the biggest potential services that could be developed, but currently is very embedded in the core CS product.
  • Rights Management – a service which is used to encrypt/decrypt content.
  • Digital Asset Management – “I want YouTube”
  • Archiving – this needed to be highlighted a long time ago. Flexible storage allocation was huge before EMC bought Documentum. Now everyone is following their lead.

Process/Transaction (decoupled service from content server for enterprise implementation)

  • Business Process Management – separated from the core content server, this service is offered as a single instance which will integrate with many other repository services, including the content server.
  • Capture and Imaging – This is separated out as well with connectors to repositories.

Engagement (up for grabs, here you go SharePoint, please don’t take anything more…)

  • Collaboration: is this a joke?
  • Social Media: right?
  • Web Content Management: hmm, I don’t think so.
  • Rich Media Management: If I have to.
  • Mobility: get in line.

Shared Services

  • Library Services (too little, too late)
    ECM suite, SAP, SharePoint, File system, Email system
  • Enterprise Process Services – BPM above

So what would be a better alignment of services to appeal to experienced and frustrated folks? How about tossing out the authoring and collaborating side until Microsoft succumbs to open-source formats, and focusing on tagging, classifying, and storing information. In other words, focus on where the current pain points are: they are not at the document level anymore, they are at the information architecture level, and SOA projects are all custom once the design work is done. Hmm, maybe Open Text can open itself up to services that really help information get organized instead of trying to compete with Microsoft in collaboration or publishing to websites…

Tuesday, October 5, 2010

What happens when ECM is renamed to DMS?

When IT governance, a steering committee, or a strategy team renames a whole IT group from ECM to DMS (document management system), it is no small change. This change has many ramifications: changes to application software, data architecture, integrations, requirements, functionality, etc. A divide-and-conquer rationalization is presented: “let’s look at all requirements from the standpoint of: do we really need this?” OK, but it’s not that easy.

What is lost?
Talent: it takes time to dismantle an ECM system into smaller, more focused parts, and personnel with attention deficit disorder will get antsy and want to jump ship.
Sense of cohesiveness in the lifecycle of content creation through disposition: application silos will start to appear.
Unlimited playground for the business to experiment in: the business will have more work to do to figure out what they really want to do and get measured by it.
Central administration of users and groups, access control, attributes, taxonomy, retention schedules, search, backup.
Source of record: how do you keep track of and archive content that is mission critical and could be audited many years hence?

What is gained?
Broader understanding of the content lifecycle: because of more integration requirements, the business will have to know exactly where their content lives and how it is disposed of.
Divide and conquer mentality: separate groups tackling the tough business communication issues.
Better governance: the strategy for cohesiveness across all the disparate applications will now have to be made explicit.
Better data architecture
Better standards definitions and adherence: assuming there are strong governance and data architecture, standards should thrive and push integrations to new heights.
A distributed system which reduces the “too big to fail” issues of a centralized repository system.

Recycling the Same Issues
To what extent are we recycling the same content architecture, management, and publishing issues? Collaboration issues will be deferred to SharePoint, workflow will be spread among all of the applications, archiving will be further muddled, group access control will be splintered further, and regulatory paperwork will increase with the number of new applications.

Content and Information Architecture Design
I have hope that this will be the golden age of content architecture frameworks, that we will witness true policies governed by blueprints which can withstand the issues of silos of information, de-normalized data, historical metadata, and the like. The framework would have to be malleable, yet strong. It would have to be clear and forward thinking to lay the groundwork for all content and information evolution to come. OK, I can dream, can’t I?

Friday, September 17, 2010

Embedded (shadow) IT: Where are you?

If you have ever had a position in IT where there was a “dotted” line in your responsibility to another group, you’ve worked for embedded IT. If you have developed an application on your desktop to support your team, you have been embedded IT.

CIOs always want to centralize most of embedded IT to achieve the cost savings of common services, infrastructure, and architecture. I’ve been embedded and centralized, both of which have their faults.

Here are some aspects of embedded IT that could be potentially missing from a centralized model:

Well understood business requirements: ability to read between the lines
Trust in your teammate’s ability to deliver
Efficiency of work and play is a given
Hard work is observed and appreciated by the business
Governance is organized and strong: no dotted lines needed
Content is fully described to suit the project at hand
Sneaker net workflow works well

Here are some aspects of centralized IT that could be potentially missing from an embedded model:

Automated workflow
Consistent metadata and values which describe content
Governance which is tuned into the goals of IT as a whole
Consolidation and sharing of duplicated services
Lack of trust in your teammate’s ability to deliver on a project due to being spread thin
Business requirements that are read literally

Monday, September 13, 2010

Creative Destruction of SharePoint

SharePoint is trying to transform, or split up ECM into pieces, similar to what Schumpeter referred to as “creative destruction”. Schumpeter’s use of this term implies that in order to have innovation the current paradigm has to be superseded.

What aspects of ECM will be crushed in order for SharePoint to innovate? Will ECM actually be destroyed? I have doubts. The main “innovations” of SharePoint over ECM software are based on shortcomings of the software suites, not the solutions or platforms. SharePoint is not as much innovation as it is a fix to chronic ECM problems:
  • A more unified architecture, as opposed to the patch work of typical ECM suites
  • Easier to configure and build sites from templates (unless you have to customize, which is most of the time)
  • Ease of Use: by definition SharePoint is easier to learn and use, unless your thousands of users already use another software suite.
  • Already in the house as an OS and email system, thus foundationally integrated, which is the key business driver for Microsoft-based systems.
  • Less expensive, in the short run, but the long run expenses will depend upon reducing the need for customizations and traditional library management services.

ECM suites will have to develop and innovate on their strengths, one of which, ironically, is slowness to change with the times. Most small mistakes made in implementing content management systems were made on a continuing basis for cost and lack-of-direction reasons, not because of obvious flaws in the software. The market of ECM has created the dinosaurs of ECM suites, not the other way around. All large enterprises, which own the largest number of licenses, cannot and will not move at social media speeds.

Social media has many more iterations to go through in order to develop the depth of ECM requirements. SharePoint will run in parallel with the ECM suites for some time to come, maybe nibbling off bits of functionality and previous integrations. In the meantime, ECM vendors will hopefully rewrite their core software to be more open source and model their applications and added value on innovations rather than band-aids.

Litigation Response as Wedge in IT Issues

Let’s say you have a litigation response that the legal department has mandated. It becomes clear that the overall cost to outsource the discovery of self-organized content repositories with limited metadata and historical changes in metadata values, let alone file shares and email, is staggering.

The Legal Hold project revealed issues in many other areas of the company, namely that there was no central data management architecture, records management was not automated enough, and central governance of the embedded IT groups was ineffectual.

In this case, the Early Case Assessment was controlled by outsourced personnel instead of an automated tool. The mandate acted as a wedge that opened up the heart of the issues that almost every enterprise has to some degree: namely the lack of coherent standards which span all applications and are well understood and governed.

Tuesday, August 31, 2010

Floodlight vs. Spotlight in eDiscovery Solutions

I had the privilege of being entertained and enlightened by a few recent demonstrations from leading eDiscovery/legal hold software vendors. After listening and looking past the smoke and mirrors, each vendor revealed that their core competency remains their strength, and eDiscovery is an extension or add-on rather than a full-fledged solution. The “floodlight” approach of these vendors to eDiscovery was not achievable given the current state of the offerings.

A true eDiscovery/legal hold vendor has yet to emerge. The add-on vendors flooded the market by buying the pure plays, but couldn’t have imagined the integration difficulties these add-ons would create. The issues behind efficient eDiscovery are squarely in the business’s camp, not IT’s. Not one of the vendors mentioned the integral importance of common and historical metadata management. If the doc is not described well enough, good luck discovering it. If records management is not functionally incorporated into the business’s daily routine, docs will not be disposed of according to compliance mandates.
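As a toy illustration of that point (all field names and values here are hypothetical), a hold selection that keys off a metadata field simply never sees the documents where the field was never populated:

```python
# Hypothetical corpus: discovery hinges on whatever metadata actually exists.
docs = [
    {"id": 1, "custodian": "j.doe", "matter": "acme-v-corp"},
    {"id": 2, "custodian": "j.doe"},             # matter was never recorded
    {"id": 3, "custodian": None, "matter": "acme-v-corp"},
]

def hold_candidates(docs, matter):
    """Select documents for a legal hold by the matter tag alone."""
    return [d["id"] for d in docs if d.get("matter") == matter]

print(hold_candidates(docs, "acme-v-corp"))  # [1, 3] -- doc 2 is silently missed
```

No amount of indexing horsepower fixes this; the gap is in the description of the content, which is the business’s problem long before it is a vendor’s.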

Search Vendor
Core: Index and Search Results
Platform: Most
Hold: Export out of repository to file system
Vision: Nice long-term story if affordable
Agent: stealth/regular desktop agents
Connectors: Most applications with varying levels of customization needed
Index: Best in class, but dependent on the source content's metadata accuracy
Collection: kind of fuzzy, given that exports from the repository are done without built-in abilities to validate source content accuracy, i.e., ways to test discrepancies.

Forensic Vendor
Core: chain of custody of collection
Platform: Windows
Hold: Office docs on file system by using extra control files for tracking
Vision: Deal with Windows file system content
Agent: no client agent
Connectors: Windows and SQL
Index: Dependent on other’s indexing
Collection: Searching and collecting are secondary to browsing and selection.

ECM Vendor
Core: Workflow, doc lifecycle, storage management
Platform: Most
Hold: Export out of repository to file system with XML metadata file. This potentially removes the source context, auditing, and access control information from exported content. What about a hold inside the repository?
Vision: Partner with eDiscovery solutions and let them deal with the issues.
Agent: stealth/regular desktop agents
Connectors: Many, but disappointing connector to own system
Index: Does not compare to robust search engine eDiscovery product
Collection: Built-in export is marginal out-of-the-box and third party tools need customization for legal department export and metadata requirements.

When will eDiscovery solution providers design the foundation of their offerings around metadata/records management prerequisites and controlling the chain of custody? Of course this is not easy for any enterprise to achieve, but focusing a “spotlight” on specific areas and building the solution each area at a time will get this done. This “spotlight” approach will force the business unit that is most motivated to solve their organizational issues around RM and metadata. The policies and procedures can be created after the first spotlight is finished. Subsequent spotlights will be dependent on the quality of governance and policies in place.

The spotlight approach would show the strength of pure play solutions over floodlight enterprise solutions. The forensic vendor would shine when testing Windows file system content only which would be a bad start. The holy grail of all of these solutions is the ability to introduce a layer (like RM solutions) where the intelligence gained by indexing, searching, and collecting could be written back on the content for better future classifications and entrance to the semantic qualities of future systems.

Saturday, July 10, 2010

Single Point of Failure

I’m going philosophical for a moment here. Why do we have a technical, systems-oriented concept of “single point of failure”? Aren’t we all “single points of failure,” and isn’t the recovery from our absence proportional to how much we shared of our experiences? The risk of losing a connection, process, service, invocation, etc. is proportional to how much we know about the system’s interactions and what their limits are.

Redundancy is nothing but an attempt to backfill a single point of failure. Redundancy is really just a double point of failure if you think about it. How many points of redundancy are really risk free? If you have one hot backup system to fail over to, then what do you have if that fails? It’s better than nothing, but not fail-safe. Enough of this sophomoric banter; let’s try to figure out a list of single points to check and test.

Any call for failover and redundancy is mandated by a Service Level Agreement between the business and IT. This agreement has flexibility factors built into it and is most likely an extension of a more comprehensive regulatory or industry standard. It usually happens when an old policy didn’t work according to plan and the C’s freaked out.

Can you perform an in-place upgrade of a system without downtime? If not, what services in the application stack are not redundant enough? Here’s a common list of systems and issues to consider:

Application Servers
Should be clustered for failover; however, do sessions really fail over gracefully, or do Users get errors during the failure?
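Whether a cluster actually saves the User depends on where the retry happens. A minimal Python sketch of client-side failover (everything here is hypothetical, not from any app-server API):

```python
# Hypothetical sketch: wrap a call so it retries once against a
# secondary endpoint when the primary fails. If session state is not
# replicated between the two, this retry is exactly where Users see
# errors instead of a graceful failover.

def with_failover(primary, secondary):
    """Return a callable that tries primary, then falls back to secondary."""
    def call(*args, **kwargs):
        try:
            return primary(*args, **kwargs)
        except Exception:
            return secondary(*args, **kwargs)
    return call
```

The point of the sketch: the mechanics of failing over are trivial; preserving the session across the failover is the hard part.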

Database Servers
Should be set up with a hot failover like an Oracle RAC system. This would allow for backup recovery. Also, as storage-expensive as transaction logs are, in some cases they could be a life saver.

Service accounts: Local vs. Domain
I understand that applications need to be secured between services and file systems, but allowing only domain accounts for an application sets up the potential for all of the services run by one account to fail together if that domain account gets locked out, or if Active Directory goes down. Local admin accounts serve the same purpose and in most cases can be substituted for the domain account.

Storage Backup/Restore
Storage itself is relatively straightforward; backing it up and restoring it is a whole different story. Backups are usually done on a rolling basis: full on the weekend, incremental during the week. But what happens when a restore from tape is needed? The normal backups get delayed because the same system is used for both. Do you know how long it takes to restore from the backup? Are the backups redundant? How close is the backup machine to the application? Is there a whole DR off-site solution?

Wednesday, July 7, 2010

Free Export/Import Tool Registered tables, groups, ACLs, and Users (for DCTM)

When it comes time to design the community, information, and structures of the repository, these Java tools will come in handy. They facilitate deployment between environments. They also allow for editing in Excel and porting back into the repository.
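For readers who haven’t used them, the round trip these tools automate can be sketched in a few lines. This Python version (function names and the CSV layout are mine, not the tools’) shows the export-to-spreadsheet, edit, re-import cycle for groups:

```python
import csv
import io

# Sketch of the export/edit/import round trip: dump group memberships
# to CSV (editable in Excel), then read the edited rows back for
# re-import into a target repository.

def export_groups(groups):
    """groups: dict of group_name -> list of member names."""
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(["group_name", "member"])
    for name in sorted(groups):
        for member in groups[name]:
            writer.writerow([name, member])
    return buf.getvalue()

def import_groups(csv_text):
    """Rebuild the group_name -> members mapping from edited CSV."""
    groups = {}
    for row in csv.DictReader(io.StringIO(csv_text)):
        groups.setdefault(row["group_name"], []).append(row["member"])
    return groups
```

The same flat-file pattern applies to registered tables, ACLs, and Users; only the columns change.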

Wednesday, June 23, 2010

Forcing Structure by Leveraging a Crisis

Leveraging a crisis happens all the time in politics and the financial industry. Why not leverage a crisis for the sake of organizing content? Let's face it: unstructured content will remain as such unless a fire is lit under the business units responsible for making the mess in the first place. Here are five possible avenues to pursue if the opportunity arises. The idea here is to build a foundation of detail and context, the "who, what, and where" of Zachman's framework. This will provide a foundation for applying certain tools to go to the next levels of acting on the content in ways that structure, permission, and dispose of it.

Let's say there's a lawsuit against your company and the legal department is breathing down IT's neck to find, collect, and produce information and content pertinent to the case. You're not too happy with the 3rd party vendor who is helping collect the information. The business is somewhat removed from any searching activities. Most of the burden is on IT. But why? Who created this chaos called "content management"? What you could do is use the litigation request as a way to push the work of detailing content and context back to the business, under the guise of needing this information for legal counsel.

When cost cutting is running its course through your company, use it to apply a chargeback structure where the business units bear the burden of paying for all aspects of IT. For example, if you inventory your ECM user groups, I bet you'll find a few groups which use larger percentages of the system in terms of licenses and storage capacity. When these groups feel the pain of actually paying out of pocket for IT's services and storage, they will quickly want to figure out what the records management policies are and dispose of content. They will also want to structure their content to better understand what they have and who owns it.

If there is a breach in the ECM system, unauthorized access to sensitive information, the business will want to clamp down on access control. This will prompt the business units to figure out what they have and who can see it. It would be a good time to apply metadata and better folder structure to the content as they are applying ACLs.

IT Capabilities: Limits of DR and SLA
The next time a system goes down because of backup time during an upgrade, or while recovering from a flooded data center, the business will be ticked off, but receptive. This could be an opportunity to mandate standards of ECM organization and use.

If the work requests of the ECM system are backlogged and the business is complaining, try pushing an agenda of redesigning and restructuring the content and information architecture. Too many customizations to a system could mean it's time to upgrade and apply standards that help integrations and interoperability.

Friday, May 28, 2010

Unbundling the silos within ECM

About 10 years ago, all you heard about in IT was consolidate, consolidate, consolidate. Back then consolidation meant economies of scale: bring all websites together and save money by using fewer resources and less infrastructure. This really worked in the short run, but news flash: the silos are back; they never went anywhere. The silos I’m talking about are brain trusts of business units that work as a team but don’t really share. The walls of the silos are still there; they are just within the ECM now. ECM itself has built its own silos: silos of services.

So I was reading an article entitled “How to Save the News” in the Atlantic. It describes the downfall of the traditional newspaper revenue model at the hands of the Internet and Google. But Google says it’s on the newspapers’ side. Google’s assessment of journalism is that “Bundling was the idea that all parts of the paper came literally in one wrapper—news, sports, comics, grocery-store coupons—and that people who bought the paper for one part implicitly subsidized all the rest…” The Internet is forcing newspapers to unbundle their sections, and thus their cash cows.

The idea of unbundling got me thinking about a platform in the enterprise that is allowing business units to manage their content in “silos” without worrying about the cost anymore. The platform is doing to ECM what Google and others are doing to the newspaper industry. The platform integrates well with the OS, productivity software, and the email system. The platform offers most of what ECM offers. The platform licensing model makes sense. The platform is SharePoint.

OK. So how do ECM solution providers unbundle the stack and content that has taken years to design, develop, and deploy? Answer: one business unit at a time. Web services and CMIS will allow for the slow migration away from the great “consolidation” ideas of the early 2000s. As migration happens, some of the standards and best practices that were too expensive to implement in the past will be implemented. Business units will be able to do what they want in their information “silo”, but will be using standards of metadata, taxonomy, security, business process, and records management. Farming for knowledge will be possible by virtue of these standards. The days of the ubiquitous file share will come to a close; the new platform that will be taken for granted is SharePoint.

ECM vendors will split up their services and sell them separately. For example, workflow services are in desperate need of enterprise integration; ECM software can and should pursue this. Also, records management should be an integrated service with SharePoint, not the underlying repository. Unbundling and expanding ECM services will be key to ECM’s software strategy.

Thursday, May 27, 2010

Decoupling ECM

As your enterprise content migrates from one repository vendor to another, you will feel the pain of not following decoupling standards that I’m sure were implemented, but not fully. By “not fully” I mean that in 2001 you tried an interoperable integration only to cancel it because the APIs were limited and the performance was slow. To add fuel to the fire, CMIS is being pushed on us, which means that if you’ve been following standards of information architecture (metadata, linking from external sources, training application integration, workflow, taxonomy, and databases), you’ll be fine. However, who has been able and/or could afford to follow all of the best practices? Anyone?

Below is a list that starts to detail what needs to be looked at when thinking about decoupling and preparing for CMIS or a migration.

Is the object model and its attributes designed well? Are there issues with attributes that have one name but are used for other functionality? Will system date attributes like creation date and modified date trigger any unexpected actions? For example, if you have a retention schedule based on creation date and you migrate your content (creating a new creation date), what’s the real creation date?
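One defensive pattern, sketched below in Python: before migrating, copy the system creation date into a custom attribute so the retention math survives the move. The custom attribute name (original_creation_date) is an assumption for illustration, though r_creation_date is Documentum’s actual system attribute:

```python
from datetime import datetime, timedelta

# Sketch: compute a disposition date from the preserved original
# creation date, falling back to the (post-migration) system date only
# if the migration never stamped one. "original_creation_date" is a
# hypothetical custom attribute; r_creation_date is the system one.

def disposition_date(doc, retention_years):
    base = doc.get("original_creation_date") or doc["r_creation_date"]
    return base + timedelta(days=365 * retention_years)
```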

Links from a website via a URL that points to content in the repository will most likely break during a migration. Portlet integrations that use APIs to query content in the repository will have to be checked. Content that is published to a separate website will have couplings with attributes from the repository, which will break.
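If the migration produces an old-ID-to-new-ID map, the link fix-up can at least be scripted. A rough Python sketch (the objectId URL parameter is a placeholder of mine, though the 16-hex-character format matches Documentum’s r_object_id):

```python
import re

# Hypothetical sketch: rewrite repository object IDs embedded in page
# markup, using an old-id -> new-id map produced by the migration.
ID_PATTERN = re.compile(r"objectId=([0-9a-f]{16})")

def rewrite_links(html, id_map):
    def repl(match):
        old = match.group(1)
        # Leave unmapped IDs alone so the breakage stays visible.
        return "objectId=" + id_map.get(old, old)
    return ID_PATTERN.sub(repl, html)
```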

Links to content (SOPs) as training material that is triggered by changes in modification dates will have to be redesigned.

Relationships of folder structures with attributes will have to be reviewed.

Web services
Any dependencies with attributes and content ids will be broken.

Database Integration
Redesign any dependence the integration with the repository database has on system-generated dates.

Friday, May 14, 2010

Free DCTM Object Model Tool

As a developer and solution architect, you know it's a pain to document the object model and attributes, deploy it, and then document it again after minor changes (and there will be changes). This tool allows you to select any dm_sysobject child type and show its attributes and up to four levels of that object's child objects. It comes in handy for visualizing a repository's object and metadata structure.
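The traversal such a tool performs is easy to sketch. In Python, with the type hierarchy modeled as a plain dict (a real implementation would read types and attributes via DQL or the DFC):

```python
# Sketch: walk a type hierarchy from a chosen dm_sysobject subtype
# down a fixed number of levels, collecting an indented outline of
# each type and its attributes.

def type_tree(types, root, max_depth=4, depth=0, out=None):
    """types: dict of type_name -> {"attrs": [...], "children": [...]}"""
    out = [] if out is None else out
    node = types[root]
    out.append("  " * depth + root + " (" + ", ".join(node["attrs"]) + ")")
    if depth < max_depth:
        for child in node["children"]:
            type_tree(types, child, max_depth, depth + 1, out)
    return out
```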

Wednesday, May 12, 2010

Legal Discovery Drill

If you ever want to really drive home the need for records management and info architecture in your enterprise, run a discovery drill on the production systems. Drive it from Legal and send in “auditors” for outside counsel and work through the IT processes, testing, term searches, extracts and exports, org charts, the works.

At the least, it would tick off the admins and DBAs who have to do the grunt work. But it could open up all sorts of discussions around the right ways to structure metadata and content such that the discovery process is more straightforward and less risky. Preserving content is half the battle; the other half is finding it.

There will always be a divide between managers who want to identify and separate content as “records” and those who recognize that the process of record identification is too much work and that all content should be considered part of the record, albeit in a “big bucket” sense. For example, email that is not considered a record by a small group of records managers and is deleted could later be deemed extremely relevant to an investigation, and a judge could decide to issue an adverse inference instruction, meaning that the jury should assume the worst possible conclusion about the email’s contents. The risk of this is weighed by counsel…

Friday, May 7, 2010

Mapping your Records Retention Schedule

OK, so you had a consultant meticulously categorize and detail every type of record your company has and how long to keep each one. The consultant looked at all of the paper documents for the past 20 years which have been stored at Iron Mountain. This, you thought, was the source of all records. But did the consultant cross-reference these “best practice” categories/file plan with the actual ECM repository(s) in use today? Did the RRS get vetted by a representative sample of business groups from the company? Is the RRS in a form that can be easily modified over the years it might take to implement it, and through the constant tweaks made on an ongoing basis?

Let’s assume that some of these questions were considered and that you have a reasonably well put together RRS. Now let’s look at the RRS to figure out how to apply it to an ECM application. Mapping the RRS columns of information may not be that obvious at first glance. The column headers will be generic like “record class name” and “customer record class code”. Which one should be what the Records Officer sees and what should the Users of the ECM see? The “code” is usually distinct thus it should be used as the File Plan number and Record Series Indicator. The File Plan corresponds to the folder or classification and the Record Series Indicator corresponds to the retention schedule object.

When you see the columns “Event” and “Period”, these correspond to the event that triggers the status date and how long to keep the content in that state. For example, if the period states ACT+INA 6, this means keep the content while it’s active and dispose of it once the content has been inactive for 6 years. The event could be when the content was created, or when the content is superseded. The schedule is attached to the Record Series Indicator and might have multiple schedules per series for ACT, INA, ARC, CLO, etc.
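Period codes like ACT+INA 6 are regular enough to parse when loading the RRS into configuration. A small Python sketch, assuming the format shown above (state codes joined by +, followed by an optional year count):

```python
# Sketch of parsing an RRS "Period" code such as "ACT+INA 6":
# keep while active, dispose after six inactive years.

def parse_period(code):
    states, _, years = code.partition(" ")
    return {
        "states": states.split("+"),
        "years": int(years) if years else None,  # None: no fixed period
    }
```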

The “description” section might have extra information which could be useful for classification terms or record series types. It should also be added to the configuration to help describe both the File Plan and the Record Series Indicator.

Do you have content types and lifecycle states?
Doc type, status, and status date are essential to configuring your RRS. These may be obvious based on how the ECM object model was designed or how workflows push content through their states; however, this assumes that all your content is processed this way and, most importantly, that it is consistently processed this way. A tall order for even the best designed repositories.

User Adoption
As you map and configure your RRS, keep User adoption in the back of your mind. Think about seamless use for tagging records. Should all content be a record, or just some designated by the Records Officer? My opinion is that all content needs to be disposed of eventually, so all content is a record. There are different levels of record importance to the company, and records will be retained, monitored, and secured accordingly.

Wednesday, April 28, 2010

ECM Acquisitions: the Paradigm is Shifting

Hearing that Open Text bought Burntsand, a professional services company with expertise in MS SharePoint (MS Gold Certified) and EMC Documentum (Signature Partner), tells you the paradigm is shifting... What is the strategy behind this type of acquisition? Take out the legs of SharePoint proliferation? Or partner with the competition now to ease the eventual shift to lighter-weight, federated repositories with agile, modular services of functionality in the future? Or is this a Canadian acquisition among friends? It’s just strange at first glance.

As the Facebook-like mashup wars continue, this may be one of many interesting acquisitions. How can a traditional ECM repository layer be repurposed into a service layer that underlies SharePoint as a frontend? A fully integrated (not EMC’s SharePoint web parts, or OT’s CLM or Wingspan’s Docway) backend, maybe a “faster” performing CMIS? Hey all the pundits said 2010’s the year of CMIS…

One obvious synergy is the need for OT to integrate its whole suite of applications and solutions with SharePoint or get fully cut out of the picture in the next five years. EMC does this sound familiar? Did you hear the shock and awe salvo being launched when SP 2007 was released? Can you see the rockets heading toward Documentum as SP 2010 is getting hyped?

As these huge suites of software get split and repurposed, ECM as we know it will evolve into Enterprise Content Services, ECS. Content will remain as king, it’s the “Management” that will be replaced by agile services which when mashed up will be greater than the sum of its individual purposes. Plug and play like iPhone apps. What will be the platform for this: SharePoint.

Monday, April 26, 2010

Some Tensions of ECM

Navigation vs. Search

Gerry McGovern says it well when he says that navigation is just as important as search: navigate first to a sub division of content, then search the index of that section. Some repositories have no rhyme or reason to their folder structures: governance and standards were nonexistent during the proliferation of folders and content. This happens when the software product was oversold and the budget for rollout planning and information architecture was pilfered for the sake of getting the ECM solution deployed and stable. The same groups who were negligent are now forming strategy groups to figure out how to reorganize the navigation and content, as well as fine tune search results. Sound familiar? The people responsible for the mess are now tasked with cleaning it up. Do they understand metadata standards? I hope so. Will they decide to reclassify existing content, migrate and tag content to a new repository, or design virtual folders? They’ll have to start somewhere.

Legal Preservation vs. Information Architecture

Understandably, the Legal department wants to limit the amount of metadata to as little as possible to reduce exposure to liability. The Records Manager wants to tag content according to preservation rules and schedules. The Users want to be able to find their content by typing in the metadata values they are accustomed to using to describe their content: Doc Type, Area, Owned By, Modified Date, Reviewed By, etc. These are fundamental to all repositories, but Legal would prefer to cut them to the point of obscurity; the least amount of description equals the least amount of risk.

Agile Business Process vs. IT Stability

Users, trying to keep current with their changing business processes, are constantly pressured to modify their workflows. Meanwhile, the IT department cannot keep up with all of the change requests in a timely fashion. This creates a constant battle of change management between the users and the supporters of the system. Support is typically understaffed to fully handle all of the necessary changes in an agile and stable way. Also, architects usually consider change management and support an afterthought to the main design of the repository and integrations.

Friday, April 16, 2010

“Too Big to Fail” Resonates with ECM

The “Enterprise” part of ECM conjures up visions of comprehensive coverage of content, metadata, and economies of scale. Like our “too big to fail” financial institutions, some ECM systems are too large and cumbersome to deal with lack of metadata standards and content hoarding that exists in many large corporations. Can an enterprise afford to bail out a monolithic ECM system when it eventually fails? Sooner or later the system will need to go through a major upgrade or a migration to another vendor. Because the system is centralized, it will cost a lot of money and resources to plan and implement this transition.

Federating ECM
Let’s make a distinction here between having one monolithic repository and implementing agile repositories which are small and more focused on “silos” of business unit functionality and “case” type processing. “Silos” of information have long been wrongly accused by vendor marketing of being the issue behind inconsistent business process and integration. The real issues are lack of governance and enterprise architecture reviews of applications. This is why SharePoint is infiltrating: it fills the need for business units to create their small silos and integrate them with overall standards of the enterprise. Of course, the lack of governance and architecture in SharePoint deployments will result in the same issues of “Silos” with a capital “S” that vendors have been so successfully selling against.

Change Management
If change management is not considered up front during the design and rollout of the ECM system, the emphasis on standards of metadata and integration will be pushed out to later phases. As the ECM system fills with process and content without policing the expansion of content, metadata, and process, the cycle of complexity of these “internal” silos of information continues. This comes to a head when the system bogs down or the centralized resources cannot keep up with the business’s demands for new functionality.

The Cost Effective Way
Typically, IT will focus on hardware and software upgrades to make ECM faster or give it more capacity: add more machines to scale it, and the biz will be happy with the speed. Hire offshore resources to build workflows. IT has to control costs by centralizing content management; enterprise architecture traditionally puts ECM in a storage box; metadata is controlled by silos of business. This is part of the overall issue with ECM. There’s a gulf of misunderstanding between IT and the business. Save money by consolidating, then blow tons of money in four years trying to upgrade or migrate. Does this make sense? “The cost effective way” is a myth with a short-term vision that ECM IT managers tend to buy into.

I think of UIs and add-ons as the equivalent of financial instruments like derivatives. When a vendor introduces a new look and feel and different functionality, this is an attempt to create the illusion of a new application. But in reality, the underlying information is still the same. A new UI cannot mask inefficient governance and a lack of regulations. For example, as DCTM’s Webtop was waning in popularity, EMC decided to create Taskspace. This UI was promoted as an easier way for a business power user to configure UIs to suit their process needs. However, Taskspace relies on sound information architecture and group/permission structures. If these aren’t implemented, a vendor will promote creating a whole new info architecture next to the existing ECM one. So now we have two different, competing metadata structures. Professional Services will promote the new architecture as being for case management: leave the old one for regular content. Can you see where this is leading?

Default Swaps
In ECM, these are like assurances that previous failed attempts at governance and info architecture will be covered. If one project fails, we’ll buy the latest technology to swap in for it. Enterprise 2.0 will solve the current issues regarding lack of business agility and limited IT resources. Don’t think so. Application default swaps are common and only push the issues into the next cycle of gadgets.

Credit Ratings Companies
Lastly, let’s look at Gartner and the research companies that put ECM apps in their quadrants. They slice and dice an application’s functionality, technology stack, scalability, etc., but this obfuscates the issues underneath the hood of a company’s ailments. What good is the best enterprise search application without consistent metadata and access control standards in place enterprise-wide? Is any part of an ECM research company’s revenue derived from the vendors? Indirectly? Hmmm.

Monday, April 5, 2010

eDiscovery Solution Drill Bits are Dull

After reading Reed Irvin’s 10 Steps to Litigation Readiness in the March/April Infonomics, I felt like I was being glossed over with good intentions and a sense of sales urgency, and I was annoyed at the same time. When will Infonomics stop writing up the steps and start dealing with the real, fundamental, complicated issues in enterprise content, process, and people? I know it’s easy to poke at articles like this, which serve as an introduction to litigation readiness and try to weave in experience as well. The issue I have is that it is written by a promoter of a certain product. The writing expands on the core functionality of the software and services his company is trying to sell. It’s generic for the purposes of a “drill baby drill” mentality meant to sell services.

Here are my comments on the “10 Steps”:

First of all, where are some of the new prerequisites in this piece for effective enterprise litigation? Put an “effective” in front of all of these…
  • Metadata Repository
  • Enterprise Architecture
  • Enterprise Search/Index
  • Identity Management
  • Content Architecture Standards
  • Integration Services
  • Role based Retention Schedules
  • Enterprise Process Management

These prerequisites are essential to defending a lawsuit in court. Imagine investigating a crime without knowing how to drive or read a road map; you wouldn’t even know what type of crime occurred. Are the above not included because they haven’t been developed yet into the toolset? Or because they are “too technical” for the readers? Or not focused enough on litigation? I hope not. For example, eDiscovery products assume a great deal is working effectively in the above technology areas. Tell me one company that has IT running all of these initiatives flawlessly…

Second, the big-bucket-then-smaller-buckets approach is not addressed here because it negates much of the need to create an inventory and automate retentions and dispositions at a detailed level. If you don’t need to tag and control all of the insects in the jungle, then why would you need our tools? If you could start with a big-bucket compliance policy and slowly identify and weed out the more sensitive content, wouldn’t that make more sense? When you see an ant in your kitchen, do you have to identify it to decide whether you are going to step on it?

Datamap: to make legal search easy, determine data sources and figure out the who, what, and why of content. This is all good, but needs a few more details on potential pitfalls to round it out. One huge issue is the legal department itself. Legal wants the map, but also has to make sure that the vendor and the software being used are vetted. This means that the process of getting approval to actually use the tools might take a lot longer than anticipated. Having an effective enterprise metadata repository, architecture, and index/search in place would greatly benefit the datamap effort. And vice versa: the datamap would open the eyes of IT to the need for enterprise-level processes and frameworks.

In step 9, Reed mentions having a flexible process which is spot on, but there is no discussion of historical analysis of the data and process in order to develop a framework for change. It’s not enough to say there will be a periodic audit of the map and processes, you have to know what to expect based on the company’s cultural changes in the past, present, and future.

Step 2 is a good start but needs to be tested with scenarios. These scenarios are just as important as the policies. Using a Zachman framework may help with figuring out the details of the whens which are not covered here. In other words, what is the roadmap of this readiness? What are the different ways to roll this out? Also, access control policies and identity management are components of the policies which are usually left for the IT department to figure out, but should be detailed up front.

Getting buy-in and executive support for Step 3 is right on. This is absolutely necessary and should be step one. However, the legal department as a sponsor may not have the clout needed to push the whole agenda through the organization; secondary buy-in from governance groups would also be necessary. What would help tremendously is a previous litigation, or a few drills that permeate the whole company and highlight the need to get the house in order and, more importantly, the huge manual-labor costs that could be saved.

Step 4 and Step 7 are at odds with one another: on the one hand, know your users; on the other, take them out of the process. Adding Step 6 (automating the process) here, what needs to happen is the development of an automated process for preservation and disposition which is seamless to end users but is very clearly understood and vetted by them. One way of doing this is to create a pit stop mechanism which implements these policies at the integration level, not the UI level. Also, if there’s an IT compliance department in your company, why not locate the process of applying retention there and fold it into the existing steps for adding storage to applications? A business partner within an enterprise already has to justify adding storage, so why not add a step detailing the retention schedule for the integration service?
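The pit stop idea fits in a few lines; everything below is a hypothetical Python sketch, but it shows the shape of enforcing preservation at the integration level rather than in the UI:

```python
# Hypothetical sketch: disposition flows through the integration
# layer, which checks legal holds there, so the policy is invisible
# to end users but enforced on every call path.

def dispose(doc, repo_delete, holds):
    """Delete doc via repo_delete unless its id is on legal hold."""
    if doc["id"] in holds:
        return "preserved"  # on hold: disposition silently deferred
    repo_delete(doc)
    return "disposed"
```

Because the check lives in the service layer, every client (UI, workflow, batch job) gets the same preservation behavior for free.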

Step 5, Build your team, should be a framework of roles not the actual team members which will form and disperse depending on the type of suit against the company. The in-house IT folks will need to shift priorities and delay other projects. This could create a wave of ramifications through the interrelated projects that are in progress.

The bottom line is that no one product suite or solution can possibly handle the myriad of issues involved with eDiscovery. The best case would be to develop scenario-based approaches and frameworks with the knowledge that a hybrid solution will inevitably be developed. A comprehensive datamap could take years and is really an exercise in futility. The prerequisites listed above are necessary to get to the point where a map will make sense. The map will have to be a framework more than a blueprint; the exact specifications will change based on the litigation at hand. Building integration services that regulate retention policies and control content processing will make dealing with the human intervention factor more straightforward and less of a burden on “repeatable” processes. Building and morphing a set of litigation scenarios and honing your processes to them will go a lot farther than buying one-size-fits-all software solutions. This may be where EMC’s Case Management and Microsoft’s SharePoint projects collide into interesting hybrid (mashup) solutions.

Saturday, March 27, 2010

Why Financial Services and ECM is a Challenge

Most Financial Services companies, large and small, have really screwed up the ECM model. Years ago when they purchased Documentum, IT folks looked at the technology, installed the “docbase” server with HA and DR, and that was it. These companies are heavily into databases and the interrelationships of client IDs and customer IDs, products and services, etc. IT looked around at the time, saw programmers, DBAs, and web tech guys, and thought: we’ll build our own frontends to Documentum; our permissions and rules for metadata are too complex, too fancy for the WDK. This was partly true, as EMC Professional and Product Services never caught up to the specific requirements and functional needs of the finance industry the way they did with life sciences.

Typical scenario at a Financial Company

The client applications director has been somewhat satisfied for the past four years with the way IT has maintained custom web applications integrating with Documentum on the back end. The director is getting more heat from above to look at cheaper, OOTB Documentum solutions; IT development (even outsourced to India) is getting too expensive. IT has been planning and executing an upgrade for the past year and a half. They are still at 5.3 (no longer supported) for the repository and are upgrading the website DFCs to 6.0. Why not 6.5? Because at the time of planning, 6.0 was just coming out and the client development had already started. This is way too expensive and lethargic, but what are the alternatives?

BSAs with their head in the sand

The Business Systems Analyst Manager, with his MBA, says we need to think about the big picture but deal with the low-hanging fruit now. In other words, there’s no bridge of communication between operations and IT. This is where a Deloitte is usually helpful in terms of getting the two sides to agree on long-term goals. I know consulting firms can be devastating, but in this case, all the MBAs need to get together and show off their PowerPoints. It’s not about just fixing one issue and then the next; it’s about getting the CIO to wake up and figure out that ECM should be handled like ERP. There is usually a disconnect between the Client App Business Ops vision and the IT technical vision, and this will not get fixed at the director level, as much as the C’s would like it to.

IT is RESTful with their plans

Historically, the IT Internet/Extranet group has run the show, choosing how to architect the ECM solution as a byproduct of their database and web application server architecture plans: just figure out the storage of content and keep it forever. Here’s the argument: we already have custom applications for Documentum that work. All they care about is the “harvesting and production” side of content; they have no clue what the business is actually doing. How do we get IT to feel the pain of the business? A strong client app director who cares about business process and content management would help. Get the IT managers out of the web services cloud and either do something enterprise-wide with CMIS and other integrations or work with the OOTB products.
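
To illustrate the enterprise-wide CMIS angle: the same CMIS QL statement runs against any CMIS-compliant repository, whether that’s Documentum, SharePoint, or Alfresco. Here’s a rough Python sketch that builds such a query; the acme: property names are made up for illustration.

```python
def cmis_query(doc_type: str, filters: dict) -> str:
    """Build a CMIS QL statement; property names here are hypothetical."""
    where = " AND ".join(f"{k} = '{v}'" for k, v in sorted(filters.items()))
    stmt = f"SELECT cmis:objectId, cmis:name FROM {doc_type}"
    return f"{stmt} WHERE {where}" if where else stmt

q = cmis_query("acme:statement",
               {"acme:clientId": "C-1001", "acme:year": "2009"})
print(q)
# With Apache Chemistry's cmislib, the same statement could be executed as
# repo.query(q) against any CMIS-compliant repository, which is the whole
# point of going enterprise-wide instead of per-silo custom front ends.
```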

EMC Professional Services Where Are You?

The problem with EMC Documentum Professional Services is that it is not engaged fully during the pre-sales activities of the sales engineers who are building POCs and demoing new software solutions. When they are engaged, they’re spread too thin and not effective enough; they can’t keep up with actual work, let alone consult on goals and vision. Most companies form their opinions on the can-do-everything sales guys. The client is overwhelmed with products, not solutions. They see scanning solutions, but not high-end UIs which solve the complex financial relationship issues. A WDK or TaskSpace form is a child next to the maturity of what IT has created for their highly customized UIs. EMC, what’s the solution here?

Sample Financial Solution Patterns

There are many solution patterns within the financial applications that EMC could solve with a product.

  1. Center a user interface around client and customer IDs, covering both pre- and post-sales information gathering activities.
  2. Deal with the historical changes in attributes and their values. Financial services databases are always current and don't necessarily handle change management in content and metadata very well.
  3. Like the case management solution, figure out the foundational use cases of financial services (beyond scanning and OCR) and build product and solutions from them.
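
Pattern 2 is worth a sketch. A “current-only” financial database overwrites old attribute values; an effective-dated history keeps them and can answer “what was this worth as of date X?” All names below are invented for illustration.

```python
from dataclasses import dataclass, field
from datetime import date
from bisect import bisect_right

@dataclass
class AttributeHistory:
    """Effective-dated values for one metadata attribute."""
    changes: list = field(default_factory=list)  # sorted (effective_date, value)

    def set(self, effective: date, value) -> None:
        self.changes.append((effective, value))
        self.changes.sort(key=lambda c: c[0])

    def as_of(self, when: date):
        """Return the value in effect on the given date, or None."""
        idx = bisect_right([c[0] for c in self.changes], when)
        return self.changes[idx - 1][1] if idx else None

# A client's risk rating changes over time; the current database row would
# lose the earlier value, but the history preserves it.
rating = AttributeHistory()
rating.set(date(2008, 1, 1), "conservative")
rating.set(date(2009, 6, 15), "aggressive")

print(rating.as_of(date(2008, 12, 31)))  # conservative
print(rating.as_of(date(2010, 1, 1)))    # aggressive
```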

Monday, March 22, 2010

eDiscovery’s Promise Is Dependent on Your Enterprise’s Laziness

eDiscovery solutions are predicated on your company’s lack of cohesive, best-practice information management. If you read any eDiscovery software vendor’s proposal, it will not mention how to fix inadequate metadata structures, lack of auditing, or the risks of pre-eDiscovery spoliation. They will not tell you to get organized and clean house. They’ll say information governance is impossible to keep track of, that information growth is out of control. They’ll all be lawyers and MBAs, able to pontificate extremely well about your issues, but do they really want to help you where you really need it?

If I were selling software that was really a search tool plus a legal CYA process, would I be interested in telling you how to create an Enterprise Metadata Repository? Do creditors want you to save money? Managing your content requires standards, conventions, governance, and compliance. MoReq is an EU set of standards for records management that basically outlines the keys to eDiscovery freedom.
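
As a taste of what an Enterprise Metadata Repository buys you, here’s a minimal validation sketch: content without the agreed-upon metadata never gets ingested in the first place. The required fields and retention classes are invented for illustration; a real MoReq-style standard is far richer.

```python
# Hypothetical enterprise metadata standard.
REQUIRED = {"doc_type", "owner", "retention_class", "created"}
RETENTION_CLASSES = {"7y-financial", "3y-correspondence", "permanent"}

def validate(metadata: dict) -> list:
    """Return a list of problems; an empty list means the record is ingestible."""
    problems = [f"missing: {f}" for f in sorted(REQUIRED - metadata.keys())]
    rc = metadata.get("retention_class")
    if rc is not None and rc not in RETENTION_CLASSES:
        problems.append(f"unknown retention_class: {rc}")
    return problems

doc = {"doc_type": "invoice", "owner": "accounts", "retention_class": "5y"}
print(validate(doc))
```

Enforce something like this at every ingestion point and eDiscovery becomes a targeted query instead of a fishing expedition.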

Without the discipline and perseverance necessary to push these throughout your company, you’ll be stuck with an eDiscovery application infiltrating your email, ECM repositories, file shares, etc. It’s a matter of whether you want outsiders to have carte blanche access to your information. If you know where everything is, then you’ll be able to clearly define and execute the information discovery without a wholesale fishing expedition across all of your seemingly sensitive information. Most importantly, you’ll know what needs to be disposed of!

So what’s it going to be: a house of hoarders, or an extreme home makeover?

Friday, March 19, 2010

Architecture without an Effective Enterprise

So what are some of the issues that arise when a large company does not have an Enterprise Architecture group? How could this happen, especially in a company dependent on validated systems and many different, competing standards?

  • Business partners are frustrated with the lack of governance and execution of projects that matter to them. The project pipeline for smaller projects is large, but the large projects get funding that trumps the smaller ones, pushing the smaller deployments back by years. The business threatens to, and sometimes does, build its own solutions.
  • Project liaisons like Business Relationship Managers, Project Managers, and Business System Analysts are overwhelmed and focus mainly on timelines, milestones, and deliverables without knowledge of other projects and synergies that might be available from an architecture perspective. There will be different levels of training and experience across application specific groups.
  • The Service Delivery Team lines up resources and figures out who can do what, when, and thinks everything else will just have to work. The issue is that other outside team resources may not be considered by the Service Delivery Team. For example, the infrastructure team may be working on 10 other different build outs and there’s no priority.
  • Solution Architects will clash with Network and Storage Architects who are blindsided by project changes and overwhelmed by lack of resources. The documentation of each group is not consistent. There will be no overarching document template standards across the application stacks. There are 15 different icons in Visio diagrams that represent servers.
  • Large consultants will infiltrate easily into the heart of ERP and ECM and try to get into Legal and Records Management. These consultants will suck big bucks out of IT’s coffers. Knowledge transfer will be an afterthought and spotty at best, and the stability of processes and deployed applications will take much longer to achieve.
  • The Integration Team will be very busy patching holes with web services and other protocols, but there is no EA team looking at the whole wall of patches to determine where the future cracks will be. Each patch is customized with different metadata requirements, no common values, no MoReq2-type analysis. The testing of upgrades across integrations is too little, too late. For example, there is a general lack of awareness of the full impact of changes made in Active Directory.
  • There is no overall orchestration of applications, integrations, standards, ROI, and cost controls. The budgets for projects and their corresponding SLAs are out of balance. The downtime for upgrades blows away the accepted downtime window.
  • Priorities of projects lack coordination by architects experienced with building platforms that scale. IT Governance touts its own top 10 projects while ignoring the priorities of the business partners’ projects and initiatives. For example, deploying SharePoint 2010 with complete confidence in the first release vs. holding off on a major ECM upgrade until the first or second point release, even though the business is clamoring for the new ECM functionality, e.g. CenterStage. Another example would be launching a large project with two versions of the same application because, in the time it takes to deploy, a newer version is out and users are clamoring for its features.
  • Records Managers and Legal Departments will be forced to drive metadata standards at the enterprise level. They will most likely be stonewalled at every turn, especially by IT.
  • The Infrastructure architects are explaining too much and doing too little. One person questions the amount of storage required because of cost and another says don’t worry about the money. The racks in the server room have blades tucked into spaces that are not shelves. The power supply of the server room is maxed out. There is no coherent Storage strategy for expansion. Records Management and thus disposition is far down the road.
  • There is a high churn rate of smart developers and architects. There aren’t enough incentives to keep them. They are too busy putting out fires so they quit and move to companies with EA teams in place.

These are some of the issues that arise when a medium-sized company grows into a large one. However, the EA team should be in place from day one. All companies should recognize the importance of standards and centralized governance of IT architecture.