Sunday, May 29, 2016
Too many times the potential of your ECM rollout gets sidelined or delayed because of budgetary issues. The right people have the right skills and the infrastructure is in place, but the needed module license was not in the budget. If you are lucky, management is on board and willing to take the flak for asking for extra money; however, this is unlikely.
ECM systems have a lifecycle just like all other software solutions. It could be that a new director is hired and she wants Hyland OnBase instead of Documentum for scanning. If you are in this type of situation, where a new system is in the budget plan, then get to work on that 5-year plan.
Any project that converts paper or processes into a software solution has plenty of ROI; you just have to know how to calculate it. This is essential to justify funding the solution's full lifecycle of licenses. Gather as many diverse measurements of costs and savings as possible; too little attention is given to this. All you have to do is talk to the people who are pushing the paper: how much time is spent trying to find things, what kinds of decisions wait on this, and what is that time worth? And so on…
It's easier to buy all the modules up front as one large budget than it is to piecemeal them in later. A $15k module is nothing in an overall expense of $500k; however, two years later, $15k may not get approved as a standalone budget item. The shine of the new ECM solution wanes as the years go by; once the big ROI has passed, it's just harder to justify. If possible, negotiating all of the enterprise licensing up front is a better strategy than waiting for buy-in a few years later.
If the nature of the solution changes, it's easier to exchange modules than to buy new ones. Let's say one module was never implemented, but a new mobile module is part of what your CIO wants. Exchanging licenses could be easier and quicker than submitting an extra, emergency budget request.
Purchase the API licenses all at once. They are usually less expensive than the packaged solutions and can be very useful as the solution matures and integration is revisited. Build in flexibility wherever possible; this means having the option to make minor customizations when needed.
Sunday, May 22, 2016
In general, the naming convention should entail enough descriptive qualities to make it obvious to which group the content belongs. Being able to quickly identify content context is important. The basic building blocks should follow how the organization’s security model is structured. For example, if the company uses Active Directory and assuming it is representative of the security structure, the names usually follow a convention:
Not all of the above keys have to have a value. This is a guideline which helps focus the naming of the objects and structures that follow. Keep in mind that exceptions are always part of a naming convention.
Naming conventions give content a location and relevance tag. They tell the User what the content is and where it belongs. By consistently following a convention, the system will be able to scale and still be coherent.
Each department has its quirks as to how it works. It is important to hone in on how each one describes its content, with the enterprise in mind. Try not to spend an inordinate amount of time splitting hairs over details. For multipurpose departments, move the structure out to a general level which encompasses all of their responsibilities, then work on describing each path, but keep it simple. Think about how the names will be understood by Users who are not privy to your insider abbreviations.
Every organization uses common and idiosyncratic abbreviations depending on its industry. The challenge is to be concise, yet clear. Some abbreviations can be too short and cause confusion. For all naming conventions, it is critical to be as concise as possible without sacrificing clarity.
Focus on the core convention and structure and note the exceptions. The exceptions should be handled by metadata or other means.
The main assumption is that the naming convention will be followed and enforced. Each department should be given some control over how they want to describe their content, however, there will be some common structures that will be imposed.
Before creating your own convention, check with other sources of content to make sure there is no convention in place. If there is, compare yours with theirs to see if adopting theirs makes sense. Maybe the first two levels should be incorporated, with the third being what you focus on. The values should be abbreviations if possible and obvious in meaning.
(Area)-(Department)-(Control Number)-(Doc Type Identifier)
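A pattern like this can be checked mechanically. The sketch below is in Python and is purely illustrative: the segment lengths and formats are my assumptions, not part of any real convention, and the optional segments reflect the earlier guideline that not every key has to have a value.

```python
import re

# Hypothetical validator for the (Area)-(Department)-(Control Number)-
# (Doc Type Identifier) pattern. Segment lengths and character classes are
# invented for illustration; control number and doc type are optional
# because not every key has to have a value.
NAME_PATTERN = re.compile(
    r"^(?P<area>[A-Z]{2,4})"
    r"-(?P<department>[A-Z]{2,6})"
    r"(?:-(?P<control>\d{3,6}))?"
    r"(?:-(?P<doctype>[A-Z]{2,8}))?$"
)

def parse_name(name):
    """Return the named segments if the name follows the convention, else None."""
    match = NAME_PATTERN.match(name)
    return match.groupdict() if match else None
```

Rejected names surface immediately, which makes it practical to enforce the convention at creation time instead of cleaning up later.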
This can be thought of as a way to get to your content. It could be a folder structure, or cascading categories. Be careful to not go too deep; at some point metadata will take over in describing the nuances of content.
(Specific Level 1)/(Special Level 2)…/
A “Specific Level” could be a functional or category/doc type pair.
The naming of doc types should be clear and concise. It should be obvious what department and function they belong to.
(Department)-(Functional/Category Name)-(Doc type name)
File names become important during normal file exporting and migrations. When content is used outside of the system there should be identifiers that help place the content in context. There could also be a reference back to the system’s numbering system.
(Department)-(Doc type name)-(Content Relevant Identifier (for example Title or Patient MRN))(System Number).(Format)
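Composing such an export file name can be scripted. A sketch in Python; the normalization of the identifier is my own addition (titles can contain characters that are unsafe in file names), and all sample values are invented.

```python
import re

def export_filename(department, doc_type, identifier, system_number, fmt):
    """Compose (Department)-(Doc type name)-(Identifier)(System Number).(Format).

    The identifier is normalized because titles may contain characters that
    are unsafe in file names; this normalization step is an assumption.
    """
    safe_id = re.sub(r"[^A-Za-z0-9]+", "_", identifier).strip("_")
    return f"{department}-{doc_type}-{safe_id}{system_number}.{fmt.lower()}"
```

Running it on an invented document, `export_filename("HR", "SOP", "New Hire Checklist", "000123", "PDF")` yields a name that still places the content in context outside of the system.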
Title and Headings within Document
The title should be concise, especially if it will be in the filename. It should also reflect any metadata values associated with the document.
Document Information Block within Document
A block of information as a header or footer to a document is a feature of paper-based controlled documents. The block contains the Title, Doc Number, Doc Date, Doc Effective Date, and so on. This is metadata for the printed page. Printing a footer with this information still makes sense, but the content within the document should not contain it; it should be applied only when printing or saving to a file outside of the system. The properties of the exported files could also be used for metadata population.
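One way to keep the information block out of the stored content is to render it only at export time from the document's metadata. A minimal sketch, assuming Python and invented field names:

```python
def render_export_footer(meta):
    """Build a document-information footer at export time from system metadata,
    rather than from text embedded in the document body. Field names are
    illustrative, not from any particular ECM product."""
    fields = ["Title", "Doc Number", "Doc Date", "Doc Effective Date"]
    return " | ".join(f"{field}: {meta.get(field, '')}" for field in fields)
```

Because the footer is generated from metadata at the moment of export, the block always matches the system of record and never drifts inside the document body.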
System metadata already follow a naming convention and are usually prefixed with “SYS” to denote them. The corresponding database tables follow functional naming conventions which are sometimes cryptic, but logical.
The Dublin Core is a metadata element set common to most ECM systems. It includes author, doc date, description, name, etc. Don’t duplicate these unless your naming rules differ.
Keeping track of which metadata is used for which purpose or application can be challenging as the system grows. Naming metadata specific to its purpose and project is advised.
(department)(project name)(metadata name)
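Such names can be generated consistently rather than typed by hand. A sketch assuming Python; the camelCase joining rule is my assumption, not part of the pattern itself, and the sample values are invented.

```python
def metadata_name(department, project, name):
    """Build a field name per (department)(project name)(metadata name).

    The camelCase joining (lowercase department, capitalized remainder)
    is an assumed house rule, not part of the source convention.
    """
    parts = [project, name]
    return department.lower() + "".join(p[:1].upper() + p[1:] for p in parts)
```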
References or Relations
If a cross reference is needed, make sure the object naming is consistent with the purpose of the link.
Business process and workflow names need to follow the same naming conventions as all the other preceding objects, incorporating the common elements as well as the specific ones.
When the ultimate goal of the ECM workflow is to publish to a portal, the naming should follow the same conventions as the portal. Having to map or look up values may not scale at a portal’s level of use.
As mentioned above, the security hierarchy and its naming convention are a first indication of how well organized the company’s structure is. The naming of the groups should be considered when thinking about the naming of projects, folders, and doc types.
Many systems have to comply with regulations like 21 CFR Part 11. This type of scrutiny applies to the information architecture and the system’s content.
Outside regulation bodies could impose certain naming conventions which need to be followed.
Auditors need to be able to ask for information in general terms with an understanding that you will know what they need. It’s vital to search only for what they are focusing on, and this requires a good naming convention and robust metadata.
Introducing a foreign language to the system multiplies the complexity of the solution. Not only are the metadata values multiplied, the naming conventions are multiplied as well. Most ECM systems can accommodate this; however, folder names and metadata might have to be duplicated with the foreign-language values.
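One common way to handle that duplication is to key each label by locale and fall back to a default. A hypothetical sketch; the folder names and locales are invented.

```python
# Hypothetical per-locale folder labels with a default-locale fallback;
# most ECM systems require some equivalent structure once a second
# language is introduced.
FOLDER_LABELS = {
    "invoices": {"en": "Invoices", "fr": "Factures"},
}

def folder_label(key, locale, default_locale="en"):
    """Return the label for the requested locale, falling back to the default."""
    labels = FOLDER_LABELS.get(key, {})
    return labels.get(locale, labels.get(default_locale, key))
```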
Search is only as good as the metadata value quality and full text indexing comprehensiveness.
As organizations change, areas and departments get moved around and new names are designated.
Keeping track of structural changes can be challenging. Depending on the scale of the change, creating a map to previous taxonomies or department names can be helpful.
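Such a map can be as simple as a dictionary of renames that is followed until the current name is reached. A sketch, assuming Python; the department names are invented for illustration.

```python
# Hypothetical map from previous department names to their successors,
# consulted during searches and migrations after a reorganization.
DEPARTMENT_HISTORY = {
    "Customer Care": "Customer Experience",
    "Data Processing": "IT",
}

def current_department(name):
    """Follow renames until the current name is reached; unknown names pass through."""
    seen = set()
    while name in DEPARTMENT_HISTORY and name not in seen:
        seen.add(name)  # guard against accidental rename cycles
        name = DEPARTMENT_HISTORY[name]
    return name
```

Content tagged with an old department name can then still be found under the current taxonomy without rewriting historical metadata.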
Thursday, May 5, 2016
If your CTO says it is “impossible” for the hyperconverged cloud to go down, you know you and everyone else will be in for a long night at some point during the cloud's stabilizing period. Nothing is infallible, not even the cloud. If you are pushing the technology edge, then you need to own up to the inevitability of a confluence of issues. So you have to ask yourself, “What steps would have to be skipped, or overlooked, during the design, development, and implementation of a cloud system to get to the point of an emergency downtime of your foolproof network?”
Hypothetically, let’s say one bug in the software could blue screen all of the domain controllers in every redundant location at the same time. There are a few points to consider when reviewing this type of failure:
The inexperience of those in control at the technical level and the blind faith at the managerial level
With new technology even the experts make mistakes. When the outage happens, are the people caught in the headlights fully trained and part of the initial design and development, or are they the “B support team”? This is a critical mistake made over and over again by IT leadership and financial stewards: it is deemed okay to bring in experienced consultants to design and implement a new technology solution and then leave it to the less experienced support team to maintain and upgrade it, without proper training and onsite support.
Lack of resources to provide an acceptable factor of safety
In the rush to curtail costs, the system suffers. The “secure and agile IT services” cloud is not a one-off capital expense. Cutting operational costs too drastically will show its shortcomings in emergency outages and other incidents over time. As with any system, change must be methodical, with a factor of safety that is understood by all business partners. It’s no excuse to cut corners because there’s “no budget.” Try saying that to a surgeon.
Make sure someone is always accountable
In many cases, the business is cajoled into taking what IT says for granted, but when the system goes down they might be surprised to find out that no one is ultimately held accountable. “Virtualizing and hyperconverging its data center” also could end up virtualizing the accountability of the system, which in turn means that a Root Cause Analysis will never fully explain what really happened, if it ever gets sent out…
Lack of decoupled, identical Test environment
If your company cannot afford a decoupled test environment that mimics the cloud set up, it is adding risk to the implementation. The vendor should at least provide a comparable test environment to test bug fixes and service packs. If you had this and the outage still occurred, this points to the infrastructure team, their manager, their director, and ultimately their CIO.
Cognitive Bias toward “If it runs, don’t upgrade”
There can be a bias among some CTOs to only fix bugs with bug patches, and to never upgrade the virtualization system software unless the infrastructure requires it. In the end, “hyperconvergence” is a term meant for theoretical analysis, not ROI: the hidden costs of implementing this new technology are everywhere; you just have to know where to look. Also, the risks of implementing an internal cloud are greater than going with the established, large cloud services.