Monday, December 28, 2015

I'm Not Just An ECM Flounder!

Hi, I’m not just an ECM flounder at the bottom of your information ocean. I’m affected by any changes in how or what information is dumped into the sea. Let’s look at some of the ways that my environment gets polluted.
According to CIO's Howard Baldwin, here are some IT complexity issues that should be addressed to bring pollution levels down. Baldwin calls it "complexity"; I call it pollution. As a bottom feeder, I need a lot of coordination and cooperation from above so I can live and prosper. I'm only as fit as the polluters will allow me to be. If I'm not in shape, I can't innovate. Sorry about that last rhyme, unintended…

Documented Knowledge

How many application integrations are undocumented? Or documented years ago and never revised? Every undocumented process turns into an oil leak into the ocean, and I have to deal with it during upgrades and new development. I know I have to make sure everyone is aware of changes I make to the ECM system, but the same goes for all of you applications at the surface.

Consolidate to a Cloud

As an ocean dweller, I would have no problem if all information was filtered in the cloud. This would necessitate more accurate requirements, functional specs, and testing documentation. The rain would be less polluted. The oxygen levels would be excellent. The bottom would be less murky. The ongoing maintenance of the infrastructure and integration points would drop considerably.

Complexity ebbs and flows

Like ocean tides, information inputs and outputs ebb and flow. Complexity can accumulate quickly with director churn, as new directors need to prove themselves and usually build new applications beside existing ones. Talk to anyone running SharePoint alongside an existing ECM solution and you'll get an earful.

Saturday, November 14, 2015

Hyland OnBase's Patient Window = Silo Buster?

For a physician, viewing images within the hospital's EMR can be a frustrating experience. Hyland OnBase has a solution for aggregating these disparate imaging viewers (DICOM and non-DICOM content) into one application: Patient Window.

This solution is described with words like "Unify", "Connecting information silos", and "seamless integration with the EMR". As a certified OnBase engineer working on healthcare solutions, I applaud this effort to truly help physicians find their information efficiently. The demo looks great, but what about the implementation?

So, how can this silo buster get traction within the fortified walls of an EMR’s embedded imaging viewer? Behind these applications are dedicated teams who have spent years implementing these point solutions within the EMR. Chances are good that these teams are managed within the EMR’s umbrella. This means the ECM's team manager will have to have discussions with the EMR's team manager to negotiate how to "unify" these applications.

Showing the image for the patient's case is the culmination of purchasing the machine, licensing and configuring the software, testing it, and configuring the manufacturer's viewer to be embedded in the EMR's software. The team that did all of this work had many meetings: with the informatics team to review the physicians' requirements and industry standards, with the EMR team to review how it should be set up with the EMR's solutions, with IT's infrastructure and security teams to validate the integrity of the solution, with the interface team to make sure that patient information integrated correctly, and so on.

The silo is not just the information, it’s the whole process. Process reengineering is a key part of implementing this type of aggregation. I’m not saying it can’t be done. What I’m saying is that, unless there is already an EMR process overhaul taking place, there is a mountain to climb here.

Sunday, October 25, 2015

Zombie ECM

ECM: not alive and not dead, walking toward sounds and smells. On autopilot to get to the next big event. What do you get when information management lacks adequate quality? You get zombie ECM systems marching toward something.

We like to think this something is for the greater good. A place where healthcare can find their data and get paid, where financial institutions can store their signatures, where pharma can follow the regulations. Without more information quality checks and balances, this greater good looks more like zombie land where we only do what the standards and regulations say, nothing more. We trudge along and wait for the next upgrade, for the next event, the next big thing.

Does everyone feel that they are doing the best quality work on implementing their systems? What would be a better way of designing ECM? How can you improve how your process works and how solutions get implemented? Why doesn't Gartner have an info quality quadrant for their ECM leaders?

Friday, October 9, 2015

“One Patient, One Record”

This healthcare IT tagline has been around for a while. Software vendors and healthcare provider IT groups touting this slogan are definitely aware that it oversimplifies a patient's information trail. There are many unknowns that could splinter that "one record" goal. For example, what if a patient goes to a different hospital? What if the patient's insurance changes? What about any number of issues that could happen to the many systems used to keep track of a patient's registration, appointments, orders, results, ED visits, discharges, etc.?

Patient is a person not a chip

Patients, being human, can inadvertently wreak havoc on their own information. It could be as easy as changing your baby's name or moving to a different healthcare provider. The information that follows the patient may get split into two separate buckets, orphaned by glitches in the systems. Regional information exchanges can help, but typically do not house all of a patient's record, only the current, actionable data. What happens when patients move and then return to the same hospital years later? I can tell you that merging patient information can be very tricky.
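
Just to make the merge problem concrete, here is a toy sketch in Python that flags two registrations as possible duplicates when the last name and date of birth match but the medical record numbers differ; real master patient index matching is far more involved, and every value here is made up.

```python
# Toy duplicate check: same last name and DOB, but different MRNs.
# Real master patient index matching is far more involved than this.
def possible_duplicate(rec_a: dict, rec_b: dict) -> bool:
    return (rec_a["mrn"] != rec_b["mrn"]
            and rec_a["dob"] == rec_b["dob"]
            and rec_a["last_name"].lower() == rec_b["last_name"].lower())

at_birth = {"mrn": "0000001111", "last_name": "Doe", "first_name": "Babygirl",
            "dob": "2015-06-01"}
renamed  = {"mrn": "0000002222", "last_name": "Doe", "first_name": "Emily",
            "dob": "2015-06-01"}
print(possible_duplicate(at_birth, renamed))   # True -> candidate for a merge review
```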

Release of Information—can you be specific?

When a patient requests all of their health record data, do you really think all of it is collected? If so, how is this verified? Are there any regulations that state this type of request has to be complied with fully? Essentially, the patient has to fill out a form and select which information they want, with an "other" line to fill out. This could leave many documents behind, because the patient has no idea how many types of documents are in a healthcare ECM system: hundreds, if not thousands.

Every Visit to the Doctor’s Office

Every visit to a provider generates thousands of records. There is the registration, which gathers insurance information, signatures for consent, and appointments. There's the nurse who enters current information about allergies, weight, body temperature, blood pressure, etc., and may or may not have paper forms as well for the doctor. The doctor opens another form on the computer, maybe reviews test results, enters clinical notes, writes a script, enters a diagnosis code, and so on. Many of these entries are split up and branch to other systems for further processing.

From One Record to Many

A patient's healthcare record will never be "one" record; it will always be a tree of information, or many trees of information, connected by one or many medical record numbers. A regional exchange will hopefully help pull a patient's information trees together. Maybe a national registry would help as well, but that is way in the future. Sharing health information transparently would lower healthcare costs; however, it would also lower profits, so there will be lots of foot-dragging before "One" record is achieved…


Saturday, October 3, 2015

Hallway Chat Projects

With all of the productivity tools we use to communicate and make decisions on projects, hallway conversations can still drive the everyday decisions and information sharing. The problem is that there's no way to forward a decision or share information beyond the folks who had the conversation.

Sometimes the conversation is summarized, but usually it turns into action and surprises other team members. The surprise is the sign that the communication vehicles of a large company are still not being used. The small-company mentality is still there.

Small businesses that grow to large ones slowly have this issue more than ones that grow quickly. The slow growth makes it possible for the relationships and behaviors to stay the way they were without too much change. Fast growth usually brings in MBAs who have learned the structure and designs of past organizations that have “worked”. The case study of fast growth breaks up the old patterns of hallway communications.

So, applying this to IT projects and large project communication, you can experience the symptoms of this hallway phenomenon as the tasks progress beyond the planning stage. When the doers are engaged in the discussions, the scope of the project is usually torn apart. The hallway decisions are fast and furious. The project starts with members shaking their heads. The more experienced team members follow along, but secretly see the gaps. If they speak up, the project manager, who by this time has had a hallway agreement on the deadline, overrides their concerns and pledges to consider them.


The final result of hallway projects is that the project manager moves on to a new position. The implementation of the project gets done, but the fixes and stabilization are painful and disruptive.

Thursday, August 20, 2015

From Allscripts to OnBase: A Migration Story

In healthcare, it's about the patient's chart, tests, and results, and it all starts with the quality of the information extracted from Allscripts. Of course, when you move from one solution to another, people tend to get in the way. What I mean by this is that there usually needs to be a consultant to broker between both sides of the transfer.

Information

Requirements gathering is only as good as the experience of the person managing it. A third party may boast many years of experience; however, chances are good that there are a few key internal architects who need to be involved and listened to from the get-go. The mapping of patient data, doc types, and workflows is fundamental to the success of the migration. The sizing parameters will have to be detailed, such as the number of files and the average file size.

Execution logistics

As the "what" questions are answered, inevitably new ones come up, such as "What transfer batch size should we use?", "How long will it take to transfer each batch?", and "When will we shut down access to Allscripts?". A "who/what/when/where/how" matrix will help sort out the details. The transfer batch size will depend on how many document pages are extracted and will entail multiple folders and a file naming convention.
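
As a rough illustration of the batch math, here is a minimal sketch; every number in it is a hypothetical placeholder to be replaced with your own measured page counts, file sizes, and throughput.

```python
# Rough batch sizing sketch -- every number here is a placeholder, not a benchmark.
total_pages = 4_000_000          # hypothetical page count from the extract
avg_page_kb = 60                 # hypothetical average size of an extracted page image
throughput_mb_per_min = 120      # hypothetical sustained import throughput
batch_pages = 50_000             # candidate transfer batch size

total_mb = total_pages * avg_page_kb / 1024
batch_mb = batch_pages * avg_page_kb / 1024
batches = -(-total_pages // batch_pages)          # ceiling division
minutes_per_batch = batch_mb / throughput_mb_per_min

print(f"{batches} batches, ~{minutes_per_batch:.0f} min per batch, "
      f"~{batches * minutes_per_batch / 60:.1f} hours total")
```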

Testing

Testing and its organization are a litmus test of how well the project is managed. Hopefully all of the application’s SMEs are involved and designated Users are scheduled to test. Testing ADT scenarios and OnBase scanning and indexing are essential. Also, testing the backfill of accounts and demographics has to be done at least twice to get all of the pieces validated.

Extract

The Allscripts extract index will be delimited and should have all of the metadata needed to import into OnBase. It should have a pointer to the scanned files. These files will most likely be extracted pages, so there will need to be a unique number to build each document before importing into OnBase. You'll need the following (a parsing sketch follows the list):
Patient Demographics – enough information to validate and tie back to the ADT patient record.
Master Patient Index – the corporate medical record number to identify the extract as it is imported to OnBase
Document types – each type must go through HIM for validation. If Allscripts was used for ambulatory sites, chances are good that the naming of doc types will reflect this. You may want to just keep all the source names and prefix them with a value that allows for easy recognition. These should also be set up in a new doc type group if there are a lot of them. The "go forward" strategy of scanning should include only the unique doc types that were not already in the OnBase system.
Document formats – there may be some formats that surprise you. Import formats have to be identified and set in the OnBase import process.
List of file pointers – this list tells OnBase the pages' file system locations. If there are thousands of pages, the pointer will most likely have a folder in it that changes as the batches are imported. The file naming convention will have to be unique and could be extracted from Allscripts' unique file naming.
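
To make the "unique number to build the document" point concrete, here is a minimal sketch that groups page-level rows from a pipe-delimited index into documents before import. The file name, column names, and delimiter are assumptions about a typical extract layout, not the actual Allscripts format.

```python
import csv
from collections import defaultdict

# Group page-level rows from a pipe-delimited extract index into documents.
# "extract_index.txt" and its columns (doc_id, mrn, doc_type, page_num,
# file_path) are hypothetical -- substitute the real layout of your extract.
documents = defaultdict(list)

with open("extract_index.txt", newline="") as f:
    reader = csv.DictReader(f, delimiter="|")
    for row in reader:
        documents[row["doc_id"]].append(row)

for doc_id, pages in documents.items():
    pages.sort(key=lambda r: int(r["page_num"]))   # rebuild page order
    first = pages[0]
    print(doc_id, first["mrn"], first["doc_type"],
          [p["file_path"] for p in pages])
```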

Data Manipulation

Between any two focused information management systems there are bound to be idiosyncrasies. For example, the patient medical record numbering scheme will be different, as will the rules around metadata naming conventions and many other conventions specific to the infrastructure of the source system.
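
As a tiny illustration, here is a sketch that normalizes a hypothetical source MRN format into a hypothetical target format; the formats themselves are invented for the example.

```python
def normalize_mrn(source_mrn: str) -> str:
    """Map a hypothetical source MRN format (e.g. 'AMB-004521')
    to a hypothetical target format (ten digits, zero-padded)."""
    digits = "".join(ch for ch in source_mrn if ch.isdigit())
    return digits.zfill(10)

assert normalize_mrn("AMB-004521") == "0000004521"
```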

Registration, EMR and interface

With different naming conventions for patient metadata between the source and target systems, the issues of data integrity can be compounded. During testing, all sorts of issues can come up, from ADT messages not having correct patient data to patient corporate numbers being duplicates. Quality control and validation as a separate step can help fix issues before they go into the target system.

Import

As we know, the quality of the import starts with the quality of the extract. As the patient data issues get mapped and fixed, the actual importing will have to be achieved through a third-party tool or customization. Every difference that is not accounted for in the design or development will definitely show up as errors during testing of this process. Using the tool to look up values in the target system based on source values should be part of the tool's requirements. The tool should be able to handle errors gracefully by logging them and queueing them for reimporting.
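
A minimal sketch of that import pattern: look up target values from source values, log failures, and queue them for reimport. The import_document() call is a placeholder for whatever vendor API or third-party tool actually performs the import, and the file and column names are hypothetical.

```python
import csv
import json
import logging

logging.basicConfig(filename="import_errors.log", level=logging.ERROR)

# Hypothetical lookup of source doc type names to target doc type names.
doc_type_map = {"Amb Progress Note": "AMB - Progress Note"}

def import_document(doc):
    pass  # placeholder for the real import call (vendor API or third-party tool)

retry_queue = []
with open("extract_index.txt", newline="") as f:
    for row in csv.DictReader(f, delimiter="|"):
        try:
            row["doc_type"] = doc_type_map[row["doc_type"]]   # map to the target value
            import_document(row)
        except Exception as exc:                              # log and queue, don't halt
            logging.error("doc %s failed: %s", row.get("doc_id"), exc)
            retry_queue.append(row)

# Persist the failures so they can be fixed and reimported in a later pass.
with open("retry_queue.json", "w") as f:
    json.dump(retry_queue, f, indent=2)
```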

People

The human factor cannot be overlooked with any migration. Both sides will have feelings, though they may be hidden behind professional facades: some were comfortable with the old system, and others are not happy about the new responsibilities that come with the new one. These feelings will show themselves as delays in development or show-stopper issues during testing. If everyone is complaining about the project manager, then you know the culture is old school: the walls of knowledge continue to be fortified, and sharing only occurs when show stoppers force everyone to open up and help each other, if only for just that moment…


Tuesday, August 11, 2015

ECM Matures Beyond the Models

Gartner's ECM Maturity Model from 2012 shows the simplified journey of an ECM implementation into a company over time. The trail it follows is well worn into the trained minds of solution providers. What typically happens with this methodology mantra is that it infiltrates and pervades; then, once the company is fully dependent on the software solution that touted it, the company buys more modules, more licenses, more storage, etc.

How mature is mature?

ECM implementations reach the goals put forth by the company's director in charge of operations. Each time a "nice to have" is overlooked or pushed to a further phase, it may reach a dead end. These dead ends accumulate but are not factored into the overall implementation; they fester and show up again when the next cycle of solutions/consolidations/open source evangelists sweeps in.

It’s easy to win with a Model

I’ve seen original solution documentation show this maturity model from the beginning. The instructions are outlined, budgeted, and milestones are set. All you have to do is do what it says to do and you will succeed. Any movement forward is seen as a win. Plus, you executed on the plan, never mind that dead ends were left along the way. You can’t please everyone!

The model is the model

The model’s steps of “Initial, Opportunistic, Organized, Enterprise, and Transformative” are exactly what happened to this model. It is an artifact of solution execution, but in the end it is just another way to sell product suites and Gartner products. This model matured. A new one is coming.

Implementation’s long tail


If you are looking at models, make sure the last half of the curve looks like a thick long tail. This shows the correct long-term implementation of ECM. It takes a mature team of people to fully realize its potential beyond initial expectations and bouts of disillusionment. Over time, if you are lucky, there will be enough focus on the quality of information and its benefit to productivity.

Friday, July 10, 2015

ECM Moving Across the Silos of Information Management in Healthcare


Information silos are commonplace in every industry. They tend to grow around concentrations of knowledge/experience collectors and "best of breed" applications. Because enterprise content management usually spans across these silos, we as ECM solution providers get a unique insight into how they work.

Take scanning solutions in healthcare, for example: you get exposure not only to financial and HR applications, but to EMR, interface, and lab apps. Once you have gained trust among the managers of these applications and have implemented ECM across them, you will begin to see the potential synergies. One potential synergy could be to combine Informatics, HIM, and ECM under one director to fully realize the patient information potential.


Let's say for this example that Informatics is underfunded, HIM is well funded, and ECM is okay. By combining budgets and focusing on common goals, the patient as well as the hospital's image will undoubtedly benefit. By providing ECM with a direction based on requirements coming from what patients need, the emphasis and objectives will be clear, and hopefully the funding will be well justified and measurable.

Monday, June 29, 2015

Post ECM Modern New New

When John Newton, Alfresco CTO, talks about the "Modernization of ECM", he takes the biggest, most pervasive view possible of ECM at an organization. The issue I have with this view is that most solutions are point solutions that may be expanding into other departments but are mainly focused on specific problems, not necessarily ones that impact the enterprise. He wants everyone to think big, with "millennials" using "mobile" phones "collaboratively", "sharing docs", using "Instagram", "Snapchat", etc.

Ok, great, CIOs think big, I get it, but what happens during the implementation? Did the big idea get implemented well, or are we blaming Users for "poor adoption"? John says, "Employees don't buy in because the systems are cumbersome, non-intuitive, or lack support for B2B sharing and remote access." That type of statement sidesteps the many bad implementations made by his previous company's professionals. You can't leave a company, build a better solution, and then blame the old software for being inferior. For those of us who have seen our share of implementations, we know that ECM was first CM at many of these companies. The "E" depended on the professional services as much as the software.

The “extended enterprise” beyond the firewall concept has been around for over a decade. The issues of security and sharing information are evolving and involve way more than an ECM solution’s capacity and technology. The larger enterprise is under the gun here, not the content management system.

With the “Explosion of Digital Content” as quoted from IDC sources, the “big data” issue of finding and contextualizing content will always be an issue. The point should be that the “crap in, crap out” adage is the real issue, not the system. If you don’t take the time to add context to your content on the way in, the search results later, regardless of how heuristically brilliant the algorithm, will not be as accurate as you want or need.


There’s no doubt Alfresco has a head start with open technology, integrations, and UI simplicity. I just find it hard to believe that they still think ECM is everything to everyone when it comes to content. All applications have evolved to deal with content and metadata. ECM can help patch the holes and connect the dots, and even be everything to a small/medium sized company, but with large enterprises it takes many software solutions to deal with its content and information. Whether it be financial, human resources, healthcare, pharma, registration, etc., in each case there are specialty professional services and software solutions to fit the requirements. What ECM promises is to patch the hole when a leak occurs because a leak in the other solutions will eventually occur.

Friday, June 12, 2015

ECM Safety Net

The big promises of ECM solutions ten years ago could not have foreseen the importance of risk mitigation in today's risk-averse business environment. ECM systems have turned into safety nets for many companies. Not that information or content is in free fall, but it is reassuring to know that the location and storage of content is safe.

Many ECM initiatives were underfunded or over-architected:

Underfunded Scenarios: when a solution performs great but does not have a disaster recovery solution. When the version you are on is four years old. When you have more paper being shuffled than when scanning started.

Over-architected Scenarios: when it takes a senior engineer to unpack a workflow. When there's a custom solution that does close to what is out-of-the-box in the next version. When it takes an act of Congress to change a form.

ECM is also a good place to land when political maneuvering in the company causes paralysis with some solutions, or budgets get swept, leaving your great idea with no funds. Catching falling projects and saving them from complete disaster is admirable. It may not push the mobile agenda forward, but it will soften the blow when its budget is slashed in half.



Friday, June 5, 2015

When your system has an outage and you have to resort to paper

The more we computerize our work, the more difficult it will be to recover from unanticipated downtime. In a hospital, for example, when your system goes down, every task done on the computer goes into manual "paper" mode. Work is done and forms are used. At some point, when the system is back up, you have to deal with the pile of paper that accumulated during the outage. This paper pile could consist of notes, orders, assignments, referrals, medication lists, etc. What do you do with this stuff?

Did you plan for recovery from paper?

Everyone plans to recover, but the real question is: will it go as planned? Did everyone follow the downtime documentation procedures accurately? For example, was the account number written on the form? Can barcodes be printed after the outage?

Scanning

Automated scanning with barcodes for indexing paper can be a lifesaver; however, during an outage there are no barcodes, which can be a major hassle to recover from. Also, what if the accounts are still caught up in the content management system?

Processing

If you have an automated workflow for coding diagnoses or approving invoices, is there a paper alternative for this, or do you just go home?

Overtime

To recover a system by back loading information from paper, it might take a temporary surge of contractors. This unexpected budget hit should be noted as a risk in your outage plan.

Fixing data

Sometimes during an outage, only one system is down, leaving upstream or downstream systems running. Users could go about their normal routines without knowing for a while. Data entry could get queued up as the integration is broken. So, one process is using paper and another one is still using a computer. When the outage is over, some data might have been corrupted. For example, a patient is registered and is being seen by a nurse. The nurse fills out the patient's drug allergies on a paper form. The patient goes in for surgery and is in recovery. A physician checks the EMR for allergies and sees none. The paper form for allergies is in the chart, but the physician assumes the system is up-to-date…

The Need for recoveries of recovery


Chances are good that the downtime procedures cover what should happen and are in compliance with the auditors; however, when your system goes down and the paper comes out, there are many more chances for mistakes. Of course the answer is a plan for system redundancy, but this comes at a hefty price that not every hospital or entity can justify.

Tuesday, May 26, 2015

How to Triage ECM slowness

Who's the first team a User calls when the ECM application slows down? The ECM team, of course. But nine times out of ten, slowness is caused by the effects of other systems. Whether it's the database, the network, or the User's open applications, sluggish performance has many sources.

Question: who’s working on what client, in what environment, and where?

Network: we were fixing something

From Who: multiple sites simultaneously, or one building?

Large companies with multiple buildings most likely have networks that are somewhat patched together, leaving some Users with subpar network performance. Also, some areas of the company may be hogging bandwidth with applications that are dragging the whole network down.

Database: This won’t impact the application…

From Who: All applications that use that db server, or one application?

Typically, the database server is a shared environment, thanks to our buddies who consolidated individual servers at the expense of "decoupling". This shared environment could be at the mercy of BI reporting slowing it down. If it's an Oracle RAC, sometimes the nodes don't reboot as advertised. The shared environment of tier 1 applications could put the lower tiers at risk, because the lower ones will not be the priority if there's a business outage.

Backups: we were trying to restore another application

From Who: One application or many?

Backups might happen late at night during “off” hours, but there’s still a performance hit on databases and file stores. There’s also the possible wave of activity after a recovery that clogs all downstream applications.

Security: we were hacked

From Who: One app, or many?

With new layers of security applied comes extra processing, and thus potential for slowness. This is usually agreed upon at the design stages, but complained about after implementation.

Virus protection: half of our share drive files are encrypted

Many times I've been looking for the cause of slowness on my PC or on a server, only to find that Task Manager shows a huge percentage of CPU being used by the virus protection software. Hint: double-check when the full scan is scheduled.
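
Here is a quick sketch of that check using the psutil library; it only mirrors what Task Manager already shows, and the 20% threshold is an arbitrary choice.

```python
import time
import psutil

# Prime per-process CPU counters, then sample after a short interval --
# the first cpu_percent(None) call for a process always returns 0.0.
procs = list(psutil.process_iter(attrs=["pid", "name"]))
for proc in procs:
    try:
        proc.cpu_percent(None)
    except (psutil.NoSuchProcess, psutil.AccessDenied):
        pass

time.sleep(1)

for proc in procs:
    try:
        cpu = proc.cpu_percent(None)
        if cpu > 20:                      # arbitrary threshold for "suspiciously busy"
            print(proc.info["pid"], proc.info["name"], f"{cpu:.0f}%")
    except (psutil.NoSuchProcess, psutil.AccessDenied):
        pass
```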

User’s 5k open applications: who me?

If one User complains, log onto their PC and check out what applications they are running (assuming they didn’t close some while they waited for you). Try closing and opening Outlook. What’s in their startup folder? Check their browsing history for views and downloads.

Service Desk: this is a routine patch

Even when the Service Desk is being proactive with mandatory testing of patches to Windows or IE, there are always issues, especially with interaction of multiple open web browser (“no footprint”) applications.

Upshot

When you get blamed for slowness of your ECM application, have a script of questions to ask to triage the issue. Check the possible larger issues first and move toward the User at hand. Slowness happens because everyone wants information faster; in our zeal to always get faster, we stumble occasionally.
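
A minimal sketch of such a question script, ordered from the broadest shared causes down to the single User; the questions are just the ones covered in this post.

```python
# Triage checklist ordered from broad, shared causes down to the single User.
TRIAGE_QUESTIONS = [
    ("Network",   "Is the slowness at multiple sites or one building?"),
    ("Database",  "Are all applications on that db server slow, or just one?"),
    ("Backups",   "Was a backup or restore running against the database or file stores?"),
    ("Security",  "Were new security layers or patches applied recently?"),
    ("Antivirus", "Is a full scan pegging CPU on the client or server?"),
    ("User",      "What applications does the User have open? What's in the startup folder?"),
]

def run_triage():
    for scope, question in TRIAGE_QUESTIONS:
        answer = input(f"[{scope}] {question} ")
        print(f"  noted: {answer}")

if __name__ == "__main__":
    run_triage()
```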


Monday, May 4, 2015

Unpacking a Workflow

Unpacking the idiosyncrasies of a lifecycle/workflow can be daunting. Many times a well-designed "1.0" workflow will turn into a "2.0" nightmare. I have seen workflows with so many connections between tasks and rules that it looks like all the tasks are connected with no rhyme or reason.

Documentation

Good documentation can provide an excellent resource and a general sense of the workflow's direction. However, with large process automation projects, the documentation is usually provided for the initial implementation and left stranded thereafter. Determine how obsolete the information is before reading it too closely.

Structure            

Worst Case: Over time, workflows can turn into spider webs. Once they are built, it becomes easy to make small tweaks and fixes. These work great and the business is happy, but in the long run they can get out of control. How do you document "small" changes?
Best Case: A solid naming convention makes it easy to traverse the decision branches.
Look for repeating patterns of rules and actions. These could help minimize what looks complex but is actually a bunch of redundant activities.

Naming Conventions

Hopefully the naming of the lifecycles, work queues, rules, actions, properties, etc., makes some type of logical sense. A lot can be implied in a formal naming convention, for example a lifecycle name of "HIM - Validation", which shows the department ("Health Information Management") and the objective ("Validation"). Depending on the workflow application, prefixing with "smart" hierarchical keywords is best practice, so hopefully they did it!
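
As a small illustration of how much a formal name can carry, here is a sketch that splits a "Department - Objective" style lifecycle name into its parts; the " - " separator is only a convention, not something the workflow product enforces.

```python
def parse_lifecycle_name(name: str) -> dict:
    """Split a 'Department - Objective' style lifecycle name."""
    department, _, objective = name.partition(" - ")
    return {"department": department.strip(), "objective": objective.strip()}

print(parse_lifecycle_name("HIM - Validation"))
# {'department': 'HIM', 'objective': 'Validation'}
```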

Search

If your workflow tool has the ability to search for specific component names and presents the results in an actionable way, figuring out workflow specifics just got easier.

Rules and Actions

Rule names can be precise and consistent, or obsolete and sporadic. If you have the former, you are lucky. This goes for actions as well. Both of these may be copied (linked) to other locations, so be careful of making changes in one spot only to be burned by the other part of the workflow that just broke.

Properties and Keywords

Many workflows rely on saving status and regular expression values to the document's attributes/keywords. It's important to create a matrix of how these values are set and where they are used in the process. By "properties" I mean the temporary values that are used by the workflow for calculations and routing (and logged) but not saved to the document.
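
A minimal sketch of that matrix as a data structure, assuming you have already pulled the set/read locations out of the workflow configuration by hand or with a script; the property and queue names are hypothetical examples.

```python
from collections import defaultdict

# Matrix of where each property/keyword is set and where it is read.
# The property names and queue/rule names here are hypothetical examples.
usage = defaultdict(lambda: {"set_in": set(), "read_in": set()})

usage["pr_ValidationStatus"]["set_in"].add("HIM - Validation / Set Status Passed")
usage["pr_ValidationStatus"]["read_in"].add("HIM - Validation / Rule: Status = Passed?")
usage["kw_MRN"]["read_in"].add("HIM - Validation / Rule: MRN exists?")

for name, where in sorted(usage.items()):
    print(name)
    print("  set in :", ", ".join(sorted(where["set_in"])) or "(nowhere found)")
    print("  read in:", ", ".join(sorted(where["read_in"])) or "(nowhere found)")
```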

Logs of workflow actions

By viewing logs you may be able to decipher some of the paths that documents take into and around the workflow. Depending on the settings, these traces might show what actions took place, when, and by whom.
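
If you can export those traces to text, a small parse can turn them into per-document timelines. The pipe-delimited format below (timestamp|doc_id|queue|action|user) is purely hypothetical; adapt it to whatever your product actually writes.

```python
import csv
from collections import defaultdict

# Rebuild per-document timelines from a hypothetical pipe-delimited trace export:
# timestamp|doc_id|queue|action|user
timelines = defaultdict(list)

with open("workflow_trace.txt", newline="") as f:
    for ts, doc_id, queue, action, user in csv.reader(f, delimiter="|"):
        timelines[doc_id].append((ts, queue, action, user))

for doc_id, steps in timelines.items():
    print(doc_id)
    for ts, queue, action, user in sorted(steps):
        print(f"  {ts}  {queue:<25} {action:<30} {user}")
```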

Decision Trees

These are made up of combinations of rules and actions. Rules usually have a true or false outcome. Building complex decisions can mean stacking many of these rules together to provide an eventual processing outcome.
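
A tiny sketch of how stacked true/false rules compose into a routing outcome; the rules and queue names are invented for the example.

```python
# Each rule is a true/false predicate on the document; stacking them yields a
# routing outcome. The rules and queue names are invented for illustration.
def has_mrn(doc):        return bool(doc.get("mrn"))
def is_signed(doc):      return doc.get("signed") is True
def is_ambulatory(doc):  return doc.get("site_type") == "AMB"

def route(doc):
    if not has_mrn(doc):
        return "HIM - Missing MRN"
    if not is_signed(doc):
        return "HIM - Awaiting Signature"
    return "AMB - Complete" if is_ambulatory(doc) else "ACUTE - Complete"

print(route({"mrn": "0000004521", "signed": True, "site_type": "AMB"}))
# AMB - Complete
```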

Visual representation

Whether in a tree format or a more graphical view, following the progression of a workflow visually can be very useful. With complex workflows, though, this view becomes too chaotic to read. Splitting up the visual representation into meaningful chunks may help figure things out.
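
One way to do that splitting, sketched with the networkx library; it assumes you have already dumped queue-to-queue transitions into an edge list, which not every workflow product will export for you.

```python
import networkx as nx

# Hypothetical queue-to-queue transitions dumped from a workflow export.
edges = [
    ("Intake", "HIM - Validation"),
    ("HIM - Validation", "HIM - Missing MRN"),
    ("HIM - Validation", "Coding"),
    ("Invoice Entry", "Invoice Approval"),
    ("Invoice Approval", "Payment"),
]

graph = nx.DiGraph(edges)

# Split the spider web into weakly connected chunks and review each one separately.
for i, nodes in enumerate(nx.weakly_connected_components(graph), start=1):
    chunk = graph.subgraph(nodes)
    print(f"Chunk {i}: {sorted(chunk.nodes())}")
    print(f"  transitions: {sorted(chunk.edges())}")
```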

Users, Groups, and Roles

Listing out groups and roles, and matching them to activities will show certain expectations of actions. Each group/role will have permissions and privileges which will provide a clue as to how the users interact with the processing of documents.

Document flows

Running documents through a workflow is obviously one of the most practical ways to decipher workflow behavior. As an admin user, keep in mind that you can do and see many parts of a workflow that Users can't.


Figuring out what a workflow does can be a very frustrating activity. Hopefully, some of the above techniques will make it more tolerable.

Tuesday, April 7, 2015

Backup the ECM Dump Truck

Enterprise Content Management systems that are primarily used to store files for retrieval can be thought of as a landfill. A content dump is a place where you put stuff that doesn't go anywhere else. The content might as well be in a file server because it is not being enhanced – back up the dump truck.

Financial companies like to connect their transaction-based applications to supporting documents that reside in the ECM. They purchase big-name ECM systems to get assurance that SOX regulations are covered. The only link to the content is a common ID. The content itself is isolated – back up the dump truck.

Pharma companies use ECMs for the FDA approval stamp. They can push controlled content through and be done with it – back up the dump truck.


In Healthcare EMRs, content is king, except when it costs too much to convert the paper into out-of-scope electronic forms – back it up.

Supposedly these dumps will be sifted and organized by BIG data. Like real landfills, after capping them you can tap into the natural gas...

Wednesday, March 18, 2015

Too Busy to Succeed

If you are trying to expand your ECM system to new areas of your company, here are some of the push back challenges you will experience:

We don’t have the time

Any department that doesn't have the time is probably spinning its wheels with inefficient processes. They are at the tipping point of productivity, but any change feels like too much to take on. Showing a prototype may only confuse them and make them more anxious. It takes a little gumption and risk-taking to push the ECM solution into new areas. You will recognize the "don't have time" folks by their techniques of not showing up for meetings or leaving before the meeting ends.

We don’t have the in-house expertise

This is a common circular reference, usually in conjunction with "we don't have a budget for this." More than likely, management didn't think through the finer details. The expertise was probably in-house and then left for greener pastures (I'm not cynical, really). So, the team is left outsourcing any technical work for applications that need to integrate with your ECM system.

Do you have a budget for this?

Most departments will not have the appropriate knowledge of the benefits of the end state of implementing an ECM solution. Whether it's scanning, workflow, electronic forms, etc., they will need a hands-on type of demo which serves as a reference point to understand what they will get out of the solution. You will be asked if you have a budget, but that's not the point. The business has to drive the requirements and budget, not the IT department. I've seen too many solutions fizzle because IT was trying too hard to promote what they thought the masses would like, AKA who's using SharePoint?

Our manual process works fine

This is a big hurdle to jump over. If faxing works, why spend $50k for scanning, licensing, etc.? In many cases, you have to expand the scope of a solution to include integration possibilities to be able to convince the business that their manual processing could be drastically streamlined and their expenses cut in half. Process automation for order tracking and billing pays for itself in a year – depending on the size of the implementation, but you get my point. When the cost savings are explained it’s usually a no brainer to get the solution implemented asap.




Monday, February 23, 2015

Ghosts in the ECM Machine

What would make knowledge workers refer to data corruption as “ghost” information? These ghosts are anything but scary. They float in and out of registration, accounting, EMR, and ECM systems. The team that introduces the ghosts is the one that busts them. As information changes, this team updates key metadata in one system, then the same in another, and so on. While making the changes to one system, other users are processing information in another, causing issues with data synchronization.

In a perfect world, a service hub would be managing the synchronization of data and would be considered the source of record.  The individual systems themselves would be equipped to validate against this source of record via another mechanism, such as a database table, to be sure to get the correct information, especially if the service hub is down.
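
A minimal sketch of that validation step: compare each downstream system's copy of the patient metadata against the source-of-record table and flag the mismatches. The system names, fields, and values are all hypothetical.

```python
# Compare each downstream system's copy of patient metadata against the
# source-of-record table. System names, fields, and values are hypothetical.
FIELDS = ("last_name", "first_name", "dob", "mrn")

source_of_record = {"mrn": "0000004521", "last_name": "DOE",
                    "first_name": "JANE", "dob": "1980-03-14"}

downstream = {
    "EMR": {"mrn": "0000004521", "last_name": "DOE",
            "first_name": "JANE", "dob": "1980-03-14"},
    "ECM": {"mrn": "0000004521", "last_name": "DOE-SMITH",   # stale value -> a "ghost"
            "first_name": "JANE", "dob": "1980-03-14"},
}

for system, record in downstream.items():
    mismatches = [f for f in FIELDS if record.get(f) != source_of_record[f]]
    status = "in sync" if not mismatches else f"ghost fields: {', '.join(mismatches)}"
    print(f"{system}: {status}")
```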

But we don't live in a perfect world, and there never seems to be enough money in the budget to do the right thing in IT. The blame for synchronization issues is not easy to pinpoint. It is easy to point at the interface team and say it's their fault; however, it's really the enterprise architecture that is to blame. Chances are good that this type of "ghost" occurred during a time when there was no one holding the architecture reins at the enterprise level.

Trying to push a fix to this ghost throughout the enterprise will prove to be challenging. At the point an issue is called a ghost, it has become institutionalized. This means it has been baked into the psyche of the knowledge workers. They take it for granted. “It will cost too much to fix it,” say some; “Good luck moving it up the priority list,” say others.

If there is a change management policy that managers hide behind to justify the status quo, try to find holes in the policy: Is it up-to-date? Are the original signers still working there? Policies should be reviewed every year or so: has it been? Is this how other systems, like the accounting system, operate?

Sunday, January 18, 2015

Going Beyond ECM Nostalgia

The days of big, expensive Enterprise Content Management solutions are over. These were the golden days in the late 90's and 2000's. Now, we are all aware that information and content are interchangeable: scanned images need index values, general ledgers sometimes need invoice images, EMRs need links to images, process automation needs it all.

Facebook shows us one way to mash all of this together, but it doesn't have the industry focus, legal regulations, or corporate learned behaviors. Every application has content and information management. ECM might still be on the bottom system layer, filling in the cracks, but its purpose has changed. It is either focused on specific industry solutions or acting as a change agent to sweep up the final paper processes. Whether it will be used in the long term doesn't really matter. What matters is that it still has an important role in IT.

We all content manage
We all move information
Our end goal is efficiency
Helping communication along the way
Yet truth struggles with greed