I know there is no such thing as 100% quality, but we need
to challenge ourselves to get there. To catch potential data quality issues,
you’ll need a set of validations built into a process that identifies and
fixes them. That set of validations can be automated within a workflow that
defines the rules and actions to take under certain conditions. Finding and
fixing issues is an ongoing task: it requires a balance of vigilance and
curiosity, as well as care. Issues usually come about because accountability
is spotty somewhere in the flow of information from the source to the
downstream systems.
Goal
Close any data integrity gaps by applying validation checks
and fixes.
How does this happen?
It can happen gradually and surreptitiously. Within a
company, unless there’s a strong information quality department, data
inconsistencies are inevitable because each department has different
priorities and validation requirements. All it takes is one form in one
application with lax data input requirements, or a missing validation check
during data input. And when software is upgraded or data is merged from one
system to another, we wrongly assume that the source data has been fully
vetted.
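The fix for a lax form is often just a small set of explicit checks at the point of entry. Here is a minimal sketch of what such checks might look like; the field names and rules are hypothetical, for illustration only.

```python
from datetime import datetime

def validate_account_input(record: dict) -> list[str]:
    """Return a list of validation errors for one manually entered record."""
    errors = []
    if not record.get("account_id", "").isdigit():
        errors.append("account_id must be numeric")
    try:
        datetime.strptime(record.get("open_date", ""), "%Y-%m-%d")
    except ValueError:
        errors.append("open_date must be a valid YYYY-MM-DD date")
    if not record.get("owner_name", "").strip():
        errors.append("owner_name is required")
    return errors

# Rejecting bad input at the form is far cheaper than chasing it downstream.
print(validate_account_input(
    {"account_id": "12a4", "open_date": "2024-13-01", "owner_name": " "}))
```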
Examples of how this happens
Downtime
Let’s say your system has an account lookup feature, but
that feature is down, so you have to enter the account information manually.
The feature is fixed in a few hours, but by then you’ve entered 100 accounts.
Does this data get validated later? If it doesn’t, a downstream application
could have quality issues.
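One way to close that gap is to tag records keyed in during the outage and re-validate them against the source of record once the lookup feature returns. A rough sketch, assuming a hypothetical lookup_account function and record fields:

```python
def revalidate_manual_entries(records, lookup_account):
    """Yield (record, reason) pairs for manual entries that disagree with the source of record."""
    for rec in records:
        if not rec.get("entered_manually"):
            continue  # only recheck records keyed in during the outage
        source = lookup_account(rec["account_id"])
        if source is None:
            yield rec, "account not found in source of record"
        elif source.get("owner_name") != rec.get("owner_name"):
            yield rec, "owner_name mismatch with source of record"
```

Each flagged record goes to a person (or to an issues queue, as described below) instead of flowing on downstream.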
Patient information merges and updates
Let’s say you work at a hospital. There’s a patient referral
with the same medical record number (MRN) as an existing patient, who also has
the same birth date. The referral is entered as the existing patient. This
error is caught later and the patient information is fixed, but did the
patient already get treated?
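A guard against this kind of mis-merge is to require more than two identifiers to agree before a referral is attached to an existing patient. A simplified sketch; real patient-matching logic is far stricter, and the fields here are assumptions:

```python
def is_safe_match(referral: dict, existing: dict) -> bool:
    """Treat two records as the same patient only when all key identifiers agree."""
    keys = ("mrn", "birth_date", "last_name", "first_name")
    return all(referral.get(k) == existing.get(k) for k in keys)

referral = {"mrn": "12345", "birth_date": "1980-04-02",
            "last_name": "Smith", "first_name": "Ana"}
existing = {"mrn": "12345", "birth_date": "1980-04-02",
            "last_name": "Jones", "first_name": "Bo"}
assert not is_safe_match(referral, existing)  # same MRN and birth date is not enough
```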
Towards Quality with Process Automation
By inserting workflows into the process, specific types of
data inconsistencies can be identified, investigated, and resolved. Below are
some general design components for building a quality validation workflow (a
sketch of the routing logic follows the list):
- Figure out how to funnel all data/content through the validation workflow. By document type or input source, the information can be collected and filtered as appropriate.
- Create the rules to route issues into buckets. Here are some typical queues:
  - Routing: this queue runs validation checks that compare metadata values against source-of-record values.
  - Issues queues: these correspond to the common issues that get identified.
  - Routing issues: this catch-all queue holds any document that doesn’t match the issues queues.
- As these buckets are worked through during their initial manual fixes, evaluate them for potential automated solutions and for upstream, root-cause data issues.
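Here is the sketch promised above: a minimal routing loop that funnels documents through validation, sends recognized problems to their issues queues, and drops anything unrecognized into the catch-all routing-issues queue. The queue names, rules, and fields are all hypothetical.

```python
from collections import defaultdict

# Example issue rules; each inspects a document's metadata. Fields are hypothetical.
def missing_account(doc):
    return not doc.get("account_id")

def bad_open_date(doc):
    return not doc.get("open_date")

ISSUE_RULES = [
    ("missing_account_queue", missing_account),
    ("bad_open_date_queue", bad_open_date),
]

def route(docs, passes_validation):
    """Bucket each document: clean, a known-issue queue, or the catch-all queue."""
    queues = defaultdict(list)
    for doc in docs:
        if passes_validation(doc):
            queues["clean"].append(doc)  # matches the source of record; continue downstream
            continue
        for queue_name, rule in ISSUE_RULES:
            if rule(doc):
                queues[queue_name].append(doc)
                break
        else:  # no issue rule matched: route for manual triage
            queues["routing_issues_queue"].append(doc)
    return queues
```

Watching which queues fill up fastest is also a practical way to spot the upstream, root-cause issues that are worth fixing at the source.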