
Account of an experience in automation

This post is in coordination with @kb2bkb and @Honer_CUT

The Task

In Scotland, the NHS receives more than 8 million claimed items (medicines, appliances, bandages, dietary foods, etc.) from pharmacies each month, all of which have to be reimbursed. At the time I joined the project, more than 500 trained data analysts were employed to “price” items from paper prescriptions.

The Change Programme in which I worked introduced electronic messaging for this data: XML messages with detailed prescription and dispensing information from GP and pharmacy systems. One of the goals of the Programme was to automate the pricing of items. When I left the programme (March 2014), around 83% of items received electronically were processed automatically, i.e. paid without any intervention or check by a person. Over the first few years of automation, 1 million automated items were checked manually; no false payment was found.
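Purely as an illustration of the kind of data involved (the element names below are invented for this sketch, not the real message schema), an electronic claim might be read along these lines:

# Illustration only: invented element names, not the actual NHS message schema.
import xml.etree.ElementTree as ET

claim_xml = """
<DispenseClaim>
  <Prescription id="RX-0001">
    <Item code="X123" description="Example tablet 10mg">
      <Quantity>28</Quantity>
    </Item>
  </Prescription>
</DispenseClaim>
"""

root = ET.fromstring(claim_xml)
for item in root.iter("Item"):
    print(item.get("code"), item.findtext("Quantity"))   # X123 28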

The Strategy

Prescription Data Analysts follow a set of rules from multiple sources:
• Clinical safety and dispensing policies
• Remuneration and reimbursement policies
• Data processing rules
Some of these rules are context sensitive.
The NHS maintains data dictionaries for all items that can be prescribed or dispensed, including attributes for rules and policies. (See SNOMED CT http://en.wikipedia.org/wiki/SNOMED_CT for the basic concept of such a data dictionary, then imagine 10 years of additional ideas bolted on.)
Tailored versions of these dictionaries are used by GPs, pharmacists and in NHS Payment IT systems, i.e. all data is exchanged using (mostly) known classifications and descriptions.
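To give a feel for what such a dictionary entry looks like, here is a deliberately simplified sketch (the attribute names are my own invention; the real dictionaries carry far more attributes per item):

# Hypothetical, heavily simplified dictionary entry, for illustration only.
from dataclasses import dataclass, field

@dataclass
class DictionaryItem:
    code: str                 # classification code, e.g. a SNOMED CT concept id
    description: str          # human-readable name of the item
    pack_size: int            # units in a standard dispensed pack
    basic_price_pence: int    # reimbursement price for one pack
    policy_flags: set = field(default_factory=set)   # markers consumed by pricing rules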

Calculating payments for items appeared to be a Complicated domain problem (in Cynefin terms). It takes a data analyst around 7 years to become an expert in the domain. In such a career the analyst will have seen between 100,000 and 150,000 distinct items. Even then, analysts have very few stories of items where payment could not be calculated correctly by following the known rules.

Before I joined the project, it was decided to use a rule engine to achieve automation. During my work, we built an application optimised to exercise a decision tree using the message data and an enhanced version of the data dictionaries, then continually added support for new scenarios. We also built applications needed to support the maintenance and delivery of said enhancements to the dictionaries.
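A minimal sketch of the idea (my own illustration with invented rules and data, not the application we actually built): walk a decision tree of checks driven by the message data and the dictionary entry, and fall back to manual pricing whenever a branch cannot be resolved automatically.

# Illustrative decision-tree walk; rules and data are invented for this sketch.
def price_item(item, dictionary):
    # Each 'if' is a branch of the tree; returning None drops the item out to an analyst.
    entry = dictionary.get(item["code"])
    if entry is None:
        return None                                  # unknown item
    if item.get("quantity", 0) <= 0:
        return None                                  # implausible claim
    packs = item["quantity"] / entry["pack_size"]
    if not packs.is_integer():
        return None                                  # part packs are context sensitive
    return int(packs) * entry["basic_price_pence"]

dictionary = {"X123": {"pack_size": 28, "basic_price_pence": 250}}
print(price_item({"code": "X123", "quantity": 56}, dictionary))   # 500 pence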

Limitations

[Graph: dispensing frequency per item]

The graph illustrates a key characteristic of the data:
A very small number of items are dispensed at a very high frequency. Early versions of the rule engine targeted those high-frequency items, which enabled us to stay well ahead of expectations for quite some time. But as we moved into the part below the “knee”, we realised there would come a point at which improvements could only be made by significantly increasing the manual work of maintaining those enhancements in the dictionaries, i.e. to reduce workload in one area, you increase it in another. It also seemed evident that, at some point, the maintenance work would be greater than the work saved.

At the time I left, we still received a lot of claims in paper form only; automation of the work overall was 65%. This will be improved by making electronic submission mandatory.

Reflections

There are tens of thousands of “items” which are dispensed fewer than 10 times each month. The data for many of these items changes every few months. Maintaining the metadata for some of these cases is more work than the actual pricing of them by an analyst. Does that inversion of the proportionality between automation and maintenance effort in the long tail hint at approaching complexity? Or is this a nice example of the value of an expert in a complicated domain when it comes to automation?

The initial assumption at the start of the Programme was that automation would eventually be well over 95%. At the time I left, they projected a possible 85%.

Given that there will always be data analysts, could we have taken a different path to automation? I once toyed with the thought of a “brute force” engine. If we gave the first occurrence of each data scenario for each item to an analyst at the beginning of a month (rules change each month) and then simply replicated the outcome for each repeating case of that scenario, i.e. each month the analysts train the automation engine one scenario at a time, what level of automation could be achieved?
I could only run a couple of crude experiments with limited data, but I believe this approach would have worked better.
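Roughly what I have in mind, as a sketch with invented structure: within a month, the first claim for each (item, scenario) pair goes to an analyst, and every repeat simply reuses the recorded outcome.

# Sketch of the "brute force" idea; names and structure invented for illustration.
def price_month(claims, ask_analyst):
    # claims: list of dicts with "code" and "scenario"; ask_analyst prices one claim by hand.
    outcomes = {}                                  # (item code, scenario) -> payment
    automated = 0
    for claim in claims:
        key = (claim["code"], claim["scenario"])
        if key not in outcomes:
            outcomes[key] = ask_analyst(claim)     # first occurrence: manual, but recorded
        else:
            automated += 1                         # repeat: replicate the recorded outcome
        claim["payment"] = outcomes[key]
    return automated / len(claims) if claims else 0.0   # achieved automation rate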

