Crude data is just crude


Just a quick follow-up to the first post regarding education and quality improvement. The emphasis needs to be on the search for the right metrics when it comes to quality improvement in general, not just in clinical education (the two should be inextricable). What we want is to decrease potentially preventable adverse events, and to meaningfully know what these are, we need quality audit tools. Crude data is rarely useful.

Number of falls per year (crude and rubbish);

Number of falls per 1000 occupied bed days (better, but still leaves heaps to chance);

Number of potentially preventable falls per 1000 occupied bed days (pretty damn good and can be compared from one institution to another).
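To make the difference between these three metrics concrete, here is a minimal sketch of the arithmetic. All the figures are hypothetical, purely for illustration – the point is that the crude count stays the same while the derived rates tell increasingly useful stories:

```python
# Hypothetical annual figures for one institution (illustration only).
falls_total = 120            # all recorded falls in the year (crude count)
occupied_bed_days = 85_000   # total occupied bed days for the year
preventable_falls = 45       # falls judged potentially preventable on post-fall audit

# Crude count: tells you almost nothing without a denominator.
crude = falls_total

# Rate per 1000 occupied bed days: adjusts for hospital activity.
falls_rate = falls_total / occupied_bed_days * 1000

# Restricting the numerator to potentially preventable falls (from the
# post-fall audit) gives a figure that can be compared between institutions.
preventable_rate = preventable_falls / occupied_bed_days * 1000

print(f"All falls per 1000 OBD: {falls_rate:.2f}")
print(f"Potentially preventable falls per 1000 OBD: {preventable_rate:.2f}")
```

Note that the third metric only exists if the post-fall audit is actually completed every time – the denominator is easy, the numerator is the hard-won part.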


The problem is, this form of data is more difficult to derive, in that we can’t just go to clinical records and filter for “fall”. We need to actively complete a post-fall audit every time (a process that is in its infancy at my hospital). If we are interested in redesign, these post-event audits can form the dataset to change mindsets (as Hans Rosling put it: “Let my dataset change your mindset.” http://www.youtube.com/watch?v=KVhWqwnZ1eM).

Another area, even more convoluted, is Medical Emergency Team (MET) data. Most of you who have been involved with establishing or maintaining a MET within your hospital will know that detractors often cite the deficits of the MERIT study (which, by its authors’ own admission, had a flawed methodology – hindsight and further research provided enlightenment). The reason I comment on MET is that initial thinking ran along the lines of: MET = decrease in cardiac arrests; early warning and recognition system (EWARS) = decrease in METs = further decrease in cardiac arrests… simple algorithm, right? Nope. This was an initial key performance indicator attached to the trial of an EWARS on a ward in my hospital. Interestingly, the thin body of literature at the time suggested the converse effect: EWARS = trigger of more METs. With this quandary in place, the ACHS was still wrestling with the metric of MET activations per 1000 occupied bed days – is a higher or a lower number the better indicator?


Jones, Bellomo and DeVita (2009), Effectiveness of the Medical Emergency Team: the importance of dose (http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2784340/), went some way toward developing a more-is-better philosophy, with indications that a dose–effect curve may exist for METs and their reduction of in-hospital cardiac arrests. Another layer of complexity when looking at MET performance is that crude cardiac arrest data is of little use. What constitutes a cardiac arrest? Isn’t that how most people eventually die? For the purpose of data collection, many hospitals define it by an intervention such as CPR being attended. At our hospital, over 30 per cent of METs involve interventions for appropriate end of life care (consistent with the literature on this topic). So again, a post cardiac arrest audit that sorts “arrests” along a continuum – from should never have been harassed with CPR to could definitely have been prevented – is incredibly pertinent. If we are charging MET teams with the weighty expectation of reducing cardiac arrests (as defined by resuscitation attempts), it seems only fair that we filter out those poor souls who shouldn’t have had their chest compressed at all.
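That filtering step can be sketched in a few lines. The records and the `classification` field below are purely hypothetical – they stand in for whatever categories a post-arrest audit at your institution might assign – but they show how the crude “CPR was attended” count and the audited count diverge:

```python
# Hypothetical post-arrest audit records; the 'classification' field is an
# assumed label a post-event audit might assign to each case.
audits = [
    {"id": 1, "cpr_given": True,  "classification": "potentially preventable"},
    {"id": 2, "cpr_given": True,  "classification": "end of life"},
    {"id": 3, "cpr_given": True,  "classification": "not preventable"},
    {"id": 4, "cpr_given": False, "classification": "end of life"},
]

# Crude definition of cardiac arrest: any event where CPR was attended.
crude_arrests = [a for a in audits if a["cpr_given"]]

# Audited figure: exclude those who should never have had
# resuscitation attempted at all.
adjusted_arrests = [a for a in crude_arrests
                    if a["classification"] != "end of life"]

print(f"Crude arrests (CPR attended): {len(crude_arrests)}")
print(f"Arrests after excluding end-of-life cases: {len(adjusted_arrests)}")
```

The gap between the two numbers is exactly the 30-plus per cent of end-of-life METs described above, and it is only visible because the audit classified every event.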

Food for thought. Please let me know what your hospital does in the challenges around these murky pools of data.
