Which Comes First – the Indicator or the Tool?
by Dawn Wood, Action Planning
Here’s a conundrum I’ve been coming across recently, fuelled by rising awareness of off-the-shelf monitoring tools. It is all about setting indicators and identifying data-gathering tools – and the occasions when these two merge!
After running courses on Demonstrating Outcomes for several years, my impression is that people are now much more comfortable with the concept of outcomes versus outputs than they were a few years ago. However, the interpretation and usage of indicators is still quite variable.
My approach has always been that once the outcomes are articulated, clients need to consider what indicators will tell them whether those outcomes are being achieved. After that we look for tools that can collect the data. For example, if the outcome is reduced social isolation, a range of indicators might be appropriate, from whether participants have someone to talk to about day-to-day problems, to the number of occasions they attend community groups and activities in a month.
Once we know what indicator we want to measure, we can select the most appropriate tool to gather the data in ways that provide both quantitative information and qualitative insights – in this case around participation in and enjoyment of the community group attended, not just physically being present!
However, there are many off-the-shelf tools where others have already done the work for us. They come with a ready-made set of indicators, which can save clients the time and trouble of producing their own. So I’m finding that some clients are fast-forwarding the process – picking a psychological scale for loneliness, or adopting one of the Outcomes Stars, and getting started with less concern for the specific indicators involved.
So, my question is, does it matter?
I can see that off-the-shelf tools are a really useful source of validated research methods, which can enhance the quality and authority of the evidence presented. They also offer greater potential for benchmarking. They are a valuable option.
On the other hand, they offer a most tempting shortcut: expressing indicators solely in terms of the tool – such as the number of people increasing their scores on a pre-defined scale, or moving forward two stages on an Outcomes Star ladder.
I think there is a danger that this will narrow our thinking – and our horizons – if we simply focus organisations on achieving and reporting progress as defined by a tool.
Some tools may measure indicators that the client is not specifically focused on addressing through their work. The tools generally focus on soft outcomes and self-assessment, so clients may overlook the benefit of setting indicators around the behaviour change (and third-party reporting) that results from soft outcomes. There are important conversations to be had here – conversations that might be rushed over in a “quick fix” approach. But time is money, and talking rather than doing can feel like a luxury.
So, where are you on this? I’m still looking at ways of building appropriate validated tools into my clients’ monitoring frameworks at the right stage, rather than constructing the framework around them. I feel that, as with most things in life, an over-dependency on one approach, however good, is never as healthy as being part of a mixed toolkit.
Dawn leads on impact consultancy for Action Planning, a full-service consultancy working to support not-for-profit organisations to improve their organisational effectiveness.