Measuring Impact in a Complex Organisation
Eleri Morgan Thomas – Director of Service Impact, Mission Australia
Imagine you are a new Director or trustee on the Board of a very large charity and passionate about wanting to make a difference. You were attracted to this charity in the first place because you liked its mission of transformational change – a hand-up, not a hand-out.
But you might be sitting there thinking: Do I really know whether we are achieving the mission? Are we investing in the right programmes? Are we really changing lives? What reports should I be requesting from management that can help me understand?
That’s the challenge confronting Directors and trustees of most charities and perhaps more so for large organisations providing multiple programmes. There is often solid financial information but the indicators of service effectiveness just aren’t there.
Most of the traditional indicators of success available to the for-profit world don’t work for not-for-profits. Market share – not helpful when your customer (the funder) is different to your client and demand for services far outstrips capacity to deliver. Profit and return on investment – can often be derived from the financials, but don’t tell you about achievement of mission.
I’m in the newly created role of Director of Service Impact at Mission Australia, one of Australia’s largest charities. We are one of a handful of charities that are truly national and work in every State and Territory of this vast country. In 2010-11 our 550 community and employment services assisted more than 300,000 people by providing a hand up, a way forward and hope for the future.
We work towards creating a fairer Australia by advocating for people in need and helping them to get back on their feet. We strengthen families, empower youth, strive to solve homelessness and provide employment solutions. Recently we have fostered the growth of two new service arms in early learning /childcare and social housing.
Led by our Chairman and the CEO, we have just embarked on a new initiative to measure and report on impact across the whole organisation.
We have some of the building blocks already in place. For instance, we have mapped the outcomes and impacts we expect to achieve in four out of five ‘pathways’.
And at an individual programme level we have done some great evaluations. However, we still have a long way to go before we can measure and report on impact, not least because we currently collect data to meet our various internal and external reporting requirements, and most of that is on inputs and outputs. For all the talk of outcome measurement in Australia, most funders, particularly government, can’t see beyond outputs and don’t want to fund outcome measurement.
We have four main client data sets that come from our service delivery. I am about to embark on a process to find out whether they use the same data definitions; what sort of data is collected; whether it is an input, output or outcome; and which data fields are mandatory and optional. I want to find the commonalities and the differences.
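One way to sketch that audit is as a comparison of field catalogues. The data-set names, fields and categories below are purely illustrative, not Mission Australia’s real schemas; the point is simply that, once each data set’s definitions are captured in a common shape, the commonalities, differences and inconsistencies fall out of a few set operations:

```python
"""Hypothetical sketch: auditing field definitions across client data sets.
All schema names and fields are invented for illustration."""

# Each data set maps a field name to (category, is_mandatory).
# Categories follow the input / output / outcome distinction.
datasets = {
    "employment": {
        "client_id":     ("input", True),
        "date_of_birth": ("input", True),
        "sessions_held": ("output", True),
        "job_placed":    ("outcome", False),
    },
    "housing": {
        "client_id":         ("input", True),
        "date_of_birth":     ("input", False),
        "nights_housed":     ("output", True),
        "tenancy_sustained": ("outcome", False),
    },
}

# Fields present in every data set: candidates for shared definitions.
common = set.intersection(*(set(d) for d in datasets.values()))

# Fields found in only one data set: the differences to reconcile.
unique = {
    name: set(d) - set().union(*(set(o) for n, o in datasets.items() if n != name))
    for name, d in datasets.items()
}

# Shared fields whose mandatory/optional status disagrees between data sets.
conflicts = [
    f for f in sorted(common)
    if len({d[f][1] for d in datasets.values()}) > 1
]

print(sorted(common))   # shared field names
print(unique)           # per-data-set differences
print(conflicts)        # shared fields with inconsistent mandatory status
```

In a real audit the `datasets` dictionary would be built from each system’s data dictionary, and the same pattern extends to comparing data types or category assignments, not just mandatory status.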
It is important that we are realistic about what we can collect and report on, now and in the future, and that we are clear on why we are reporting and to whom. Unless we are crystal clear about the purpose of data and reporting, it will be a wasted exercise. The rigour required of an evaluation that will justify scaling up a programme, or persuade government to invest, is at a higher level than the performance information required to adjust an existing programme to increase its effectiveness.
The Michael Project is a good example of a rigorous evaluation. It was funded by a very generous philanthropist who wanted to test a new model of ending men’s homelessness. The resulting evaluation showed what worked – and what didn’t – and a standard cost-benefit analysis demonstrated net savings of AUD 3,600 per year from reduced hospital and justice costs.
While I was writing this, I was fortunate enough to come across an interview by Bridgespan’s Matthew Forti with Jodi Nelson, Head of Measurement and Evaluation for the Bill and Melinda Gates Foundation. The Foundation faces issues similar to Mission Australia’s: size, diversity, complexity and different uses of data.
Jodi Nelson says: “The biggest learning for me is that my job is more about organizational change than it is about being an evaluation expert.” This resonates with me a great deal.
To truly measure impact we need data, and we need each of our service delivery divisions to collect that data. We want them to use it to improve their service delivery and to make decisions about what to scale up and what we need less of. We also want them to report to the Board so that the Directors can better understand what we do and where resources should be directed.
But that won’t happen without quite significant change in the organisation. And I’m just recognising how big that challenge will be.
Fortunately, this is something Mission Australia’s Board of Directors, the CEO and I are all passionate about. From everything I read, leadership from the very top of the organisation must drive that change.