A Clash of Mindsets: Finding a middle way between different impact methodologies. Part 2
by Neil Reeder, Lead Researcher & Director, LSE Cities & Head and Heart Economics
There is, amongst many attempting to grow social enterprises, a backlash against the perceived ‘gold standard’ of randomized controlled trials (RCTs). Randomized approaches are often seen as expensive and/or as producing results that relate to particular contexts (for clients’ conditions, community networks, and alternative forms of support) and so are not transferable. Nancy Cartwright’s 2012 book Evidence-based policy: a practical guide to doing it better provides a formidable list of ways that naïve assumptions of transferability can go wrong.
It’s worth reminding ourselves, however, that the gains from good pieces of analysis can be remarkable. For instance, the UK’s Nudge Unit, drawing on findings from behavioural psychology, has shown that tweaks to the content and timing of letters from HM Revenue and Customs can deliver substantial additional tax payments. More generally, Michael Kremer, in an MIT J-PAL paper, writes of a hope that: “randomized evaluations might play the same transformative role in social policy during the 21st Century that they played in medicine during the 20th.”
The 2013 LSE/EIB Institute working paper Measuring impact and non-financial return in impact investing: a critical overview of concepts and practice, by Andrea Colantonio and me, suggests that there are at least two different mind-sets on measurement practice.
One group, ‘system builders’, aims to produce a system that is as objective, robust and quantified as possible. They look to expert-to-expert interactions, designed to build up a body of knowledge, and expert-to-audience communications, designed to disseminate it. This perspective is making advances even in agendas that have been seen as ‘too difficult’. Intangible factors such as clients’ self-confidence, ability to form constructive relationships, and sense that they can at least to some extent be ‘masters of their destiny’ can be measured using various tools (such as the Outcomes Star), and linked into academic analyses of prospects for future success.
By contrast, a second group, ‘case by case’ practitioners, aims to produce assessments that inform stakeholders of the full social value, with a focus on the ‘here and now’ and a more participative perspective on what counts and how it should be counted. Their approach rests on a deep appreciation of the complexities of a given situation, often gained through qualitative rather than quantitative techniques. For instance, much ethnographic work on the assessment of offending behaviour suggests an interaction between opportunity, levels of fear and hope, peer relations, and a changeable sense of identity and core values. This calls for a response tailored to the individual – and yet case by case practitioners would argue that the system-building perspective has biased policymakers towards simplistic, readily repeatable solutions.
Getting the best out of the two different mindsets will not be straightforward. System builders should learn as much about external context as about internal rigour in their methodologies; case by case practitioners can learn more from others’ evidence; and both camps can do more to hold to account those that are weak in attribution – in finding out what really did ‘make a difference’. Evidence from a forthcoming short paper co-written by me, setting out interviewee opinions on the state of impact measurement practice, suggests that this will be difficult, but achievable.
A. Ebrahim (2013) Let’s be realistic about measuring impact. Harvard Business Review Blog March 13 2013.
G. Mulgan (2010) Measuring social value. Stanford Social Innovation Review Summer 2010.
N. Reeder & A. Colantonio (2013) Measuring impact and non-financial returns in impact investing: a critical overview of concepts and practice. LSE working paper.
Neil Reeder is a researcher on social investment at the London School of Economics and director of Head and Heart Economics. An LSE graduate, his previous posts include head of analysis for the Gershon Review of Efficiency, and programme leader on public service innovation at the Young Foundation.
This article was first published in Beyond Measurement: Reflections on SIAA annual conference 2013. This publication provides reflections on the workshops, hotspots and interactive activities that took place at the SIAA annual conference on 10th December 2013, and highlights the discussions and ideas that came out of the day.