Why Measuring Impact Remains an Elusive Goal
November 15, 2011
(Larry McGill is vice president for research at the Foundation Center. This post originally appeared on the Community page of our Tools and Resources for Assessing Social Impact portal.)
In an op-ed piece in the Chronicle of Philanthropy earlier this year, William Schambra asserted that "measurement is a futile way to approach grantmaking." He further argued that foundations' track record when it comes to outcome and impact measurement has been unimpressive over the years, and that the costs and burdens such measurement places on both foundations and nonprofit organizations heavily outweigh any benefits gained. And he pointed out that strategies for measuring impact keep changing, as unsuccessful methodologies are discarded in favor of new ones which he believes are doomed to fail as well.
I think Schambra makes some good points. And while I'm not as pessimistic as he is, I would add a few points of my own.
1. Many organizations do not yet grasp the all-important distinction between impact assessment and performance measurement. What foundations and nonprofits can reasonably hope to measure are programmatic outcomes among their direct clients. Social impact, on the other hand, takes place at a collective level that extends far beyond the reach of any one foundation or nonprofit organization. It takes a village to make (and measure) collective impact. (One of the best treatments of this critical distinction between impact assessment and performance measurement is Mark Friedman's Trying Hard Is Not Good Enough.)
2. Impact assessment is not the end goal of foundation and nonprofit work. Making desired change happen is the end goal. We assess "impact" only in order to see whether we have succeeded in making desired change happen and to learn what we need to adjust and improve. Formal impact assessment may not even be necessary in trying to figure out whether change has happened. Sometimes, "you can just tell." And if you can't, maybe impact assessment is trying to measure something too subtle to really matter all that much.
3. How far down the field do you set the goal posts? Over what time period should you measure change? More importantly, what if you've set your goal posts on the wrong field? Maybe change is happening in ways you didn't anticipate.
4. Social investment does not take place in a controlled laboratory setting. Political change is out of your control. Economic change is out of your control. Organizational turnover is out of your control. Any of these can derail promising programs and undermine success. How do you assess impact under such conditions?
5. Measurement error creeps in everywhere. Theories of change may be inadequately specified; operationalizing concepts into measurable metrics always involves compromise; data collection is hampered by unclear procedures or insufficiently trained or motivated data collectors. It is entirely possible that the size of any impact to be measured may be smaller than the sum of all these measurement errors!
6. "Measurement" is a highly mediated form of communication (i.e., every measure is only a proxy). Information is always lost when qualitative "reality" is funneled into measurement categories, no matter how carefully defined. To really learn whether something is working or not, there is no substitute, at some level, for direct observation and face-to-face communication. Metrics are filters, and they may be filtering out what you really need to know.
7. Any specific situation is, at some level, irreducibly unique. The best measurement strategy will always be a home-grown methodology that is grounded in an experiential understanding of the present situation. There are over a hundred and fifty measurement tools and strategies in the TRASI database, a number that continues to grow. To what extent can they be applied in situations outside the ones in which they were developed? Are they, at best, only examples to learn from in designing new customized measurement strategies?
We need a thorough discussion of these measurement challenges in the field of philanthropy before we can talk meaningfully about the possibility of "social impact assessment." I will be raising these points during a panel session this Friday (11/18) at the meeting of the Association for Research on Nonprofit Organizations and Voluntary Action (ARNOVA), and I invite your thoughts on how we can steer the impact assessment conversation in fruitful directions.
-- Larry McGill