
Why Measuring Impact Remains an Elusive Goal

November 15, 2011

(Larry McGill is vice president for research at the Foundation Center. This post originally appeared on the Community page of our Tools and Resources for Assessing Social Impact portal.)

Where do you fall on the spectrum between skeptic and champion when it comes to assessing the impact of foundation work?

In an op-ed piece in the Chronicle of Philanthropy earlier this year, William Schambra asserted that "measurement is a futile way to approach grantmaking." He further argued that foundations' track record when it comes to outcome and impact measurement has been unimpressive over the years, and that the costs and burdens such measurement places on both foundations and nonprofit organizations heavily outweigh any benefits gained. And he pointed out that strategies for measuring impact keep changing, as unsuccessful methodologies are discarded in favor of new ones which he believes are doomed to fail as well.

I think Schambra makes some good points. And while I'm not as pessimistic as he is, I would add a few of my own.

1. Many organizations do not yet grasp the all-important distinction between impact assessment and performance measurement. What foundations and nonprofits can reasonably hope to measure are programmatic outcomes among their direct clients. Social impact, on the other hand, takes place at a collective level that extends far beyond the reach of any one foundation or nonprofit organization. It takes a village to make (and measure) collective impact. (One of the best treatments of this critical distinction between impact assessment and performance measurement is Mark Friedman's Trying Hard Is Not Good Enough.)

2. Impact assessment is not the end goal of foundation and nonprofit work. Making desired change happen is the end goal. We assess "impact" only in order to see whether we have succeeded in making desired change happen and to learn what we need to adjust and improve. Formal impact assessment may not even be necessary in trying to figure out whether change has happened. Sometimes, "you can just tell." And if you can't, maybe impact assessment is trying to measure something too subtle to really matter all that much.

3. How far down the field do you set the goal posts? Over what time period should you measure change? More importantly, what if you've set your goal posts on the wrong field? Maybe change is happening in ways you didn't anticipate.

4. Social investment does not take place in a controlled laboratory setting. Political change is out of your control. Economic change is out of your control. Organization turnover is out of your control. Any of these can derail promising programs and undermine success. How do you assess impact under such conditions?

5. Measurement error creeps in everywhere. Theories of change may be inadequately specified; operationalizing concepts into measurable metrics always involves compromise; data collection is hampered by unclear procedures or insufficiently trained or motivated data collectors. It is entirely possible that the size of any impact to be measured may be smaller than the sum of all these measurement errors!

6. "Measurement" is a highly mediated form of communication (i.e., every measure is only a proxy). Information is always lost when qualitative "reality" is funneled into measurement categories, no matter how carefully defined. To really learn whether something is working or not, there is no substitute, at some level, for direct observation and face-to-face communication. Metrics are filters, and they may be filtering out what you really need to know.

7. Any specific situation is, at some level, irreducibly unique. The best measurement strategy will always be a home-grown methodology that is grounded in an experiential understanding of the present situation. There are over a hundred and fifty measurement tools and strategies in the TRASI database, a number that continues to grow. To what extent can they be applied in situations outside the ones in which they were developed? Are they, at best, only examples to learn from in designing new customized measurement strategies?

We need to have a thorough discussion about the measurement challenges in the field of philanthropy in order to be able to talk meaningfully about the possibility of "social impact assessment." I will be raising these points during a panel session this Friday (11/18) at the upcoming meeting of the Association for Research on Nonprofit Organizations and Voluntary Action (ARNOVA) and invite your thoughts on how we can steer the impact assessment conversation in fruitful directions.

-- Larry McGill





Posted by Thegoodcounsel  |   November 15, 2011 at 11:16 PM

These are really good points. I see organizations struggling with this all the time. If only there were a cut-and-dried formula to measure effectiveness and impact.

Posted by Cynthia Gibson  |   November 16, 2011 at 08:15 AM

Hi Larry:

Thanks for this great post. Since you asked for comments, you might check out a related piece in Nonprofit Quarterly, "Innovation. Impact. Enough Talk. More Do." The last part is about getting serious about at least attempting to measure the "impact" everyone keeps talking about. http://www.nonprofitquarterly.org/index.php?option=com_content&view=article&id=17121
Would love your thoughts!

Posted by Bliss_chris  |   November 16, 2011 at 09:40 AM

Fantastic post, couldn't agree more. It's refreshing to find such a succinct argument about the (limited) utility of impact assessment, and I think your approach is balanced in both directions. It's a powerful tool with serious limitations.

Posted by Bradford Smith  |   November 16, 2011 at 10:34 AM

I am hugely biased because Larry and I work together, but I really appreciate the approach taken in this blog piece. My favorite line is: "Measurement is a highly mediated form of communication." And the distinction between performance measurement and impact assessment is critical. Bill Schambra has made a career out of overstating his positions in order to provoke needed conversations, but I would never go so far as to say that performance measurement is futile. In my experience, the idea that real philanthropy is based solely on passion and none of this measurement mumbo-jumbo is a myth. Philanthropic intuition is based on accumulated experience and proxies that donors use -- sometimes explicitly but often implicitly -- in judging which organizations are performing well and making a difference. One thing is for sure, however. Whether one is guided by intuition or scientific measurement, grantmakers are never fully exempted from the leap of faith that is essential to our craft.

Posted by Cynthia Bailie  |   November 16, 2011 at 02:56 PM

I agree that "the best measurement strategy will always be a home-grown methodology that is grounded in an experiential understanding of the present situation." That being said, I worry that organizations have the equivalent of writer's block when trying to get started with that homegrown approach. And, that, for a number of reasons, they are reluctant to share their homegrown assessment tools and learnings. TRASI to the rescue, where organizations can find examples to learn from, share, grow, and use for their own purposes.

I am glad to see Cynthia Gibson weighing in here too, having recently read her Nonprofit Quarterly piece, "Innovation. Impact. Enough Talk. More Do."

On the ground, as they say, sometimes a "just do it" approach is good. How else will we learn what works and what doesn't? What are we waiting for? Debate and analysis shouldn't become an excuse for doing nothing.

Posted by Svtgroup  |   November 17, 2011 at 03:21 PM

This is a great summary of the challenges. Thanks for framing this discussion so well.

Recently I heard someone say that measurement issues are often really issues of finding a buyer for the impact. If you can find a buyer for the impact, measurement of that impact will take care of itself, meaning the buyer and "seller" agree on a "good enough" way to verify that the impact has taken place. I would say that is one of the key issues: having an investor or a buyer who values the impact in question.

As I see it the other system conditions that contribute to making it valuable enough for organizations to crack the impact measurement nut are Cost, Know-how, Technology, and Visibility.

As the cost of getting the information goes down (a function of technology and human resources), and as each organization's impact becomes more visible to the public, measuring impact becomes more worthwhile: there is more pressure from the public to do well and compare well to other groups, and more of a reward for doing well.

One cost issue is the fragmentation of knowledge, a problem that is just barely beginning to be tackled thanks to databases of metrics like the SROI Network's VOIS and IRIS. VOIS is a database of impact metrics, with explanations of how each was arrived at and how to go about collecting the needed data. Over time, such databases will build some collective intelligence about the nuances of measuring significant and important changes for stakeholders within the varied contexts philanthropy works in. These datasets will gradually also make it easier and cheaper to measure impact, while narrowing the band within which judgment calls must be made (and they will always need to be made, just as they are still regularly made in financial accounting).

Another part of the problem is the lack of human resources trained in the skill of impact management, i.e., tracking what's going on in a manner designed to inform the ongoing strategy and practical decision making of the organizations making the work happen (as opposed to academics trying to ascertain with certainty what change resulted from intervention X, or evaluators hired to check a governance box stating that an evaluation was in fact done). This, too, is gradually changing. Many business schools are beginning to offer at least a lecture, if not a whole course, on measuring social and environmental impact; a number of consultancies offer practitioner workshops on the topic; and the SROI Network has built the skills of over 800 people and counting worldwide in its methodology.

I would add that whether or not philanthropy finds it worthwhile to measure social and environmental impact, business increasingly does. Sooner or later this phenomenon will create pressure on nonprofits and foundations to take up the issue again, because ultimately, I'll wager, the public will say it's simply not good enough for nonprofits and foundations to claim "it's too hard."

Sara Olsen

Posted by Claire Rosenzweig, CEO BBB of Metro NY  |   December 01, 2011 at 05:33 PM

As an added resource for examining impact, take a look at www.chartingimpact.org, a website that poses a series of questions to help an organization examine its own impact. It was developed through a strategic alliance among Independent Sector, Guidestar, and the BBB.

Posted by Josh Joseph  |   January 05, 2012 at 11:55 AM


Sorry to come late to your good post, which will surely be as relevant this year as last! Schambra seems to make at least two assumptions about measurement that offer entry points for many of the follow-ups you pose. He discusses measurement mainly as a tool for reporting rather than learning and as a way to assess end results (impact) rather than program progress.

When program successes and impacts occur, they usually happen in stages rather than all at once. This is especially true for complex problems - the ones nonprofits tend to take on. So it can help to use measurement as a way to get regular feedback, to learn what's working, and to inform program decisions. Done right, this approach has real relevance for program teams. By contrast, results from assessments done at the end of a grant often come too late to matter, as Schambra notes. This sets up an all-or-nothing scenario for grantees, participants, and funders: programs either work or they fail. Not an ideal way to leverage investments or seed future successes.

Your questions about where to set the goal posts, how to measure unanticipated change and how to deal with conditions outside a program’s control seem to support the need for regular feedback. Goalposts may be moved if compelling feedback about programs comes early enough. Identifying unanticipated effects early on (positive or negative) may give program teams new insights that they can use to tweak impact models in real time. And even if an intervention is overtaken by external events, stakeholder input may still help grantees to better understand, explain and document why a program isn’t able to deliver on its promise.

Also agree with your good points on the limits of metrics to shed light on why something is working or not. Measures tend to tell us what’s happening, but not why. Periodic conversations with program participants and staff can help fill in gaps about stakeholder experiences in a program and the potential effects of other contextual/environmental factors.

Where might this lead? At the end of his post, Schambra argues for a simple and standardized approach to evaluation and reporting. A starting point for grantees and funders might be greater clarity on what program progress looks like – both in terms of impact models and in specifying meaningful interim measures. It’s a missing link in many proposals.

Interim reports could then better address progress toward mid-range goals, emphasize lessons learned in mid-course and identify program changes, as appropriate. I think this would help reinforce the key role of measurement in building knowledge and driving program decisions, not just in reporting results.
