
Why Measuring Impact Remains an Elusive Goal (cont.)

December 03, 2011

Driven in part by growing social needs and a difficult funding environment, impact assessment has emerged as a potential answer to the problem of inefficient resource allocation in the nonprofit sector. Not everyone sees it that way, of course. Earlier this year, William Schambra, director of the Bradley Center for Philanthropy and Civic Renewal at the Hudson Institute, penned an op-ed for the Chronicle of Philanthropy in which he argued that measurement in philanthropic work was neither new nor an effective way to approach grantmaking.

Schambra's piece inspired Larry McGill, the Foundation Center's vice president of research, to post a rebuttal on the center's TRASI (Tools and Resources for Assessing Social Impact) site, which we reposted to PhilanTopic, where it became the most popular post on the blog in November and generated some terrific comments.

Larry's post also generated some great comments on the TRASI site, including one from John Colburn, vice president for operations at the Ford Foundation (and a Foundation Center board member). With John's permission, we've reprinted his comment in its entirety below:

Thanks to both Larry and Brad [Smith, president of the Foundation Center] for starting a conversation on this and to William Schambra for getting the ball rolling in the first place.

First, a general thought: There is a sense in Schambra's piece that because impact assessment is hard and has yielded such limited results, we should simply give up on it. Yet, if one studies the emergence of other disciplines, it is clear that the advancement of knowledge and understanding occurs because a handful of practitioners persevered against the broader culture of practice and what can reasonably be "known" in order to elicit whole new understandings.

Funders and grantees, then, need to "keep at it," but in deference to Schambra, we should try to identify and avoid repeating the same mistakes that have yielded such limited results to date. Here are a few ways we can be smarter....

a) Let's separate accountability and compliance -- important though they are in any funding relationship -- from impact assessment. The conflation of the two results in over-elaborate monitoring tools that distract from the impact-assessment process. I think this has been the challenge for many public sector funders and grantees and Schambra is right to criticize these practices as detracting from, rather than advancing, the achievement of impact.

b) Let's agree that simpler, inexact processes help move the ball on impact assessment. Too often, we let the perfect be the enemy of the possible. While the computation of a 95 percent confidence interval has its place in some kinds of impact assessment, simply asking "was this grant worthwhile, and did it advance the goals of the funder?" can often yield surprising insights. Program officers are hired for, and often praised for, their horse sense in picking grantees; we should use that same horse sense in assessing the success and impact of a grant. My most meaningful impact assessment activity as a program officer took one morning of work: I reviewed one hundred grants, asked myself which ones significantly exceeded impact expectations and which ones didn't, and looked at what the two groups of grants and grantees had in common. This simple exercise helped me identify the types of grantmaking more likely to succeed, and those more prone to failure, and it led to a restructuring of the portfolio and a revision in strategy.
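Colburn's morning exercise amounts to a simple grouping analysis: split grants into those that exceeded expectations and those that didn't, then compare the two groups' attributes. A minimal sketch of that idea in Python follows; the grant records, attribute names, and values are purely illustrative assumptions, not real foundation data:

```python
from collections import Counter

# Hypothetical grant records with attributes a program officer might track;
# "exceeded" marks grants that significantly outperformed impact expectations.
grants = [
    {"type": "general support", "size": "large", "exceeded": True},
    {"type": "project", "size": "small", "exceeded": False},
    {"type": "general support", "size": "small", "exceeded": True},
    {"type": "project", "size": "large", "exceeded": False},
    {"type": "general support", "size": "large", "exceeded": True},
]

def profile(group):
    """Count how often each attribute value appears in a group of grants."""
    counts = Counter()
    for g in group:
        counts[g["type"]] += 1
        counts[g["size"]] += 1
    return counts

high = [g for g in grants if g["exceeded"]]
low = [g for g in grants if not g["exceeded"]]

# Comparing the two profiles surfaces what the successful grants have in common.
print("exceeded expectations:", profile(high))
print("fell short:           ", profile(low))
```

The point is not the code but the shape of the question: with even a crude partition and a tally, patterns across a portfolio become visible in a morning.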

c) Let's agree that sharing our results -- successes and failures -- makes us all smarter. This means we need to begin to develop a common framework for describing our work, our goals, and our results. This allows funders to learn from one another and avoid repeating each other's mistakes. The boring and un-glamorous work of coding grants and developing funding taxonomies is an essential building block for developing and sharing knowledge.
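The "boring" coding work Colburn describes amounts to attaching shared taxonomy codes to each grant so that results can be rolled up and compared across funders. A minimal sketch, using made-up codes, grant IDs, and outcome labels rather than any real classification system:

```python
# Hypothetical shared taxonomy of subject codes (illustrative, not a real standard).
TAXONOMY = {
    "EDU": "Education",
    "HLTH": "Health",
    "ENV": "Environment",
}

# Grants coded against the shared taxonomy; outcome labels are invented.
coded_grants = [
    {"id": "G-001", "subject": "EDU", "outcome": "met goals"},
    {"id": "G-002", "subject": "EDU", "outcome": "fell short"},
    {"id": "G-003", "subject": "HLTH", "outcome": "met goals"},
]

def summarize(grants):
    """Roll up outcomes by taxonomy subject so funders can compare notes."""
    summary = {}
    for g in grants:
        subject = TAXONOMY[g["subject"]]
        summary.setdefault(subject, []).append(g["outcome"])
    return summary

print(summarize(coded_grants))
```

Once every funder codes grants against the same vocabulary, a summary like this one can be merged across institutions, which is exactly the shared-learning building block the comment points to.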

d) Let's agree that there is complexity in impact assessment, but not let that stand in the way of seeking universal truths. Yes, context matters. And many grantmaking objectives involve engaging complex ecosystems where attribution of any cause or effect is almost impossible to discern. And, yes, there are bound to be varying levels of quality in formulating and implementing impact assessment. Still, I am convinced that there are underlying commonalities to our work that allow us to learn from one another and begin to build a body of knowledge of what works and what doesn't.

I look forward to continuing the conversation.

And here is Larry's response:

John, thank you very much for your thoughtful response. I would like to underscore what you said about the need to separate accountability from impact assessment, although I might rephrase it to say that we need to separate accountability from "learning." When learning is the goal of assessment, it is done in an entirely different spirit than when it is done for purposes of compliance.

Assessment done in the spirit of accountability or compliance can become obsessively focused on whether specific measurable target outcomes have been achieved, as if meeting those particular outcomes were the only way of determining whether an intervention worked. This leads to a "success vs. failure" mentality and a reductionist view of how progress ought to be measured, and it creates a wholly counterproductive pressure to game the system.

It also places an inordinate amount of faith in our ability to rigorously specify theories of change. The measures we choose as indicators of success are based entirely on our theories about how change is supposed to happen. I'm not convinced we know enough about how change happens in complex social settings (or how to measure it) to be able to place a confident bet that a particular intervention will lead inevitably to achieving some predetermined, measurable outcome.

In other words, I think we are still very much (and need to remain) in a learning mode when it comes to figuring out what works. The wisest use of our assessment dollars, in my opinion, would be to make sure we don't lose the opportunity to learn whatever it is that a particular type of intervention has to teach us about how change happens.

I do acknowledge, along with John, that accountability and compliance are important. But it seems to me that we skip a step if we jump straight to accountability without first passing through "learning." A mantra I'd like to see the field adopt: "Learning before accountability."

What do you think? Should impact assessment be separated from questions of accountability (in Colburn's formulation) or "learning" (in McGill's)? Are we putting too much faith in our ability "to rigorously specify theories of change"? Or is measurement in philanthropic work, as Schambra suggests, a luxury we cannot afford?


