
Learning as compromise: a hard look at evaluation in today’s nonprofit sector

November 12, 2020

The past two decades have witnessed a shift in the nonprofit sector with respect to the practice of evaluation, from evaluation as outcome assessment toward evaluation as part of a broader goal of "learning." Perhaps by design, philanthropy has not embraced a single definition of learning, settling instead on a general understanding of learning as any activity designed to foster insights about and responsiveness to stakeholders, thereby leading to program improvement. Despite the ambiguity, the growing importance of the learning paradigm is hard to ignore; from how advisory firms describe their services to revised staff titles, it is clear that evaluators have expanded their understanding of their work beyond the measurement of goal attainment.

Practitioners like us celebrate the learning paradigm for extending our scope of concern beyond narrow performance metrics and for encouraging ongoing reflection about practice. While we agree that these are important benefits, we also favor a more critical framing, grounded in years of dissertation research on consulting and the development of evaluation in the nonprofit sector. In our view, evaluators have pivoted to learning not only because it adds value for clients, but also because economic and historical factors have made it professionally advantageous to position evaluation as something more than the measurement of outcomes. Specifically, we highlight two trends: 1) the transformation of evaluation into a routine management function; and 2) the persistent shortage of funding for evaluation. Because of these trends, learning is often, for evaluators themselves, a compromise between facilitating data-driven insights for clients and managing the considerable barriers to rigorous evaluation of complex social interventions.

Contextualizing the rise of learning

The history of the evaluation profession is fairly well chronicled. The field began as applied social science aimed at assessing the outcomes of large-scale and replicable social interventions. Evaluators focused on the effectiveness of methods and protocols rather than on the specific organizations implementing them. Notions of accountability began to change toward the end of the twentieth century with the rise of business-oriented performance criteria in the social sector, which brought more evaluative focus to individual organizations as drivers of social outcomes. Funders began to incorporate evaluation requirements directly into contract terms and grant guidelines, demanding evidence of impact from service providers themselves.

As evaluation moved from episodic and large-scale research projects to more routine performance measurement, the demand for evaluation services grew dramatically. This demand fueled a burgeoning and heterogeneous social impact evaluation profession consisting of both in-house evaluation staff and consultants. Many of these professionals conceptualize evaluation quite differently from the traditional social scientific rendering, and some lack the training for comprehensive outcome evaluation. As a result, evaluation has become less about uncovering evidence of causal links between interventions and outcomes and more about giving an organization a scorecard with which to monitor its operations and bolster its case for funding.

Insufficient financial support for rigorous outcome analyses has further fragmented approaches to evaluation. While funders want nonprofits to evaluate outcomes, they commonly fail to provide adequate funds to do such work with conventional rigor. The Center for Effective Philanthropy calls this the "paradox of performance assessment": a substantial mismatch between expectations and resourcing for evaluation. Even routine data collection on service volume and client satisfaction can be time-consuming and costly, while the hallmarks of more comprehensive evaluation designs — psychometrically validated scales, long-term follow-up, the construction of a control group — are out of reach for the vast majority of nonprofits.

Pivoting to learning

In the context of capacity and funding limitations, learning serves as a more flexible form of evaluation practice than traditional outcome evaluation, in that it addresses clients' needs and funders' expectations while remaining feasible within existing constraints. Consider, for example, a hypothetical effort to evaluate a new high school curriculum. A conventional outcomes-focused evaluation might aim to determine whether the curriculum improves academic achievement, while a learning-oriented evaluation would be open to a wider set of practical questions: Did the school have sufficient resources to implement all of the lessons? Did teachers find the curriculum responsive to student needs and abilities? How did students rate the relevance and value of the course material?

While all these considerations are important, pinning down whether and to what extent a curriculum yields academic gains is especially difficult for evaluators without enough funding, time, or (in some cases) training for thorough sampling, extensive statistical analysis, and rigorous causal inference. By comparison, answering learning-oriented questions makes for a more feasible scope of work. Accordingly, evaluation consultants frequently suggest more economical and open-ended methods of impact analysis — collecting stakeholder feedback data, developing theories of change, emphasizing program fidelity assessments — to fit within their core competencies and tight budgets.

Beyond representing an evolution in evaluation practice, then, the spread of a learning paradigm in the nonprofit evaluation world is a reflection of systemically insufficient funding and capacity to conduct extensive and rigorous outcome evaluation. It is, in part, a compromise that smart and dedicated professionals have struck in order to promote data-driven decision-making while managing significant constraints.

Taking stock of learning

The learning paradigm in the nonprofit sector has prompted important conversations and innovations in the evaluation field. It has caused funders and service providers alike to think about effectiveness more broadly and holistically, and to embed reflection in daily practice. It has also challenged evaluators to reflect on the merits of different kinds of questions, evidence, and methodologies.

At the same time, we cannot lose sight of the need for robust outcome analysis. Testing programs for positive outcomes remains indispensable to building best practices, advancing good policy, and improving public well-being. Learning works best when it bookends and informs outcome analysis, ensuring that the results of evaluation are used, not just cataloged. The broader questions prompted by the learning paradigm should be complements to, not substitutes for, a sector-wide commitment to thorough and rigorous outcome evaluation.

Maoz (Michael) Brown completed a PhD in sociology at the University of Chicago in 2019 with research on the history of social welfare policy in the United States. He regularly provides research-and-evaluation consulting services to funders, social enterprises, and advisory firms.

Leah Reisman received a PhD in sociology from Princeton University in 2020 with research focused on strategy consulting in the nonprofit sector and cultural philanthropy in the United States and Mexico. She works in immigrant-serving organizations and as a research consultant to foundations and nonprofits.
