Why WITNESS and Other Nonprofits Are Adopting the Serious Business of Monitoring and Evaluation
August 28, 2014
Last month, The New York Times "reviewed" the still-in-development Participant Media Index, which is designed to measure the impact and engagement of social issue documentaries. Anyone in the nonprofit world knows that impact and engagement are the buzzwords du jour. More than a passing fad, however, impact evaluation is serious business – one that many of us in the social change realm grapple with every day.
This has not always been the case in the eighteen years I've worked in the sector. Funders have increasingly driven the trend, asking grantees not just to monitor our progress, but also to develop innovative ways to quantify it and to share our learnings more broadly. In this way, the nonprofit world is catching up with the fields of medicine, psychology, and education – all of which have embraced "evidence-based practice" over the past two decades.
This is mostly a positive development. By laying out concrete objectives and outcomes at the start of a grant (in the proposal), organizations are forced to think more strategically and are held accountable for delivering on their promises. The most forward-thinking funders understand the risk inherent in our work – that social investments, like those in business, are not guaranteed to succeed, and that organizations can learn as much from their failures as from their achievements. Yet careful planning (yes, even the ubiquitous logic framework) can help increase the odds that we uphold our end of the bargain: to ensure that precious resources are used to successfully mobilize positive social change.
WITNESS has long been considered an innovator in impact evaluation, starting in the mid-2000s with our groundbreaking Performance Evaluation Dashboard and continuing with a major effort we recently launched to overhaul our program. Indeed, we are constantly looking for new ways to maximize our performance and our learnings. But this approach is not without its challenges. Human rights advocacy is notoriously difficult to measure: change is often incremental, and ultimate "wins" can take years to achieve. Video advocacy is even harder to evaluate, since video is a complementary tool, intended to corroborate other, more traditional forms of documentation.

(A point system for tracking Outputs, Outcomes and Impacts from WITNESS’ first Performance Dashboard covering our fiscal year 2006.)
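To make the dashboard concept concrete, here is a minimal sketch of how a weighted point system for Outputs, Outcomes, and Impacts might tally results over a fiscal year. The categories, weights, and sample entries below are illustrative assumptions only, not WITNESS' actual scoring rubric.

```python
from dataclasses import dataclass

@dataclass
class Result:
    description: str
    category: str  # "output", "outcome", or "impact"

# Hypothetical weights: higher-order results earn more points.
WEIGHTS = {"output": 1, "outcome": 3, "impact": 5}

def score(results):
    """Tally weighted points per category, plus an overall total."""
    totals = {category: 0 for category in WEIGHTS}
    for result in results:
        totals[result.category] += WEIGHTS[result.category]
    totals["total"] = sum(totals.values())
    return totals

# Illustrative entries only.
fiscal_year = [
    Result("training delivered to a partner group", "output"),
    Result("partner footage aired on national television", "outcome"),
    Result("policy reversed after a video campaign", "impact"),
]
print(score(fiscal_year))
# -> {'output': 1, 'outcome': 3, 'impact': 5, 'total': 9}
```

The appeal of such a system is that it rolls disparate results into a single comparable number per period; the limitation, as discussed below, is that the hardest-won changes resist this kind of quantification.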
This leveraging of human capital and the resulting changes in behavior, policies, and practices do not lend themselves to easy (or, necessarily, short-term) measurement. Few organizations have the resources to invest in longitudinal follow-up studies that might go on for years. Furthermore, in WITNESS' case, when clear impact results are achieved, it is simply not in our DNA to claim sole ownership over what is always a collaborative and shared process.
At the same time, it's becoming clear that a rapidly changing technology landscape requires human rights organizations to adapt and become more nimble. We would argue that funders also need to adapt their approaches. Many still rely on outdated ways of measuring impact – overemphasizing numbers, for instance, or holding grantees accountable to rigid, predictive multiyear outcomes (a potential pitfall of logic frameworks). A recent article in the Stanford Social Innovation Review on "emergent strategy" discusses the trend toward balancing clear goal-setting with latitude for iteration and responsiveness along the way – particularly within complex social change work like human rights. Among other things, the article notes that:
To solve today's complex social problems, foundations need to shift from the prevailing model of strategic philanthropy that attempts to predict outcomes to an emergent model that better fits the realities of creating social change in a complex world....
A fascinating blog post by Iain Levine of Human Rights Watch captures the essence of this new perspective, including examples (from Syria and elsewhere) where success is elusive yet the moral imperative of human rights engagement ultimately overrides outcomes.
Given this new landscape, as well as the new thinking around how to best evaluate social change work, what are we doing differently these days at WITNESS? For starters, we have begun to build new evaluation mechanisms into our workflow, constantly collecting both quantitative and anecdotal data from the field. This post on the reach of our Forced Evictions Toolkit is one example of the ways we are aggregating and interpreting information.
We are also capturing the stories that make video such a powerful tool for change. In Syria, for instance, we provided Waseem, a 25-year-old student from Homs, with video equipment and a series of in-depth online trainings that helped him become a seasoned citizen-journalist. Today, Waseem's videos are featured by major media outlets covering the crisis in Syria, and he teaches other local activists the skills he learned from WITNESS.
Since we cannot possibly track the ongoing impacts of every person who attends a training or downloads our resources online, WITNESS has also begun to work with external consultants on "deep dives" into selected programs that serve as emblematic examples of our approach. We recently completed an analysis, undertaken with the Adessium Foundation, of our three-year focus on Forced Evictions (available upon request). Findings like these are intended to feed back into the adaptation of that particular program – always in collaboration with our grassroots partners, so that we can (in emergent-strategy-speak) "co-evolve" our collective approach. Given the cross-pollination among all WITNESS programs, such findings are also intended to strengthen our other core initiatives and, we hope, those of our peers.
We're happy that the topic of monitoring and evaluation is getting serious consideration from mainstream media, as it is work that social change organizations often labor over behind the scenes. The more we can develop a common understanding – and common standards – across our sector, the more effective, efficient, and impactful we can all become.
Sara Federlein is the Associate Director – Foundations at WITNESS, where she has overseen a fourfold increase in institutional giving since she joined the organization in 2003. This post originally appeared on the WITNESS blog.
Posted by susan abbott | August 28, 2014 at 04:01 PM
Great article and overview of the discussion of M&E for media- and communications-related programs. I've wondered whether the Participant Index approach really differs so much from classic advertising and marketing approaches. Do you think the Knowledge, Attitude, Behavior approach is the best way to measure media and communications programs? This is a question I've thought a lot about, and I would appreciate your ideas on it. Thanks!