Why Evaluations Are Worth Reading – or Not
October 26, 2017
Truth in lending statement: I am an evaluator. I believe strongly in the power of excellent evaluations to inform, guide, support, and assess programs, strategies, initiatives, organizations, and movements. I have directed programs that were redesigned, based on evaluation data, to increase their effectiveness, cultural appropriateness, and impact; helped to design and implement evaluation initiatives here at the McCormick Foundation that changed the way we understand and do our work; and worked with many foundation colleagues and nonprofits to find ways to make evaluation serve their needs for greater understanding and improvement.
One of the best examples I've seen of excellent evaluation within philanthropy came from a child abuse prevention and treatment project. Our foundation had funded almost thirty organizations that were using thirty-seven tools to measure the impact of treatment. Many of those tools were culturally inappropriate, designed only for initial screenings, or otherwise ill-suited to the task, and staff from organizations running similar programs had conflicting views about them. Program staff here wanted to be able to compare program outcomes using uniform evaluation tools and to use that data to make funding, policy, and program recommendations, but they were at a loss as to how to do so in a way that honored grantees' knowledge and experience. A new evaluation initiative was funded that included the development of a "community of practice" to:
- create a unified set of reporting tools;
- learn from the data how to improve program design and implementation, and use data systematically to support staff/program effectiveness;
- develop a new rubric that the foundation could use to assess programs and proposals; and
- provide evaluation coaching for all organizations participating in the initiative.
The initiative was so successful that the participating nonprofits decided to continue to work together beyond the initial scope of the project to improve their own programs and better support the children and families they serve. This "Unified Project Outcomes" article describes the project and the processes that were established as a result in far greater detail.
But I have also seen and been a part of evaluations where:
- the methodology was flawed or weak;
- the input data were inaccurate and full of gaps;
- there was limited understanding of the context in which the organization worked;
- there was no input from relevant participants; and
- there was no thought to the use of the data/analysis.
Unsurprisingly, little or no value came out of them, and the learning that took place as a result was inconsequential.
What about evaluation reports that come at the end of a project or initiative? Except for a program officer who has to report to her director about how a contract or foundation strategy was implemented, the changes that occurred, and the value or impact of an investment or initiative, should anyone bother reading them? From my perch, the answer is a big "Maybe." Given all the other things stacked on my desk that I need to read, what does it take for an evaluation report to be worth my time? A lot.
1. It has to be an evaluation and not a PR piece. Too often, "evaluation" reports provide a cleaned-up version of what really happened, with none of the information about how and why an initiative or organization functioned as it did, and with data that only underscore its success. This is not to say that initiatives and organizations can't be successful. That's what we all want. But no project or program unfolds perfectly, and if I don't see critical concerns, problems, or caveats identified, I tend to assume that I'm not getting the whole story, and the report's value to me drops precipitously.
2. It has to provide relevant context. To read an evaluation of a multi-organizational collaboration in Illinois without placing its fiscal challenges within the context of the state's ongoing budget crisis, or to read about a university-sponsored community-based educational program without knowing the long history of mistrust between the school and the community, or to miss any of the other relevant and critical contextual pieces that can affect a program, initiative, or organization, renders that particular evaluation of little value. Situating an evaluation within a particular context or unique set of circumstances significantly improves the possibility that the knowledge is transferable to other settings.
3. It has to be clear and detailed about the populations being served. Too often, I read evaluations that leave out critical information about who was targeted, participated, or served.
4. The evaluation's methodology must be described in sufficient detail for me to have confidence that its design and implementation, as well as the analysis of the data, were skillful and appropriate. I also pay close attention to the extent to which those who were the focus of the evaluation participated in shaping its design and the questions being addressed.
5. And finally, if I am going to read it, the evaluation has to be something I can easily find. If it exists in a repository like IssueLab, my chances of finding it increase significantly. After all, if it's good, it's even better that it is #OpenForGood for others, like me, to learn from.
When the above conditions are met, the answer to the question, "Are evaluations worth reading?" is an unequivocal "YES!"
Rebekah Levin is the director of evaluation and learning for the Chicago-based Robert R. McCormick Foundation, in which role she guides the foundation's efforts to evaluate the impact of its grantmaking and involvement in community issues. This post originally appeared as part of Glasspockets' #OpenForGood series, which explores new tools, promising practices, and inspiring examples of foundations that are opening up the knowledge they acquire for the benefit of the larger philanthropic sector and is presented in partnership with the Fund for Shared Insight.