26 posts categorized "Evaluation"

How to Keep Me Scrolling Through What You Are Sharing

November 02, 2017

Hello, my name is Tom and I am a Subscriber. And a Tweeter, a Follower, a Forwarder (FYI!), a Google Searcher, and a DropBox Hoarder. I subscribe to blogs, feeds, e-newsletters, and email updates. My professional title includes the word "knowledge," so I feel compelled to make sure I'm keeping track of the high volume of data, information, reports, and ideas flowing through the nonprofit and foundation worlds (yes, it is a bit of a compulsion…and I'm not even including my favorite travel, shopping, and coupon alerts).

It's a lot, and I confess I don't read all of it. It's a form of meditation, I guess, for me to scroll through emails and Twitter feeds while waiting in line at Aloha Salads. I skim, I save, I forward, I retweet, I copy and save for later reading (later when?). In fact, no one can be expected to keep up, so how does anyone make sense of it all, or even find what we need when we need it? Everyone being #OpenForGood and sharing everything is great, but who's reading it all? And how do we make what we're opening up for good actually good?

Making Knowledge Usable

At some point, we've all battled Drowning in Information-Starving for Knowledge syndrome (from John Naisbitt's Megatrends — though I prefer E.O. Wilson's "starving for wisdom" theory). The information may be out there, but it rarely exists in a form that is easily found, read, understood, and (most importantly) used. Foundation Center and IssueLab have made it easier for people in the sector to know what is being funded, where new ideas are being tested, and what evidence and lessons are available. But to really succeed, nonprofits and foundations will have to upload and share many more of their documents than they do now. And we need to make sure that the information we share is readable, usable, and easy to apply.


DataViz guru Stephanie Evergreen recently taught me a new hashtag: #TLDR – "Too Long, Didn't Read."

Evergreen proposes that every published report be available in three formats — a one-page handout with key messages, a three-page executive summary, and a 25-page report (plus appendices). That way, "scanners," "skimmers," and "deep divers" can access the information in the form they prefer and in the time that's available to them. Such an approach also requires writing (and formatting) differently for each of these different audiences. (By the way, do you know which one you are?)

From Information to Influence

But it isn't enough to make your reports accessible, searchable, and easily readable in both a short and long form; you also have to include the information people need to make decisions and take action. It means deciding in advance who you hope to inform and influence and what you want them to do with that information. If you expect people to read, learn from, and apply the information you're sharing, you need to be clear about your reason for sharing it, and you need to give people the right kind of information.

Too many times I've read reports that include promising findings and interesting lessons, and then I race through all the footnotes and the appendices at the back of the report looking for resources that could point me to the details or implementation guidance. Alas, I usually wind up trying to track down the authors by email or phone.

A 2005 study of more than one thousand evaluations focused on human services found only twenty-two that shared any analysis of implementation learnings — i.e., the lessons people learned about how best to put the program or services in place. We can't expect other people and organizations to build on your knowledge and what you've learned if they cannot access information that helps them use that knowledge and apply it to their own programs and organizations. YES, I want to hear about your lessons and "a-ha" moments, but I also want to see data and an analysis of the common challenges faced by all nonprofits and foundations:

  • How to apply and adapt program and practice models in different contexts
  • How to sustain effective practices
  • How to scale successful efforts to additional people and communities

This means making sure your evaluations and reports include a frank discussion of the challenges related to implementation — challenges that others are likely to face. It also means placing your findings in the context of existing knowledge and learnings and using commonly accepted definitions that make it easier to build on the knowledge created by others. For example, in our recent middle school connectedness initiative, our evaluator, Learning for Action, reviewed the literature first to identify the specific components of and best practices in youth mentoring, thus enabling us to build the evaluation on what had been done in the field by others, report clearly about what we learned about our own initiative, and share that knowledge with the field. 

So please plan ahead and define your knowledge sharing and influence agenda up front, and as you're doing so keep the following guidelines in mind:

  • Decide who you hope will read your report.
  • Determine what information it should share in order to be useful and used.
  • Review similar studies and reports and determine in advance what additional knowledge you'll need to share, as well as what you plan to document and evaluate.
  • Use common definitions and program model frameworks so that others are able to build on the accumulated knowledge of the field and not have to start from scratch each and every time.
  • Pay attention to the implementation, replication, and management challenges (staffing, training, communication, adaptation) that others are likely to face.
  • Disseminate your evaluation widely via conferences, in journals, through your networks, and in IssueLab's open repository.

And if you do all of the above, I will be happy to read through your report's footnotes and appendices the next time I'm waiting in line for a salad!

Tom Kelly (@TomEval, TomEval.com) is vice president of knowledge, evaluation and learning at the Hawai‘i Community Foundation and has been learning and evaluating as a practitioner since the beginning of the century. This post originally appeared as part of Glasspockets' #OpenForGood series, which explores new tools, promising practices, and inspiring examples of foundations that are opening up the knowledge they acquire for the benefit of the larger philanthropic sector and is presented in partnership with the Fund for Shared Insight.

Why Evaluations Are Worth Reading – or Not

October 26, 2017

Truth in lending statement: I am an evaluator. I believe strongly in the power of excellent evaluations to inform, guide, support, and assess programs, strategies, initiatives, organizations, and movements. I have directed programs that were redesigned to increase their effectiveness, their cultural appropriateness, and their impact based on evaluation data; helped to design and implement evaluation initiatives here at the McCormick Foundation that changed the way we understand and do our work; and have worked with many foundation colleagues and nonprofits to find ways to make evaluation serve their needs for greater understanding and improvement.

One of the best examples I've seen of excellent evaluation within philanthropy came with a child abuse prevention and treatment project. Our foundation had funded almost thirty organizations that were using thirty-seven tools to measure the impact of treatment. Many of those tools were culturally inappropriate, designed only for initial screenings, or otherwise unsuitable, and staff from organizations running similar programs had conflicting views about them. Program staff here wanted to be able to compare program outcomes using uniform evaluation tools and to use that data to make funding, policy, and program recommendations, but they were at a loss as to how to do so in a way that honored grantees' knowledge and experience. A new evaluation initiative was funded that included the development of a "community of practice" to:

  • create a unified set of reporting tools;
  • learn from the data how to improve program design and implementation, and use data systematically to support staff/program effectiveness;
  • develop a new rubric that the foundation could use to assess programs and proposals; and
  • provide evaluation coaching for all organizations participating in the initiative.

The initiative was so successful that the participating nonprofits decided to continue to work together beyond the initial scope of the project to improve their own programs and better support the children and families they serve. This "Unified Project Outcomes" article describes the project and the processes that were established as a result in far greater detail.

Continue reading »

Making Sense: Reflecting on Evaluations at the Jim Joseph Foundation

August 23, 2017

A core part of the Jim Joseph Foundation's relational approach to grantmaking is supporting the efforts of grantees to evaluate their programs — either through engaging an external evaluator or by collecting and analyzing data internally. The foundation has always believed this is a key part of good grantmaking, in that it builds the capacity of organizations to ask questions, to collect data, and to reflect on findings in a way that then enables them to make changes that increase the chances of success.

In this period of transition at the foundation, the grantmaking team has asked some pertinent questions regarding our own evaluation program: "What are we learning from the evaluation work we have supported over the past eleven years?" And, "Are there common lessons and emerging themes that we should recognize and reflect upon?"

To begin exploring these and other questions, the entire foundation team gathered for a full day earlier this year to share and discuss learnings and common themes discovered from a comprehensive review of nearly all the key evaluations and reports commissioned by the foundation since its inception.

To make the day as productive as possible, the foundation grantmaking team completed "homework" in the weeks leading up to the day-long session, dividing up the responsibility for reviewing a sample of forty-two evaluation reports, capacity-building and business plans, and field-building research reports — all commissioned and completed in the foundation's first eleven years — among team members and asking them to summarize the challenges, outcomes, and successes they identified in their respective documents.

This "day of evaluation reflection" (as we called it) turned out to be well worth the collective time and effort and, importantly, offered space for the team to discuss how the information and lessons that surfaced in our conversations might guide our future work. The summary below includes highlights from those discussions.

The foundation's effect on Jewish life and learning

How has the Jim Joseph Foundation influenced Judaism and Jewish people's approach to Jewish life and learning? This overarching question speaks directly to the foundation's mission. A common theme across many of the grants we have funded and evaluated is fostering community and positive relationships within the Jewish community. With few exceptions, evaluations show that participants in foundation-supported programs report feeling more connected to their Jewish identity and to Israel when those are the intended outcomes of the program. Since the DNA of the foundation includes a broad interpretation of and approach to Jewish learning, these programs encompass every kind of setting and activity, from camps, to schools, to service experiences, to Jewish outdoor food and environmental education. And, almost without exception, they have all proved to be effective while remaining aligned with our mission and values.

Lessons learned that have potential to inform foundation grantmaking

Several key themes emerged from the day's discussions that highlight opportunities for reflection, focus, and improvement:

  • Young adult communities can be brought together successfully through different interests and avenues that resonate with young adults and are relevant to their lives. Social justice and service increasingly are reasons for young Jews to engage in Jewish life. And follow-on programming after an immersive experience is critical to deepening programmatic impact, creating community, and achieving positive outcomes.
  • Successful programs vary in cost and scale, and while immersive programs can be expensive and reach a relatively small number of people, they also tend to have a deep and lasting effect on participants. Other programs, such as doctoral programs in Jewish studies or education, are a longer play, with a relatively high cost per student or participant.
  • Mentorship and time for reflection are key elements in the success of many programs, particularly those focused on educator training. In addition, students value a reputable university program and also desire flexibility and diversity in their program options.
  • Capacity building with respect to evaluation, development, and growth planning can be important investments for grantees. As a relational grantmaker, the Jim Joseph Foundation is in a position to help an organization pivot and/or engage in long-term strategic planning. These plans must be right-sized, however, with realistic revenue targets and investments.
  • Relationships among organizations and people matter. There is value in collaboration and strength in building networks; both also are integral components of successful culture-change initiatives.
  • Some grants are designed to leave a system in place so as to create impact long after the grant period ends. Admittedly, this is an ideal scenario, but local and national funding partners with aligned interests can leverage their resources to both widen and deepen the impact of their grant dollars.

Challenges grantees often encounter

The day also brought to the fore some of the common challenges grantee partners experience.

  • The majority of challenges experienced by the foundation's grantees were related to marketing, recruitment, and retention. Retaining current participants can be just as valuable as bringing in new participants to a program/initiative. Another common challenge relates to hiring and retaining the right personnel — at all levels.
  • Fundraising for sustainability and growth frequently is a challenge — and many effective programs end up being not "sexy" enough for donors.
  • Whole school and/or organizational culture change is an effective way to create impact, but it often involves a lengthy process that requires significant staff capacity and buy-in.

Reflections on evaluation

In discussions about our evaluation support moving forward, the team discussed the importance of elevating the following concepts:

  • Asking good questions and being data-informed in our decision-making. Related: evaluations help tell a story for newer foundation staff members about what is working and what is not.
  • It's important to create opportunities for funding to follow what is working — and evaluations can help inform both the "if" and "how" with respect to scaling a pilot program.
  • We should "celebrate failure" in appropriate ways and for the purposes of learning. It's also important to acknowledge that some "failures" actually turned into partial successes years after the grant and evaluation periods had ended. In other words, sometimes an evaluation simply captures a moment in time that may not be representative of the true impact of the program.
  • Field-building research reports frequently raise the profile of certain programs and certain issues — and dissemination is a very important part of the process.
  • Assessing return-on-investment from a grant or series of grants is a daunting challenge. Numbers (e.g., program participants) do not tell the entire story about the long-term effects or how someone's experience influenced their worldview and connection to their faith and community. As a result of its experience, the team reaffirmed our commitment to understand more deeply how Jewish life and learning is experienced and fostered.

Our team viewed the Day of Evaluation Reflection as a productive, enjoyable time for learning. And staff expressed positive sentiments toward the day itself in terms of the structure, presentations, and team-building environment — as well as the preparation process outlined in advance. The conversations we had were open and honest, and signaled that the current grantmaking team is willing to critically examine the foundation's past, current, and future work in a manner that emphasizes transparency, trust, and patience.

The exercise also raised a number of interesting and important questions that we will continue to explore. As is our tradition, we will continue to ask new questions and encourage dialogue as a means to advance our work and deepen our understanding of the most effective ways to practice and evaluate philanthropy.

Stacie Cherner is senior program officer at the Jim Joseph Foundation.

Because What You Know Shouldn't Just Be About Who You Know

July 11, 2017

The following post is part of a year-long series here on PhilanTopic that addresses major themes related to the center's work: the use of data to understand and address important issues and challenges; the benefits of foundation transparency for donors, nonprofits/NGOs, and the broader public; the emergence of private philanthropy globally; the role of storytelling in conveying the critical work of philanthropy; and what it means, and looks like, to be an effective, high-functioning foundation, nonprofit, or changemaker in the twenty-first century. As always, we welcome your thoughts and feedback.

_____

"Knowledge is obsolete." As a librarian, my ears perked up when someone shared the title of this TEDxFoggyBottom talk. It's plausible. Why memorize obscure, hard-to-remember facts when anything you could possibly want to know can be looked up, on the go, via a smartphone? As a mom, I imagine my kids sitting down to prepare for rich, thought-provoking classroom discussions instead of laboring over endless multiple-choice tests. What an exciting time to be alive — a time when all of humanity's knowledge is at our fingertips, leading experts are just a swipe away, the answer always literally close at hand, and we've been released from the drudgery of memorization and graduated to a life of active, informed debate! And how lucky are we to be working in philanthropy and able to leverage all this knowledge for good, right?


Though the active debate part may sound familiar, sadly, for too many of us working in philanthropy, the knowledge utopia described above is more sci-fi mirage than a TED Talk snapshot of present-day reality. As Foundation Center's Glasspockets team revealed in its "Foundation Transparency Challenge" infographic last November, only 10 percent of foundations today have a website, and not even our smartphones are smart enough to connect us to the 90 percent that don't.

The Foundation Transparency Challenge reveals other areas of potential improvement for institutional philanthropy, including a number of transparency practices not widely embraced by the majority of funders. Indeed, the data we've collected demonstrates that philanthropy is weakest when it comes to creating communities of shared learning, with fewer than half the foundations with a Glasspockets profile using their websites to share what they are learning, only 22 percent sharing how they assess their own performance, and only 12 percent revealing details about their strategic plan.

Foundation Center data also tells us that foundations annually make an average of $5.4 billion in grants for knowledge-production activities such as evaluations, white papers, and case studies. Yet only a small fraction of foundations actively share the knowledge assets that result from those grants — and far fewer share them under an open license or through an open repository. For a field that is focused on investing in ideas — and not shy about asking grantees to report on the progress of these ideas — there is much potential here to open up our knowledge to peers and practitioners who, like so many of us, are looking for new ideas and new approaches to urgent, persistent problems.

Continue reading »

Professional Preparation: A "Value Add" for Educators and Their Employers

February 09, 2017

In October 2016, the Jim Joseph Foundation released the final evaluation, conducted by American Institutes for Research (AIR), of its Education Initiative — an investment in three top-rated Jewish education institutions, Hebrew Union College-Jewish Institute of Religion (HUC-JIR), the Jewish Theological Seminary (JTS), and Yeshiva University (YU), designed to increase the number of educators and educational leaders who are prepared to design and implement high-quality Jewish education programs. The foundation and AIR have shared some of the key findings and lessons learned from the initiative. AIR also is releasing a series of blog posts that delve more deeply into important findings from the evaluation — the second of which, below, discusses the value of professional preparation programs and the key characteristics that distinguish excellent programs.

Whether in a classroom, at a camp, at locations in a city, or in nearly any other environment, effective Jewish learning experiences can enrich lives and help cultivate deep, long-lasting relationships among participants. Over the last two decades especially, Jewish education and engagement experiences developed for teens and young adults have focused on opportunities to create peer communities and friendships, develop leadership skills, and strengthen cultural and religious beliefs while enabling youth to voice their opinions and serve their communities. An important aspect of many of these initiatives is a high level of accessibility and inclusiveness, so that people of various backgrounds and differing levels of prior engagement in Jewish life feel valued, respected, and welcomed.

A Need to Raise the Bar

With the growing popularity of these offerings, both by well-established organizations and in the form of innovative projects, there is an urgent need for the professionalization of individuals responsible for designing, conducting outreach for, and facilitating them. Jewish Community Centers (JCCs), congregations, youth groups, camps, Hillel, and social justice organizations in particular offer many of these experiences — and as a result are driving increased demand for talented, well-trained professionals eager to work in this space.

At the moment, however, no degree requirement exists for individuals tasked with delivering such influential Jewish experiences. The Jim Joseph Foundation's Education Initiative, a recently completed $45 million, six-year investment in three top-rated Jewish education institutions — Hebrew Union College-Jewish Institute of Religion (HUC-JIR), the Jewish Theological Seminary (JTS), and Yeshiva University — in part aimed to fill this void by increasing opportunities for and improving access to professional preparation programs for educators, aspiring leaders, middle managers, and directors and executive directors in the field of Jewish education. The initiative was based on the premise that higher education institutions are uniquely equipped to promote the research-based knowledge and decision-making tools needed by professionals to design and deliver a range of excellent educational practices for a particular age group in different settings.

We previously shared other key outcomes and findings of the initiative, including the number of new educators trained and new training programs developed. Now, we want to home in on the value of professional preparation for the individuals and organizations that offer Jewish learning experiences.

Continue reading »

Most Popular PhilanTopic Posts in 2016

December 30, 2016

So it ends, not with a bang but a whimper. Depending on whom you speak to, 2016 was a train wreck, a dumpster fire, a sure sign of the apocalypse, and just plain weird. If it was a year in which too many beloved cultural icons left us, it was also an annus horribilis for progressives, who will have to work twice as hard in the new year (and beyond) to preserve important policy gains achieved over the last eight years and limit the harm caused by a Trump administration and a Republican-controlled Congress.

But while our attention often was focused elsewhere, many of you were taking care of business and digging deep into the PhilanTopic archives for tools and ideas you could use — today and in the weeks and months to come. So, without further preamble, here are the ten posts you "voted" as your favorites in 2016. Enjoy. Happy New Year. And don't forget to check back next week, as we return to the office tanned, rested, and ready to fight the good fight.

What have you read/watched/heard lately that got your attention, made you think, or gave you a reason to feel hopeful? Feel free to share with our readers in the comments section below. Or drop us a line at mfn@foundationcenter.org.

4 Performance Measurement Mistakes You Don't Want to Make

May 05, 2016

Performance management can be a tricky beast — hugely important, but difficult to get right. Here are four common mistakes my team and I see made by social, government, and nonprofit organizations trying to measure their impact, and tips on how to avoid them:

1. Measuring too much. By far the most common problem we see is that most organizations try to measure too much. Every additional measure you track uses up precious staff time for collection, aggregation, and analysis. In some cases, tracking too many measures is almost as bad as not tracking at all. One client we served had a list of more than eight measures it was trying to track. Managers and the board were so overwhelmed by the huge amount of information that their eyes tended to glaze over when the data was presented, and little or nothing happened as a result. We helped them whittle the list down to just a few outcome measures for each client group, and that enabled them to focus their energy, track their efforts in a meaningful way, and improve their outcomes.

2. Underutilizing what you have. Many organizations are so busy worrying about measurement that they don't realize what a trove of information they may already be sitting on. One national nonprofit I know had been working on putting together a measurement system for three years, engaging external consultants, and doing a lot of hand-wringing about their lack of a large-scale control study. Its senior leaders, like those at many other organizations, found themselves overwhelmed by choices, confused by terminology, and with little to show for their hard work. Yet in the background, the organization had been collecting all kinds of information. With an infusion of new energy, leadership took stock and found that simply by undertaking an audit and tidying up the organization's data they were able to tell a compelling story to current and potential funders. The moral of the story? Before you do anything else, investigate what you have at hand. What information are you already collecting that measures outcomes for your clients?

Continue reading »

Evidence at the Crossroads: The Next Generation of Evidence-Based Policy

March 28, 2016

US CapitolWhen we began our "Evidence at the Crossroads" blog series, we posited that evidence-based policy making was at a crossroads. In the past six months — despite rancorous partisan debates and a fierce presidential primary season — Congress surprised everyone and passed the long overdue re-authorization of the Elementary and Secondary Education Act, with strong support from both parties.

The Every Student Succeeds Act (ESSA) includes over eighty mentions of "evidence" and "evidence-based," and a devolution of power to states and districts to implement those provisions. And earlier this month, the Evidence-Based Policymaking Commission Act, sponsored by Rep. Paul Ryan (R-WI) and Sen. Patty Murray (D-WA), was approved by the Senate and the House in another display of cooperation.

It is promising that at a time of heightened political rancor, evidence-based policy is finding bipartisan support. But the road ahead is still tenuous, and much will depend on whether the evidence movement can evolve. Here, I draw on the terrific ideas and insights from the authors of the series to suggest three steps for moving forward: focus on improvement, attend to bodies of evidence, and build state and local capacity for evidence use.

Focus on improvement

It's time to position evidence-based policy as a learning endeavor. Implementing and scaling interventions in different contexts with diverse groups is notoriously challenging. Promising results are emerging, but not all are home runs. The history of evaluation research shows that most evaluations yield mixed or null results, and this generation of studies will produce the same. Interventions work in some places for some people, but not others. Even new studies of established interventions turn up findings that are inconsistent with prior studies. What should we make of these results?

One direction we should not take is to obscure these findings or pretend they don't exist. I fear that already happens too often. The rhetoric of the What Works agenda — funding more of what works and less of what doesn't — has created an environment that pressures program developers to portray home run results, communications engines to spin findings, and evaluation reports to become more convoluted and harder to interpret.

Improvement could be the North Star for the next generation of the evidence movement. The idea of building and using evidence simply to sift through what works and what doesn't is wasteful and leaves us disappointed. We need to find ways to improve programs, practices, and systems in order to achieve better outcomes at scale. Let's not be too hasty in abandoning approaches that do not instantly pay off and instead learn from the investments that have been made. After all, many established interventions had years to gestate, learn from evidence, and improve. Let's not cut short this process for new innovations that are just starting out.

This is not to say that anything goes. Patrick McCarthy reminds us that when research evidence consistently shows that a policy or program doesn't work — or even produces harm — it should be discontinued. Indeed, the next generation of evidence-based policy will need to aim toward improvement while keeping an eye on whether progress is being made.

Attend to bodies of evidence

If evidence-based policy is to realize its potential to improve the systems in which young people learn, grow, and receive care, we need to rely on bodies of research evidence. Too often, public systems are pressured to seek silver bullet solutions. A focus on single studies of program effectiveness encourages this way of thinking. But, as Mark Lipsey writes, "multiple studies are needed to support generalization beyond the idiosyncrasies of a single study." Just as a narrow aperture can exclude the important context of an image, so too does focusing on a narrow set of findings exclude the larger body of knowledge that can inform efforts to improve outcomes at scale.

State and local leaders need to draw on bodies of research evidence. This includes not only studies of what works, but of what works for whom, under what conditions, and at what cost. What Works evidence typically reflects the average impact of an intervention in the places where it was evaluated. For decision makers in other localities, that evidence is only somewhat useful. States and localities ultimately need to know whether the intervention will work in their communities, under their operating conditions, and given their resources. Evidence-based policy needs to address those questions.

To meet decision makers' varied evidence needs, the evidence movement also needs to focus greater and more nuanced attention on implementation research. Real-world implementation creates tension between strict adherence to program models and the need to adapt them to local systems. To address this tension, we need to build a more robust evidence base on key implementation issues, such as how much staffing or training is required, how resources should be allocated, and how to align new interventions with existing programs and systems. As Barbara Goodson and Don Peurach argue, we have built a powerful infrastructure for building evidence of program impacts, but we need to match it with equally robust structures for implementation evidence.

And finally, the evidence-based policy movement needs to recognize the importance of descriptive and measurement research that helps local decision makers better understand the particular challenges they are facing and better judge whether existing interventions are well suited to address those problems. For those needs assessments, descriptive and measurement studies can be critical.

Build state and local capacity

As decision making devolves to states and localities, the way the federal government defines its role will also change. In the wake of ESSA, officials in Congress and the U.S. Department of Education are aiming to move beyond top-down compliance. But to do so they will need to identify new means to support states, districts, and practitioners in the evidence agenda. States and localities are not mere implementers of federal policies, nor are they simply sites of experimentation. A key way to foster the success of the evidence movement is to support the capacity of state and local decision makers to build and use evidence to improve their systems and outcomes.

Technical assistance is one way that the federal government can support capacity, and it'll be important to direct technical assistance to state and local decision makers and grantees in productive ways. While tiered evidence initiatives such as i3 have provided grantees with technical assistance to conduct rigorous impact evaluations, assistance has focused less on other key issues: helping grantees apply continuous improvement principles and practices, vet and partner with external evaluators, and build productive collaborations with districts and other local agencies to implement programs.

Providing technical assistance in these areas would increase the ultimate success of these evidence-based initiatives.

Research-practice partnerships (RPPs) are another way to support state and local agencies. In education, these long-term partnerships can provide the research infrastructure that is lacking in many states and districts as they seek to implement the evidence provisions in the Every Student Succeeds Act. RPPs can help districts and schools interpret the existing evidence base and discern which interventions are best aligned with their needs. In instances where the evidence base is lacking, RPPs are poised to conduct ongoing research to evaluate the interventions that are put into place. Similarly, in child welfare, research-practice partnerships could provide states with additional capacity as they develop Title IV-E Waiver Demonstration Projects to test new approaches for delivering and financing services in order to improve child and family outcomes.

The federal government is perhaps uniquely situated to build and harness research evidence, so that what is learned in one place need not be reinvented in another and the lessons accumulate. Mark Lipsey suggests that federally funded research require the collection and reporting of common data elements so that individual studies can be synthesized. Don Peurach imagines ways the federal government can support an "improvement infrastructure." We should consider these ideas and others as we move forward.

Foundations also have a role. Private funders are able to support learning in ways that are harder for the federal government to do. The William T. Grant and Spencer foundations' i3 learning community, for example, provided a venue for program developers to share the challenges they faced in scaling their programs and to problem solve with one another. In another learning community, our foundation supported a network of federal research and evaluation staff across various agencies and offices to learn from each other. A learning community requires candor and can provide a safe and open environment to identify challenges and generate solutions. Foundations can also produce tools and share models that states and localities can draw upon in using evidence. With fewer bureaucratic hurdles, we can often do this with greater speed than the federal government.

Realizing the potential of evidence in policymaking

The ascendance of research evidence in policy in the past two decades gave rise to investments in innovation, experimentation, and evaluation that signaled great progress in the way our nation responds to its challenges. But for all the progress we've made in building and using evidence of What Works, we've also been left with blind spots. As a researcher, I did not enter my line of work expecting simple answers. Quite the opposite, in fact. Researchers, policy makers, and practitioners know that there is always more to learn than yes or no; more at stake than thumbs up or thumbs down. We build and use research evidence not just to identify what works, but to strengthen and improve programs and systems — to build knowledge that can improve kids' lives and better their chances to get ahead.

As we approach the next generation of evidence-based policy, it's essential we take steps to ensure that practitioners and decision makers at the state and local level have the support they need.

The above post by Vivian Tseng, vice president, program, at the William T. Grant Foundation, is the eleventh and final post in the foundation's "Evidence at the Crossroads" series, in which it sought to provoke discussion and debate about the state of evidence use in policy, with a focus on federal efforts to build and use evidence of What Works. It is reprinted here with permission of the foundation. You can read other posts in the series here and/or register for a free event co-sponsored by the foundation, "Building State and Local Capacity for Evidence-Based Policymaking," in Washington, D.C., on March 30.

Most Popular PhilanTopic Posts (February 2016)

March 01, 2016

A couple of infographics, a book review by Matt, a short Q&A with the MacArthur Foundation's Laurie Garduque, an oldie but goodie from Michael Edwards, and great posts from Blake Groves and Ann Canela — February's offerings here on PhilanTopic beautifully capture the breadth and multiplicity of the social sector. Now if we could only get it to snow....

What did you read/watch/listen to last month that made you think, got you riled up, or restored your faith in humanity? Share with the rest of us in the comments section below, or drop us a line at mfn@foundationcenter.org.

A Different Kind of Risk-Taking: Improving Evaluation Practice at the Jim Joseph Foundation

September 15, 2015

Evaluation"We're in the business of risk-taking," is something Chip Edelsberg, executive director of the Jim Joseph Foundation, likes to say. Generally speaking, Edelsberg's notion of risk-taking refers to the investments the foundation makes in its grantees and their programs. The mission of the  foundation,  which has assets of roughly $1 billion, is to foster compelling, effective Jewish learning experiences for young Jews. Between 2006 and June 2014, the foundation granted more than $300 million to increase the number and quality of Jewish educators, expand opportunities for Jewish learning, and build a strong field for Jewish learning (Jim Joseph Foundation, 2014). Rarely is there an established research base for the kinds of initiatives the foundation supports in Jewish education. In the spring of 2013, though, Edelsberg had another kind of risk in mind.

What might be gained, Edelsberg wondered, if foundation staff brought together a group of competing evaluation firms with whom they had worked in the past to consider ways to improve the foundation's practice and use of evaluation? The idea had emerged out of a study of the foundation's evaluation practices, from the foundation's inception in 2006 through 2012, that was commissioned by the foundation and conducted by Lee Shulman, president emeritus of the Carnegie Foundation for the Advancement of Teaching and Charles E. Ducommun Professor of Education Emeritus at Stanford University. Edelsberg thought it was a risk worth taking, and the board of the foundation agreed. Edelsberg also made the bold decision to allow a doctoral student in evaluation studies at the University of Minnesota to study the venture.

In the winter of 2013, a colleague of mine from the field of Jewish education who was then a staff member at the foundation heard about my research interest in the role evaluation plays in the work of foundations and their grantees and offered to connect me with Edelsberg. Edelsberg described the idea for what became the "evaluators' consortium," and I asked about the possibility of studying the process as a case study for my dissertation. By the time the consortium met for the first time in October 2013, and with the agreement of the foundation's board and participating evaluators, I launched the research. The purpose of the study was to explore what occurred when a foundation inaugurated an innovative approach to evaluation practice, examining factors that supported successful implementation of the innovation and the impediments to its success. It also sought to provide insights into the elements of organizational culture, practices, circumstances, and structures that can support effective practices of evaluation in the foundation field. The foundation gave me access to documents and invited me to observe meetings of the consortium held both in person and electronically. Over the course of the first year of the consortium's operation, I interviewed all foundation program staff members, Shulman (who served as the facilitator), a member of the board, and each of the participating evaluators.

Continue reading »

Weekend Link Roundup (April 25-26, 2015)

April 26, 2015

Our weekly roundup of noteworthy items from and about the social sector. For more links to great content, follow us on Twitter at @pndblog....

Disaster Relief

In the aftermath of a major natural disaster like the powerful earthquake that struck Nepal yesterday, early assistance -- in the form of money -- is the best and most effective kind of assistance. On her Nonprofit Charitable Orgs blog, Joanne Fritz shares other ways to help victims of a natural disaster.

Nearly $10 billion in relief and reconstruction aid was committed to Haiti after the devastating January 2010 earthquake in that impoverished country. Where did it all go? VICE on HBO Correspondent Vikram Gandhi reports.

Education

Has the education reform movement peaked? According to New York Times columnist Nick Kristof, "The zillionaires [who have funded the movement] are bruised. The idealists are dispirited. The number of young people applying for Teach for America, after 15 years of growth, has dropped for the last two years. The Common Core curriculum is now an orphan, with politicians vigorously denying paternity." Which is why, says Kristof, it might be time to "refocus some reformist passions on early childhood."

Evaluation

On the Center for Effective Philanthropy blog, Johanna Morariu, director of the Innovation Network, shares five grantmaker and nonprofit practices "that undermine or limit the ability of nonprofit organizations to fully engage in evaluation."

Fundraising

What is social fundraising? Liz Ragland, senior content and marketing associate at Network for Good, explains.

Nonprofit With Balls blogger and Game of Thrones fan Vu Le has some issues with the donor-centric model of fundraising. "When [it's] done right," he writes, "it’s cool; when it’s done wrong, we sound like the used car salesmen of justice...."

Continue reading »

Weekend Link Roundup (April 18-19, 2015)

April 19, 2015

National-cherry-blossom-festivalOur weekly roundup of noteworthy items from and about the social sector. For more links to great content, follow us on Twitter at @pndblog....

Data

How can nonprofits use data to create a culture of continuous improvement? Beth Kanter explains.

Evaluation/Effectiveness

In a post on her Giving Evidence site, Caroline Fiennes suggests that charities are being asked to do too much evaluation -- and presents some evidence to support her argument.

Writing on the Center for Effective Philanthropy blog, Nancy Baughman Csuti, director of research, evaluation and strategic learning at the Colorado Trust, says that funders can and should

engage in deeper conversations with grantees to understand their needs regarding evaluation, continue to provide general operating support, and, with that, encourage time to review results, reflect, and adapt. We can encourage grantees to share what they have learned and provide resources and assistance for them to do so, and do the same ourselves. As funders, we should jump on the opportunities to encourage our grantees to embrace a culture of evaluation and learning that results in seeing problems and solutions differently. And always, we must do ourselves what we ask of grantees....

Human/Civil Rights

Civil society and human rights groups find themselves in a new world characterized by "multiplicity," public disillusionment, and growing non-institutional activism, writes Lucia Nader on the Transformation blog. And if they want to remain relevant, she adds, they'll need to find a balance "between preserving what has already been achieved, and deconstructing, innovating, reinventing and transforming [themselves]."

Journalism/Media

Is the nonprofit news model sustainable? Based on his reading of Gaining Ground: How Nonprofit News Ventures Seek Sustainability, a new report from the John S. and James L. Knight Foundation, Inside Philanthropy's Paul M.J. Sucheki has his doubts.

Nonprofits

$23.07/hr. That's Independent Sector's latest estimate of the value of volunteer time. More here.

Continue reading »

[Infographic] 10 Traits That Make Nonprofits Great

March 21, 2015

This week's infographic, courtesy of the Horatio Alger Association, a nonprofit educational organization "established in 1947 to dispel the mounting belief among the nation's youth that the American dream was no longer attainable," doesn't break any ground when it comes to the traits that make nonprofits great. These are things all nonprofits need to (rather than should) do if they hope to succeed over the long term. But while some (#4, #6 and #9) are more important than others, all contain at least a kernel of good advice....

Continue reading »

Making Philanthropic Investments Last: The Role of Financial Sustainability

October 30, 2014

Launched in 2010, the Jim Joseph Foundation's Education Initiative has supported the development and expansion of eighteen degree and certificate programs as well as leadership institutes at Hebrew Union College-Jewish Institute of Religion (HUC-JIR), the Jewish Theological Seminary (JTS), and Yeshiva University (YU).

The foundation provided the resources needed for program development, staffing, student tuition assistance, and marketing/recruitment activities. The investment was substantial – each institution received $15 million over a period of up to six years. As part of its independent evaluation of the initiative, American Institutes for Research (AIR) assessed not only how well the three grantees delivered these programs, but how they planned to financially sustain their programs into the future after the foundation's investment wound down.

Financial sustainability requires careful planning, typically using a dynamic document that is reviewed and revisited periodically. Such a document – the financial sustainability plan – describes strategies to contain costs and to cover them through fundraising and program revenues.

Informing Financial Sustainability Plans Through Break-Even Analysis

A common tool in financial planning is break-even analysis, which identifies the circumstances in which costs and revenues are balanced. To help Jim Joseph Foundation Education Initiative grantees, we developed a program-level Break-Even Analysis Calculator, allowing program administrators to project revenues and expenditures by changing variables such as tuition, numbers of students, and staffing levels. (A simple sketch of the underlying arithmetic follows the list below.) This interactive tool can be used to:

  1. Identify the resources required to implement a program, including personnel, facilities, equipment, and materials, whether paid for directly or contributed in-kind, and subsequently to calculate program costs.
  2. Explore ways to reduce costs.
  3. Identify the effects of different levels of tuition and scholarships.
  4. Calculate fundraising needs and demonstrate to potential funders why their help is needed.
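
To make the break-even logic concrete, here is a minimal sketch, in Python, of the kind of calculation such a tool performs. It is an illustration only, not AIR's actual calculator: the function names and all dollar figures are hypothetical.

```python
import math

# Minimal break-even sketch for a degree or certificate program.
# Illustrative only: names and figures are hypothetical and are not
# drawn from AIR's actual Break-Even Analysis Calculator.

def annual_balance(students, tuition, avg_scholarship,
                   fixed_costs, variable_cost_per_student,
                   fundraising=0.0):
    """Net revenue (revenues minus expenditures) for one program year."""
    revenue = students * (tuition - avg_scholarship) + fundraising
    costs = fixed_costs + students * variable_cost_per_student
    return revenue - costs

def break_even_enrollment(tuition, avg_scholarship,
                          fixed_costs, variable_cost_per_student,
                          fundraising=0.0):
    """Smallest enrollment at which revenues cover costs."""
    net_per_student = tuition - avg_scholarship - variable_cost_per_student
    if net_per_student <= 0:
        raise ValueError("Each added student loses money: raise tuition, "
                         "reduce per-student costs, or increase fundraising.")
    return math.ceil((fixed_costs - fundraising) / net_per_student)

# Example: $400,000 in fixed costs (faculty, administration), $18,000
# tuition, a $6,000 average scholarship, $2,000 in variable costs per
# student, and $100,000 in annual fundraising.
print(break_even_enrollment(18_000, 6_000, 400_000, 2_000, 100_000))  # 30
print(annual_balance(25, 18_000, 6_000, 400_000, 2_000, 100_000))     # -50000
```

Exploring cost reductions or different tuition and scholarship levels (items 2 and 3 above) amounts to re-running the same calculation with different inputs, which is exactly what an interactive calculator makes easy.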

Review of Financial Sustainability Plans

We created benchmarks for reviewing the financial sustainability plans submitted by each institution. The four criteria described below are based on the assumption that financial sustainability is a process, not an end. In other words, although the process aimed at achieving financial sustainability may not yet be completed, the financial sustainability plan contributes to a road map that programs can follow into the future.

Continue reading »

Weekend Link Roundup (June 7-8, 2014)

June 08, 2014

Our weekly roundup of new and noteworthy items from and about the nonprofit sector....

Climate Change

On the Bloomberg View site, Cass Sunstein, the Felix Frankfurter professor of law at Harvard University, provides three rebuttals to the so-called Sophisticated Objection of the fossil fuel lobby and its supporters, an argument that acknowledges climate change is a serious problem but holds that unilateral action by any country will impose significant costs without producing significant benefits.

Data

On the Markets for Good blog, Lucy Bernholz suggests it's time we started thinking more seriously about how to "collect, organize, govern, store, share, and destroy digital data for public benefit" – and offers a couple of "deliberately half-baked" ideas to get us started.

"Good data practice is not just about the technical skills," writes Beth Kanter on her blog. "There is a human side [as well].  It is found between the dashboard and the chair. It includes organizational culture and its influence on decision-making – from consensus building on indicators, agility in responding to data with action, and sense-making. It is the human side that helps nonprofits use  their data for learning and continuous improvement." 

Education

On the Inside Philanthropy site, L.S. Hall weighs in with a surprisingly generous consideration of the education philanthropy of Facebook co-founder Mark Zuckerberg and his wife, Priscilla Chan.

Evaluation

Nancy Roob, president and CEO of the Edna McConnell Clark Foundation, argues in a post on the Stanford Social Innovation Review blog that while fears of rigorous evaluation are "justifiable," a broader perspective on the purposes of evaluation can help allay them.

Continue reading »
