September 15, 2015
"We're in the business of risk-taking," is something Chip Edelsberg, executive director of the Jim Joseph Foundation, likes to say. Generally speaking, Edelsberg's notion of risk-taking refers to the investments the foundation makes in its grantees and their programs. The mission of the foundation, which has assets of roughly $1 billion, is to foster compelling, effective Jewish learning experiences for young Jews. Between 2006 and June 2014, the foundation granted more than $300 million to increase the number and quality of Jewish educators, expand opportunities for Jewish learning, and build a strong field for Jewish learning (Jim Joseph Foundation, 2014). Rarely is there an established research base for the kinds of initiatives the foundation supports in Jewish education. In the spring of 2013, though, Edelsberg had another kind of risk in mind.
What might be gained, Edelsberg wondered, if foundation staff brought together a group of competing evaluation firms with whom they had worked in the past to consider ways to improve the foundation's practice and use of evaluation? The idea had emerged from a study, commissioned by the foundation, of its evaluation practices from its inception in 2006 through 2012. The study was conducted by Lee Shulman, president emeritus of the Carnegie Foundation for the Advancement of Teaching and Charles E. Ducommun Professor of Education Emeritus at Stanford University. Edelsberg thought it was a risk worth taking, and the board of the foundation agreed. Edelsberg also made the bold decision to allow a doctoral student in evaluation studies at the University of Minnesota to study the venture.
In the winter of 2013, a colleague of mine from the field of Jewish education who was then a staff member at the foundation heard about my research interest in the role evaluation plays in the work of foundations and their grantees and offered to connect me with Edelsberg. Edelsberg described the idea for what became the "evaluators' consortium," and I asked about the possibility of studying the process as a case study for my dissertation. By the time the consortium met for the first time in October 2013, I had launched the research, with the agreement of the foundation's board and the participating evaluators. The purpose of the study was to explore what occurred when a foundation inaugurated an innovative approach to evaluation practice, examining both the factors that supported successful implementation of the innovation and the impediments to its success. It also sought to provide insights into the elements of organizational culture, practices, circumstances, and structures that can support effective evaluation practice in the foundation field. The foundation gave me access to documents and invited me to observe meetings of the consortium held both in person and electronically. Over the course of the consortium's first year of operation, I interviewed all foundation program staff members, Shulman (who served as the facilitator), a member of the board, and each of the participating evaluators.