Welcome to the Community Pages of the Developmental Evaluation Toolkit. The pages in this section have been created by evaluators and partners of developmental evaluation who want to share their experiences with developmental evaluation. Please contact us if you’re interested in contributing to the Community Pages.
Contributed by Kathleen Holmes and Melissa Logsdon of the Missouri Foundation for Health
When the Missouri Foundation for Health decided to focus on decreasing infant mortality rates, we knew our approach needed to be different from what we had done before. Supporting effective programs and services hadn’t solved the problem – something bolder was needed. We decided to use a collective impact model, funding backbone organizations in two sites, St. Louis and the Bootheel of Missouri, to create a movement for change.
I’m sure many of you have had that moment where you realize the problem you’re facing needs a whole new type of solution. For the Foundation, understanding that we needed to do things differently also made us think about how we could use evaluation differently than we had in the past.
We chose developmental evaluation, with a vision of using evaluative information to guide decisions about what needs to happen next and to explore whether and how the strategy is working.
However, finding the right developmental evaluators was a serious undertaking for us. They aren’t on every street corner, at least not in the Midwest! After talking with leading evaluators around the country and other foundations, we brought in Spark Policy Institute and the Center for Evaluation Innovation as coaches to build our capacity, the capacity of our grantees, and most recently, the capacity of two local evaluators who are excited to learn how to be developmental evaluators.
Our evaluation coaches worked with us to build our understanding of key concepts like emergent learning, complexity, systems building and mapping, cognitive traps, and levels of decision-making. We have adopted emergent learning practices in our own work, including using before-action reviews to plan important meetings within the strategy and in our other work.
Our coaches facilitated a dialogue with our grantees to explore which elements of infant mortality are simple, which are complicated, and which are complex, and then moved on to the different types of developmental evaluation questions we might want to answer. The dialogue resulted in each site crafting evaluation questions specific to the challenges it was facing.
The coaches also engaged in data collection, analysis and shared interpretation with the two sites. They worked with the St. Louis site to answer the question: What is a process and structure for engaging stakeholders – how can we best stage the engagement and motivate participation?
They worked with the Bootheel site, where two organizations jointly serve as backbones, to answer the question: How do potential partners (including those within the backbone organizations) view and prioritize infant mortality – their values, their views about its causes, their needs and barriers, and their experience working on the issue? Part of what they were exploring was the extent to which different stakeholders viewed the problem as a systemic issue versus a matter for the healthcare system or of individual responsibility.
Grantee staff were supported in interpreting the results of the developmental evaluation questions and applying the learning to their strategies. The backbone staff in St. Louis highlighted how they have been able to use the information to design and adapt strategy. In the Bootheel, the evaluation results led to continued engagement with the developmental evaluation coaches around decision-making models and ways of reaching consensus.
We learned many lessons from our year of developmental evaluation, perhaps the greatest of which is the need for a local developmental evaluator. We and our evaluation coaches have concluded that, while the most experienced developmental evaluator might not be available locally, engaging someone from a distance carries real costs. Relationships can be built from afar and trust developed through periodic in-person meetings, but when the really tough issues come up (like racism as a driver of infant mortality and whether and how to address it), an embedded local developmental evaluator who can attend meetings and more readily join in-person one-on-ones may be better positioned to help work through them.
Our other lessons learned include:
- Because developmental evaluation is a fairly new approach, how you introduce it to the participants is important. It’s critical to provide them with something they can concretely engage with and use, versus providing mostly theory up front.
- Early in a collective impact effort, there are so many moving parts and such a high level of uncertainty that developmental evaluation is critical but also difficult to prioritize. Moving quickly to a practical, useful result from the evaluation is necessary to raise its priority.
- Participants need to build their understanding of developmental evaluation in order to know how to engage with it. It can’t happen in isolation, off to the side; it needs to steadily intersect with the work.
- It’s important to select an evaluator who has experience being adaptive and flexible, open to changing scopes of work and able to be your thought-partner in the effort, rather than your contractor or an outside observer.
- Developmental evaluation is not a standalone approach. It relies on other tools, such as systems mapping, emergent learning, and complexity theory, to bring a richer understanding to the work.
For more information about our initiative, please visit our website at: https://www.mffh.org/content/741/infant-mortality.aspx.
Submitted by Spark Policy Institute
EarthCube is a community-led cyberinfrastructure initiative that is attempting to develop a new way of advancing earth sciences through the unprecedented sharing of data. EarthCube depends on both the emergence of new cyberinfrastructure and on triggering changes in the scientific process. The National Science Foundation initiated EarthCube to increase geoscientists’ contributions to a more sustainable future through improvements in our understanding of the Earth as a complex and changing planet.
The developmental evaluator’s role with EarthCube is to support the development of a community-driven governance structure that builds the trust and buy-in needed to achieve the level of voluntary, uncompensated participation that EarthCube’s success requires. Developmental Evaluation (DE) fit the need partly because of the dynamic and complex environment, but also because EarthCube’s purpose is to disrupt how science functions and create a new, 21st-century model of science.
Developmental Evaluation Process
As of late 2014, Spark was one year into a two-year DE process. The DE team included a lead evaluator and others who supported data collection and analysis. They began the DE process with one-on-one conversations with nearly 20 key stakeholders and used that data to inform the first emergent learning (EL) dialogue, which surfaced uncertainty about how to build buy-in to the governance process. This led to a request for the DE team to reanalyze another researcher’s existing stakeholder-perceptions survey; the data held a wealth of information that a cluster analysis could bring together to generate a new way of thinking about EarthCube’s audiences.
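The page doesn’t describe the mechanics of that reanalysis, but a cluster analysis of survey responses along these lines typically standardizes the answers and groups respondents into a small number of candidate audience segments. The sketch below is a minimal illustration in Python using k-means; the survey items, the 1–5 scale, and the choice of four clusters are all hypothetical stand-ins, not the actual EarthCube data.

```python
# Minimal sketch of an audience-segmentation cluster analysis on
# stakeholder survey data. All column names, scales, and the cluster
# count are hypothetical -- the real EarthCube survey is not public.
import numpy as np
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Stand-in for the survey: Likert-scale (1-5) responses on attitudes
# that might differentiate audiences (e.g., openness to data sharing).
survey = pd.DataFrame({
    "shares_data_now":     rng.integers(1, 6, 200),
    "values_open_science": rng.integers(1, 6, 200),
    "trusts_cyberinfra":   rng.integers(1, 6, 200),
    "time_to_participate": rng.integers(1, 6, 200),
})

# Standardize so no single item dominates the distance metric.
X = StandardScaler().fit_transform(survey)

# Partition respondents into a handful of candidate audience segments.
kmeans = KMeans(n_clusters=4, n_init=10, random_state=0).fit(X)
survey["segment"] = kmeans.labels_

# Profile each segment by its mean responses, so the learning dialogue
# can give each one a narrative label (e.g., "willing but time-poor").
print(survey.groupby("segment").mean().round(2))
```

In a use like this, the segment profiles rather than the algorithm are the point: they give the learning dialogue a shared vocabulary for talking about distinct audiences and their likely motivations.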
Impact of Developmental Evaluation: The initial emergent learning dialogue resulted in a shift from messaging that unintentionally implied, “We’re EarthCube and we’re here to help” to a message that explained, “We’re EarthCube and we need your help.” Participants felt the shift better aligned with academic culture and the likely motivators for participation.
Once the EarthCube Governance leaders had built momentum and recruited stakeholders into their process, they hosted a series of 2.5-day convenings of relatively homogeneous stakeholder groups to understand how they wanted to be engaged in the governance and leadership of EarthCube. The DE role included co-facilitating the convenings, with a specific focus on facilitating intensive debriefs at lunch and dinner each day, capturing and using data in real time, and ultimately creating space for leaders and other participants to actively reshape the agenda in order to achieve either the originally desired meeting outcomes or emergent ones. From a DE perspective, the convenings required steady attention to power dynamics, cognitive biases in how decisions were being made, team performance issues, and more. Stakeholder interviews between meetings supplemented the learning that occurred during them, including surfacing some of the competing tensions the governance process had to resolve.
Impact of Developmental Evaluation: The real-time learning at the convenings resulted not only in adjustments to the agenda, but also changes to expectations about how far each community of stakeholders could be moved based on assessments of their readiness and a shift from a top-down governance model to more of a “commons” (collective impact) approach.
The DE process then shifted to leveraging ideas from design thinking in order to engage participants in exploring the implications of their first draft of the governance structure. During the “All Hands Meeting” of over 100 stakeholders, the evaluation team designed and facilitated simulations and visualization strategies to test multiple variations of the governance model against the priority problems that participants identified as part of EarthCube’s upcoming agenda.
Impact of Developmental Evaluation: The simulations and visualizations led to greater clarity in how governance components can intersect to achieve EarthCube’s vision, including how to leverage the adaptive and emergent aspects of the governance model.
After the meeting, a draft governance structure was implemented and the DE role switched again, this time to providing a steady flow of learning and feedback loops during two six-month “sprints” in which the design would be tested while being implemented. The feedback loops focus on understanding internal and external legitimacy, trust, the quality of decision-making processes, stakeholder support for the decisions being made, and transparency. The team is also exploring the impact of the relationship between EarthCube Governance and NSF, including the difficult balance of creating space for a community-driven process while maintaining NSF’s traditional level of oversight and involvement.
For more information about EarthCube, please visit www.earthcube.org.
Submitted by the OMG Center for Collaborative Learning
In 2009, the Bill & Melinda Gates Foundation invested more than $20 million in the Community Partnerships portfolio, with the goal of doubling, by 2025, the number of low-income students who by age 26 earn a postsecondary degree or a credential with genuine value in the workplace. The objective was to understand what it takes for cross-sector partnerships to advance a community-wide postsecondary completion agenda that instigates system-level changes and ultimately improves postsecondary completion outcomes for students.
From 2009 to 2013, seven communities received Community Partnerships funding through two sister initiatives – Communities Learning in Partnership (CLIP) and Partners for Postsecondary Success (PPS) – to develop and implement a multi-sector strategy that included community and four-year colleges, K-12 school districts, municipal leaders, local businesses, community-based organizations, parents and students, and others. Communities also received support from intermediary partners who provided technical assistance and coaching throughout the grant period: the National League of Cities’ Institute for Youth, Education, and Families (for CLIP) and MDC (for PPS). An additional eight communities were involved in the portfolio as affiliate cities, participating in regular convenings, phone calls, and webinars with the seven implementation sites.
The Developmental Evaluation
Developmental evaluation is particularly well suited for initiatives that are highly innovative, in the early stages of development, or that occur in complex and/or shifting environments. From the beginning, the Community Partnerships sites used a loosely defined Theory of Change, which stipulated that cross-sector partnerships would use data and leverage key stakeholder commitment to align policies and practices to promote postsecondary success. In other words, evidence of systems change would emerge across four mutually reinforcing areas: building commitment among stakeholders, using data, strengthening partnerships, and aligning policies and practices. If we saw evidence of change across these four areas, then we would know that the “system” had in fact shifted. It was entirely up to the selected communities, armed with deep knowledge about their local context, to make sense of these four “buckets” and shape the work as they saw fit.
As the evaluation partner, the OMG Center remained in near-constant contact with the grantees, intermediary partners, and funder. Given the scope of this engagement, we structured our project team so that each member could focus in depth on several communities. The team members traveled on a number of occasions to their assigned sites – visiting regularly with key stakeholders, attending partnership meetings, and building relationships with site leadership from postsecondary institutions, local government agencies, community foundations, and other local organizations.
Through these visits, we gained a deeper understanding of the culture and context of each community partnership, and that knowledge enabled us to more accurately document communities’ approaches, share what we observed, and fine-tune strategies “on the ground.” This knowledge also informed our approach to asking hard questions, elevating themes, helping partners understand our findings, and together refining the Theory of Change based on the reality of change. Over the course of the evaluation, our understanding of effective strategies evolved, as did our understanding of how best to measure the systemic shifts resulting from the Community Partnerships investment.
In many ways, developmental evaluation helps stakeholders piece together a puzzle without the benefit of a defined picture to guide their efforts. Furthermore, it helps to bring that picture into clearer focus for future investments. In the Community Partnerships evaluation, a developmental approach provided us with the nimble and responsive path necessary to understand how local innovation occurred, and how communities tackled a complex systems change agenda.
For more information about the Community Partnerships evaluation, see:
- Bill & Melinda Gates Foundation Community Partnerships Portfolio Issue Briefs Series.
- Bill & Melinda Gates Foundation Community Partnerships Portfolio Final Evaluation Report.
- “Lessons from the Community Partnerships Portfolio,” a blog post for FSG, published by Justin Piff and Sarah Singer Quast of the OMG Center.
- “Embracing Emergence: How Collective Impact Addresses Complexity,” a Stanford Social Innovation Review article that features the OMG Center’s Community Partnerships work.
Spark Policy Institute does not endorse the practices encouraged by authors in their individual Community Pages. Copyright of materials on the Community Pages remains with the authors of each page, and any references to the content should credit the authors.