Often, we view evaluation as a test, measuring whether we’ve achieved the outcomes outlined in our original plan. This can make evaluation feel uncomfortable, even scary, rather than helpful and informative. However, using data to support innovation and adaptation is critical to making a meaningful difference, and integrating evaluation and learning into the work of the backbone is a vital part of this process. A dynamic backbone organization doesn’t have to be deeply knowledgeable about evaluation, but it should recognize the value of evaluation and pull in strong thought partners to support learning along the way.
In efforts to achieve deep and sustainable change, measurement and evaluation are necessary for tracking a collaborative’s progress and learning how to improve. All too often, however, the focus is on what a collaborative has accomplished as measured by a standard set of indicators, such as whether a group has established a common vision or is actively sharing data. This is not to say there is no value in these indicators; they have a clear place in measuring a collaborative effort’s progress and outputs. However, they don’t provide insight into how milestones were achieved, and they say little about the process, internal dynamics, or external factors that contributed to the outcomes, leaving systems change efforts without context.
- There are great resources available to backbones to check their own progress and outputs, for example, FSG’s blog on backbone effectiveness and their collective impact backbone measurement chart.
In order to effectively integrate learning into the work of the collaborative, backbone organizations need to go beyond these standard measures, exploring the deeper, underlying causes of critical outcomes. This requires building in time in backbone and collaborative meetings for reflection on data, collective interpretation of outcomes, and a commitment to asking qualitatively different questions. Rather than counting how many, truly integrated evaluation and learning practices ask why. The sections below outline how to align evaluation with existing work, ask good learning questions, and use data to make informed decisions. A truly dynamic backbone can use these tools not only to meet standard reporting requirements, but also to create the space and commitment to learning that makes evaluation deeply beneficial (and a whole lot less scary!).
The first step in any evaluation design, large or small, is being clear about who will use the results of the evaluation and for what purpose. It is also critical to identify the level at which the evaluation will take place. You may also want to revisit (or create) the collaborative’s theory of change to clearly articulate not only the intended outcomes of your collaborative’s work, but what it will take to get there. The strategic roadmap approach described in the section on Vision and Governance can be used in the same way as a theory of change.
Example audiences who might use the results of the evaluation to make decisions include:
- Steering Committee or other leadership staff involved with the collaborative;
- Work groups and partners involved in implementing changes in the work;
- Backbone staff; and
- Funders and fundraisers.
Example purposes for evaluation include:
- Accountability and/or understanding the impact of the effort;
- Performance monitoring (did we do what we said we would do, and what happened?);
- Understanding the environmental context and its influence; and
- Periodic strategy improvement or real-time strategic learning.
In the context of systems change work, evaluation might focus at many different levels:
- The collaborative process (e.g. how the backbone is functioning, the quality of collaboration, the evidence of alignment among partners);
- The policy and systems changes: not just whether they happened, but progress toward them and what is helping that progress;
- The improved community or client level outcomes that result from these systemic changes and how the systems changes relate to those improved outcomes; and
- The outcomes of specific new programs or services that have been integrated into the system as a result of the collaborative’s work.
The backbone organization can take the lead in bringing partners together to decide who will use evaluation information, for what purpose, and at what level. While an evaluation expert is an important part of moving evaluation forward in a collaborative effort, there is much groundwork you can lay as a backbone to ensure you have the right type of evaluation to meet your collaborative’s needs.
A note about hiring an evaluator: Doing this type of background work before you bring in an evaluator can be helpful. The evaluator with the skills and knowledge to help you measure progress on systems change and improve that work is likely not the same evaluator as the one with strong program evaluation skills who can help you measure the impact of a new service.
Now that you have a focus, it’s time to decide what questions your evaluation needs to answer in order to inform your decision-making. To do this, work with your partners to identify the types of decisions you need to make. Will you be making decisions about which partners are needed to advance a policy or systems change effort? Will you be deciding which prototype or experimental idea to scale? The evaluation question for the former might focus on the environment in which the change must happen, who is credible, and who has influence over key decision-makers. The evaluation question for the latter might focus on the outcomes of the prototypes and the populations most likely to benefit from expanding them.
By taking the time to carefully craft the question you hope to answer, you are giving yourself direction on what information will be useful to further advance your work. As a backbone organization, you can introduce these core evaluation questions into discussions with your core team, steering committee, community leadership, etc.
Evaluation questions often start with:
- How did our…
- What happened when…
- What influenced…
- What changes did we…
- What patterns do we see…
Evaluation questions often include:
- …contributed to…
- …among our audience…
- …most likely to…
- …resulted from…
- …by [a specific date]…
- …within [a specific timeframe]…
Using data to inform decision-making doesn’t happen without intentionality. It is not uncommon for an evaluator to present findings, for the group to have a great dialogue about them, and then for everyone to return to business as usual and make the same decisions they would have made had the evaluation not existed.
A strategic backbone organization can help integrate data into how groups make decisions. Data can supplement intuitive understanding with new perspectives and information to help validate the existence of a problem, discover new aspects of the problem, advocate for the problem to be solved, as well as surface and vet potential solutions. Decision-making supported with high-quality data can be more strategic by helping collaborative efforts and individual leaders direct resources to where the greatest impact is possible. Data-informed learning can and should occur in real-time as strategies are being implemented.
To integrate data into decision-making, you will need to make sure the questions being asked are actionable, involve the right people in interpreting the data, present the results in a way that makes them useful and accessible, and facilitate the dialogue in a way that increases the likelihood that people will use the results.
- The Data as a Tool for Change Toolkit provides practical, how-to information on integrating data into decision-making.
Equity and inclusion should be a part of any evaluation plan and collaborative effort, because, as is well established, things that don’t get measured don’t get done.
For example, many organizations and funders talk about having diverse steering committees as an important part of a successful initiative, but how do you measure that? Do you know the racial and ethnic composition of your steering committee and, if not, how do you go about starting that conversation?
Another example is how data is collected and analyzed. Are your common metrics disaggregated by race, gender, and other characteristics? Including this level of detail upfront in your data collection plan will allow your group to talk about how or why some groups may be experiencing disparities and how you might change things in your collaborative to close the gap. Disaggregating the data by race and ethnicity, income, or even place (if you are able to collect that level of data) can also point toward gaps in reach and outcomes for your collective efforts and help engage new partners, as the sketch below illustrates.
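To make the disaggregation step concrete, here is a minimal sketch in Python with pandas, assuming participant-level data in a simple table; the column names (`race_ethnicity`, `outcome_met`) and the values are hypothetical, stand-ins for whatever your collaborative actually collects:

```python
import pandas as pd

# Hypothetical participant-level records; column names and values
# are illustrative only, not a prescribed data model.
df = pd.DataFrame({
    "race_ethnicity": ["Black", "White", "Latino", "Black", "White", "Latino"],
    "outcome_met":    [1, 0, 1, 0, 1, 1],  # 1 = participant achieved the target outcome
})

# The overall rate can hide disparities that disaggregation reveals.
overall = df["outcome_met"].mean()

# The same common metric, disaggregated by race/ethnicity,
# with group size and each group's gap from the overall rate.
by_group = (
    df.groupby("race_ethnicity")["outcome_met"]
      .agg(rate="mean", n="count")
      .assign(gap_vs_overall=lambda t: t["rate"] - overall)
)

print(f"Overall outcome rate: {overall:.0%}")
print(by_group)
```

Reporting group size (`n`) alongside each rate matters: a gap computed over a handful of participants may be noise rather than a disparity, and that caveat belongs in the conversation when the group interprets the results.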
Funders are just as interested in knowing about the impact of their work as collaboratives are about their own work. This means they often require specific monitoring and tracking tied to the outcomes they are measuring. These reporting requirements may be basic input-output types of measurements – e.g. program participants, activities, and reach. Or, they might be financial reports detailing how grant funds were used. While these metrics are helpful to funders, they don’t necessarily measure the initiative’s impact or provide useful feedback for learning within the collaborative.
It is important to balance what will best serve the broader initiative with what aligns with funding requirements. More and more, funders who prioritize collaborative work (including collective impact models) are amenable to reporting focused on outcomes more than outputs. A dynamic backbone organization can help funders understand this type of bigger-picture reporting by demonstrating the value of an iterative learning process focused on strategy improvements in service of deeper impact. For more information about working with funders to help them understand evaluations for learning and improvement, see our toolkit on Strategic Learning.
Rather than waiting until the end of a program to discern results, integrated evaluation and learning gives the backbone opportunities to consistently and effectively measure progress while maintaining a focus on the meaningful change the collaborative is seeking to create. One concern noted by both funders and leaders of social movements is that more traditional evaluation can hinder the progress of movement building, reinforcing traditional power imbalances between funders and grantees. Further, traditional evaluation can impose a one-dimensional standard for measuring a process of change that is multidimensional in nature. This is particularly true in the context of collaborative efforts, which bring together multiple sectors and diverse sets of stakeholders to address complex problems. By applying traditional evaluation methods to a non-traditional setting, evaluators risk using the wrong metrics, indicators, and measurement strategies, misrepresenting the progress of the movement and undermining continued funding and support.
Designing and implementing evaluations that report impact effectively while remaining adaptive and flexible is a key step in preparing for the reality that a collaborative’s activities will change as its external environment changes. As the strategic backbone, you have an opportunity to shape the evaluation, help find and onboard evaluators, and keep the evaluation conversation open with your funder so their expectations can align with your initiative’s needs. You can also take a leadership role in helping learning happen along the way, rather than waiting for evaluation reports to surface every six months or so.