Once you’ve defined your evaluation question, you need to decide what data to collect and how you are going to collect it.
It is important to strike a balance between what is easy to measure and what information will be most useful. There may be things that are quite easy to measure but don’t help you answer your evaluation questions. Keep in mind that “easy to measure” is relative and will differ for each organization depending on time, skills, and resources. For example, it may be easy for your organization to track attendance at meetings or public forums, but attendance numbers alone don’t tell you whether your forum resulted in audiences who were more likely to prioritize your issue, more likely to take action, or who even learned something from the forum.
Some of the things needed to answer your evaluation question, such as whether your audience took action after the forum, can be very difficult to measure. Instead, you may have to measure the actions people sign up to take on commitment cards as they exit the forum: a statement of intent, rather than documentation of actions actually taken.
Ultimately, you want to let your evaluation question(s) guide your selection of data collection tools. Once you have considered whether something is measurable and whether you have the time and resources to measure it, make sure the information you collect will be meaningful and useful.
The next section provides and describes in detail several data collection tools and templates. Many are general evaluation tools with which you may already be familiar, such as surveys and interviews. There are also advocacy evaluation-specific tools (e.g., bellwether interviews, intense period debriefs, intensity of partnership assessments, policymaker ratings, and champion/constituent tracking) uniquely designed to collect information that is useful in advocacy settings.
The tools vary in the time, expertise, and resources needed to collect and use the data, and in the level and type of detail they provide. For example, a focus group can provide rich, qualitative information on a topic, but it requires an experienced facilitator, and it can be difficult to schedule multiple individuals for the same meeting time. The results of focus groups can also be time-consuming to analyze. Tracking/logging, which results mostly in counts or other quantitative information, can be completed fairly quickly by most staff members, but may not yield very detailed or unexpected information.
All data has limitations, and rarely is one data source sufficient to answer an important evaluation question. This is because questions are best answered by data that meet five key criteria (listed below), and few data sources meet them all. These criteria are designed to help you balance the pros and cons of each individual method when selecting your data collection tools and to understand what combination of tools will be most helpful.
- The data is collected from different stakeholders, such as the implementers of the strategy, audiences of the strategy who are satisfied/engaged, and audiences who are dissatisfied/disengaged.
  - A lack of perspectives can leave gaps in what you learn, leading to an inaccurate understanding of what is really happening.
- The data is collected and analyzed without bias toward a specific result.
  - Bias can be introduced by limiting perspectives, but it can also be introduced by the way a question is worded, the content of the answer choices, and the way responses are recorded.
- The data answers questions that are important for improving your strategy; nothing is collected merely because it is interesting.
  - Advocacy work often occurs when time and resources are limited, so you do not want to spend time collecting data that will not be actionable for you or your organization.
- The data is high-quality, as accurate as possible, and draws on multiple sources of information. This is often referred to as “rigor.”
  - Accuracy can be improved by piloting your tools (e.g., having others review and revise a survey or interview questions before collecting data) and training your data collectors (e.g., ensuring everyone is trained to do observations the same way).
- The data is collected, analyzed, and used quickly, before it becomes irrelevant to the strategy.
  - In practice, this means the data must be collected, summarized, and used before the key decision point it can inform has passed.
Before you design your own data collection tools, there are a few questions you should ask:
- What types of summarizing or analyzing will I be comfortable doing?
- Do I really need to include open-ended responses (which can be more time- and resource-intensive to analyze)?
- Can I find a way to automate this aspect of data collection (e.g., using an online survey program)?
- How often will I need to collect and analyze this information for it to be actionable?
- How many people will be responding/participating, or how many events will I be observing?