As with any evaluation, you will need to collect data to help answer your evaluation questions. Unlike other evaluations, you will not plan all of your data collection up front; instead, you will identify new data to collect and new ways to analyze it as you go, as questions and needs change.
Data collection in the context of developmental evaluation is often qualitative, but not always. It can leverage a wide variety of methods, including many of the same methods used in formative and summative evaluation, and can be designed using many different frameworks.
At any given moment, you are likely to have multiple questions that you could be answering through data collection. It is important to thoughtfully prioritize which questions to answer and at what depth. You must weigh such things as the extent to which answering the question can substantially advance the work, whether it relates to a major threat or opportunity, and when your partners might use the results to inform a decision.
Some questions will require much deeper investigation than others to answer. If a lower-priority question can only be answered with in-depth data collection, chances are it is not worth your time. If it requires fairly minimal data to answer, it may be worth your time, but also consider how often you are bringing new information to your partners and asking them to pause their work to consider it. Prioritizing is not just about your time; it is also about the time it takes your partners to participate in the learning process.
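The weighing described above is a judgment call, but it can be useful to make the criteria explicit. The sketch below is purely illustrative: the criteria names, the 1–5 scale, and the weights are hypothetical, not part of any standard DE method, and real prioritization would remain a qualitative conversation with partners.

```python
# Illustrative only: a toy weighted-scoring sketch for prioritizing
# evaluation questions. Criteria, weights, and the 1-5 scale are
# hypothetical assumptions, not an established DE instrument.

def priority_score(advances_work, threat_or_opportunity,
                   decision_timing, partner_burden,
                   weights=(0.35, 0.25, 0.25, 0.15)):
    """Score a candidate question; each criterion rated 1 (low) to 5 (high).

    partner_burden is reverse-scored: a question that demands a lot of
    partner time to answer should rank lower, all else being equal.
    """
    w1, w2, w3, w4 = weights
    return (w1 * advances_work
            + w2 * threat_or_opportunity
            + w3 * decision_timing
            + w4 * (6 - partner_burden))  # reverse-score burden

# Hypothetical candidate questions, each scored on the four criteria
questions = {
    "How is the collaborative structured?": priority_score(4, 2, 3, 4),
    "What is blocking partner buy-in?": priority_score(5, 5, 5, 2),
}
top_question = max(questions, key=questions.get)
```

The point of the sketch is not the arithmetic but the discipline: naming the criteria and the trade-off with partner time forces the prioritization to be deliberate rather than instinctive.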
Once you have an evaluation question, three simple questions can help you decide how to answer it.
Question 1: Does data already exist to answer this question?
Sometimes there is secondary information already available to help answer the question. Look for such things as existing documentation, surveillance data, program data, economic data, demographics, or other secondary sources.
Question 2: What framework can help me think about this question?
Rather than relying on your past experience and instincts about the best ways to answer the question, it is often helpful to leverage frameworks about how the world works, how people interact, how services are deployed, and how systems work to help identify the right questions to ask and the right way to analyze the data.
For example, if your evaluation question asks, “How does the structure of the collaborative affect the types of decisions being made?” you might want to think about how power dynamics and levels of involvement in decision-making affect the types of decisions being made. Leveraging frameworks on both of these topics might help you to identify questions to ask and issues to look for during observation. The frameworks might also help you think about your analysis differently, leading to an exploration of how decisions made at different levels of involvement are more or less influenced by the positional power of participants, something that would most likely not surface intuitively.
In other words, frameworks push us beyond the limitations of our own experiences and increase our ability to understand what is occurring.
Question 3: What methods can help me answer this question?
It helps to have a large toolbox of methods to reduce the risk of defaulting to the same old, same old even when the same old is not the right tool. For example, rather than defaulting to answering a question through yet another key informant interview, remember that a key informant interview is just a data collection technique – how you design that protocol and analyze the data is the “method.”
There will be times when your toolbox does not have the right method available. In the context of DE, you rarely have time to learn an entirely new method, so it is very beneficial to have a network of researchers you can tap as needed.
In the fluid context of DE, it can be difficult to know when you have “findings” ready to share. Instead of findings being determined by a pre-defined point in your evaluation plan, you must determine whether it is the right moment to share what you’ve learned.
You can use the four criteria below to help make this decision. They don’t all apply equally in every setting, but they do provide some rough structure when applied sequentially.
Criterion 1: Rigor – Do I trust what I’ve learned?
You are likely to have greater trust in what you’re learning if you have multiple sources of evidence and have taken time to think about how your own biases feed into your conclusions. Systematic analysis of the data can also help you better trust what you’re learning.
Criterion 2: Perspectives – Will my partners trust what I’ve learned?
Your partners may be more likely to trust findings that have substantial evidence behind them and that represent a wide variety of perspectives on what is occurring. When you are first building their trust, you may need a great deal more evidence than you will later in an initiative.
Criterion 3: Usefulness – Is what I’ve learned important to anyone other than me?
Not everything you learn is equally important. Sometimes, something that is compelling and interesting to you is not particularly useful in the context of the decisions being made by your partners. You want to avoid using their time to tell an interesting story that won’t feel useful or meaningful to the group.
Criterion 4: Timeliness – Is what I’ve learned relevant to my partners right now or very soon?
If there is no practical use for the information in the short term, wait to share the findings until they are useful. You can use the additional time to investigate the issue in more depth.
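Applied sequentially, the four criteria act like a series of gates: a finding that fails an earlier criterion never reaches the later ones. The sketch below is a deliberate simplification for illustration only; in practice each criterion is a qualitative judgment, not a boolean, and the field names are hypothetical.

```python
# Illustrative only: the four criteria applied as a sequential gate.
# Reducing qualitative judgments to booleans is a deliberate
# simplification; the dictionary keys are hypothetical.

CRITERIA = ("rigor", "perspectives", "usefulness", "timeliness")

def ready_to_share(finding):
    """Return (ready, first_failed_criterion).

    Checks the criteria in order and stops at the first failure,
    mirroring the sequential application described above.
    """
    for criterion in CRITERIA:
        if not finding.get(criterion, False):
            return False, criterion
    return True, None

# A hunch fails at the very first gate (rigor), which is why sharing
# it counts as "breaking the rules" below.
hunch = {"rigor": False, "perspectives": True}
ready, failed_at = ready_to_share(hunch)
```

Stopping at the first failure also tells you what work remains: a finding that fails on timeliness alone is simply shelved until a decision point approaches, while one that fails on rigor needs more investigation first.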
Breaking the Rules
There may be times when you want to share something that fails to meet even Criterion 1 – a hunch, something that requires more investigation before it can be called a finding. If you choose to share it, it is important for your partners to understand that it is not an expert insight or deep understanding they should run with, but rather a check-in to see whether the hunch is worth investigating. This practice is one of the things that distinguishes developmental evaluators from strategy advisors: while both may use systematic investigation to support their recommendations, a developmental evaluator must always take a systematic approach before presenting something as a finding.
The results of DE are rarely delivered through a formal report. Instead, findings are shared in whatever form is useful and timely: expect formal and informal conversations along with memos, emails, presentations, and the like. When deciding when, how, and to whom to convey the information, consider:
- What decision/action points are coming up where the information could be useful?
- Who is most affected by the issue you investigated?
- Who can use the information to make a decision or take an action?
- What type of information needs to be conveyed?
- Will visuals or tables help to make the information more memorable or understandable?
- What level of detail is needed to provide an accurate description of the findings?