The Farm to School Evaluation Toolkit provides a wide array of data collection tools, with specific tools for each FTS target audience available for direct download from the toolkit attachments.
Few, if any, of the existing tools will be perfect for your FTS evaluation. This part of the toolkit provides guidance on how to select the best existing tool, how to adapt it for your evaluation needs, and how to validate it. You will also learn when to collect quantitative or qualitative data.
The first step is to decide what type of methodology you want to use.
If you are collecting data from a lot of people, surveys and questionnaires with categorical or scale answer options are ideal. Numerical data is common in evaluations, and for good reason. This type of data can be displayed in simple graphics, which are effective at conveying information to your stakeholders. The data can be easily categorized into different groups, allowing for comparisons across groups, such as how parents versus teachers value the local food on the cafeteria salad bar.
Additionally, information reported in numerical formats like percentages or graphs is often seen as more objective and can therefore provide more compelling evidence of program impact.
If you want to know the “Why” and the “How” of a program’s effect on a population, then you’ll need to collect qualitative data.
Conducting interviews or holding focus groups with participants in your farm to school program can yield rich information about people’s experiences and perspectives, elicit ideas, and uncover insights not previously considered. Qualitative methods, however, are time-consuming to set up and implement, and the data (notes or verbatim transcripts) can be overwhelming to analyze.
Because qualitative methods cannot be conducted with a large number of people and the analysis relies on reading to find themes and key concepts, qualitative data is sometimes considered more subjective than quantitative data.
Data collection tools also fall on a continuum from qualitative to quantitative. In the table below, the method called “participant observation” sits at the far qualitative end and “surveys” at the far quantitative end. In between lie a variety of other methods, including open-ended interviews (which are more like a conversation, where your questions are unscripted and emerge as the discussion unfolds), focus groups, semi-structured interviews, document reviews, and observational tools with pre-defined scoring categories.
A Continuum of Methods
Let’s consider what this would look like in a farm to school evaluation of a school garden program curriculum.
First ask yourself: Do you mainly need to answer questions about “What” students are learning in the garden lessons, OR do you need answers about “How” students can best learn in the garden lessons?
- The “What” questions can be answered through surveys – online or paper.
- The “How” questions can be answered by using qualitative methods such as observing students learning about plant cycles or interviewing students about their learning experiences.
Each method has its advantages and drawbacks. There is no “right” method; rather, the best method depends on several factors you documented in Step 1 of your Evaluation Plan, including the purpose of your evaluation, the resources you have available, the target audience, and your timeline.
- Shackman, Gene (2009). What is Program Evaluation? A Beginners Guide
While there are many different types of data collection methods, the most common tools used in farm to school evaluations are observations, interviews, focus groups, and surveys. Each of these methods has its advantages and challenges. The publication “Designing Evaluation for Education Projects” provides a nice summary of each of the four methods, described below.
Purpose of Observational Tools
To gather information about how a project actually operates, particularly about the process.
Purpose of Interviews
To fully understand someone’s impressions or experiences, or learn more about their answers to questionnaires.
Purpose of Focus Groups
To explore a topic in depth through group discussion, such as reactions to an experience or suggestion, or to understand common complaints. Focus groups combine elements of both interviewing and participant observation.
Purpose of Surveys
To quickly and/or easily obtain a lot of information from people in a non-threatening way.
- NOAA Coastal Services Center (n.d.). Designing Evaluation for Education Projects
Not all methods are equally well suited for all audiences. Below is a guide for thinking about how four of the data collection methods likely to be used in FTS evaluations suit different audiences.
The above table shows:
- All four methods are well suited for use with adults and teachers, whether they are involved directly or indirectly in your program.
- The best method to use with leaders is an interview. Leaders tend to be busy people and will be less likely to respond to an invitation to take a survey or join a focus group.
- When collecting data from diverse participants, ideally you would involve someone from the community and cultural group of your target audience. You also need to ensure your data collection instruments are culturally appropriate. The range in each of the cells illustrates that there are better and worse ways to implement the method.
- When it comes to students, data collection can be tricky. Observational techniques are good to use with the youngest children but not as good with older children. The presence of the evaluator or data collector is more likely to affect the behavior of older children than younger ones.
The table can help you think about the appropriate method for your audience, but remember that it is only a guide. Regardless of whether your selected method and audience match up on the above matrix, you still want to consider how your audience will respond to a particular data collection method.
- NOAA Coastal Services Center (n.d.). Designing Evaluation for Education Projects
The Farm to School Evaluation Toolkit has seven attachments, one for each target group that farm to school programs could impact: students, parents, teachers, food services, producers, school leadership, and community. Each attachment includes evaluation outcomes relevant to that target group, the types of FTS program activities that would likely lead to that outcome, and data collection instruments appropriate for capturing data for the outcome.
Below is an example of Attachment 1: Outcomes for Students.
This example is outcome #2: student gains in knowledge and awareness about agriculture, local foods, and seasonality. The toolkit has two instruments that can be used as pre/post-test knowledge surveys. In addition to the downloadable surveys, there is a short description of each instrument and information on where to find it if the tool is part of a larger document. Here we see the first tool is recommended for 5th and 6th graders; the second tool is appropriate for 6- to 12-year-olds.
Each of the attachments links farm to school program activities to evaluation outcomes and provides the tools necessary to collect the data.
The data collection instruments in the toolkit are a great starting point for the instruments you will use in your evaluation. However, few existing instruments will be a perfect fit for your program. Adapting a measurement tool can seem daunting, but a straightforward process makes it a relatively easy task.
The first place to start is to determine whether the evaluation questions from an existing instrument fit your program. To do this, you can use a scoring table of criteria that are important to your evaluation. The scoring table below demonstrates how criteria can guide your selection of questions.
Remember, you have limited time and resources to conduct and analyze the evaluation data, so you want to gather input on your most important issues. Generally speaking, a question that meets many criteria is likely better than a question that meets only one criterion.
Tip: Create your own scoring table with the criteria that are important to your evaluation.
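If you keep your scoring table in a spreadsheet or script, tallying the scores is simple arithmetic. Below is a minimal sketch in Python, with hypothetical criteria and candidate questions (none of them drawn from the toolkit itself), of how a scoring table can rank questions by the number of criteria each meets:

```python
# A minimal sketch (hypothetical criteria and questions) of tallying a
# question-scoring table: rank candidate survey questions by how many
# of your evaluation criteria each one meets.

CRITERIA = [
    "Matches an outcome in our evaluation plan",
    "Wording is appropriate for our target audience",
    "Produces data we can analyze with the resources we have",
]

# 1 = the question meets the criterion, 0 = it does not.
candidate_questions = {
    "How many servings of vegetables do you eat each day?": [1, 1, 1],
    "Describe your family's overall approach to food.":     [0, 0, 1],
    "Can you name a fruit that grows in our region?":       [1, 1, 0],
}

# Print the questions from highest to lowest total score.
ranked = sorted(candidate_questions.items(), key=lambda kv: sum(kv[1]), reverse=True)
for question, scores in ranked:
    print(f"{sum(scores)}/{len(CRITERIA)}  {question}")
```

Questions with higher totals are stronger candidates for your instrument, though a single must-have criterion can still outweigh the raw count.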
Crafting Good Questions
There is a lot written on what is and is not a good question. We will consider several components of a question.
First is the wording of questions, which can be more challenging than it initially seems. When reviewing an existing question or writing your own, consider these three things:
- The particular people for whom the questionnaire is being designed (questions need to be appropriately worded for your target audience);
- The purpose of the data you are collecting; and
- The order of questions in the questionnaire.
In all cases, use simple wording—not to talk down to your participants, but to make sure you are not introducing any confusion or potential misunderstanding. Simple words and simple sentence structure are important.
Some Rules of Good Survey Questions
- Be specific. Do not use terms that can be defined in multiple ways; instead, state exactly what you mean. If you are asking how often a child eats vegetables in a day, do not make your answer choices “never,” “a little,” “average amount,” and “a lot” – these will mean different things to different people. Instead, be specific: “zero,” “1 time a day,” “2-3 times a day,” “4 or more times a day.”
- Do not ask demanding questions. Requesting information that requires a long answer, or asking many open-ended questions in a row, can discourage people. Respondents may skip over them or, worse still, opt out of the entire survey.
- Use mutually exclusive categories. Make sure that only one answer is possible. In cases where more than one answer is possible, make sure you let the audience know that they can choose more than one.
Existing Instruments as Templates
Some existing instruments may have specific questions relevant to your program; others may have types of questions or survey formats you believe would work well in your evaluation. Adapting a survey instrument can mean swapping out words, such as replacing a list of local foods on an existing instrument with the foods that are local to your area. But adapting can also mean using the ideas represented in the instrument.
Above is a Student Knowledge Survey. This is a great example of a survey format that is well suited for a pre/post-test of an FTS nutrition education program because it can assess gains in knowledge. If this seems like a promising instrument, you would need to check whether the curriculum for the activities you are evaluating teaches the information referred to in each question. While you are looking through your school’s curriculum, you could create additional questions in a similar style to the ones on the survey.
- MEERA. Set Goals and Indicators
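As a concrete illustration of what “assessing gains” means once the surveys are scored, here is a minimal sketch in Python, using hypothetical student scores: each student’s gain is their post-test score minus their pre-test score, and the average gain summarizes the group.

```python
# A minimal sketch (hypothetical scores) of summarizing knowledge gains
# from a pre/post-test: gain = post-test score - pre-test score.

# Hypothetical scores: number of correct answers out of 10 questions.
pre_scores  = {"student_a": 4, "student_b": 6, "student_c": 5}
post_scores = {"student_a": 7, "student_b": 8, "student_c": 9}

# Per-student gain, then the average gain across the group.
gains = {sid: post_scores[sid] - pre_scores[sid] for sid in pre_scores}
average_gain = sum(gains.values()) / len(gains)

for sid, gain in gains.items():
    print(f"{sid}: +{gain} questions correct")
print(f"Average gain: {average_gain:.1f} questions")
```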
Once you have the questions selected and adapted, it is time to validate the survey for your target group. Simply put, you need to test your instrument before you begin collecting data.
No matter what type of data collection tool you use, and even if you are using a tool that you did not need to adapt, you still need to test its validity with your population. The testing stage is also about determining how easily your data collectors can administer the tool (for example, if you are conducting interviews, holding focus groups, or using an observational data collection tool).
Pretests uncover these issues:
- Does each question measure what it is intended to measure?
- Do respondents understand all the words?
- Are questions interpreted the same way by all respondents?
- Does each close-ended question have an answer choice that applies to each respondent?
- Does the questionnaire motivate people to answer it?
- Are the answer choices correct?
- Does any part of the questionnaire suggest bias on your part?
Steps for pretesting:
- Evaluation team members “take it,” especially those who have not been involved in the development/adaptation of the instrument. This step will uncover issues that warrant revision of the instrument before testing it on the target audience.
- People similar to your target audience “take it.” With these testers, simulate the data collection procedure you plan to use. If it is a telephone survey, test it as a telephone survey. If it is a web-based survey, have your testers take it online.
- Obtain feedback about the questions. For each question, obtain feedback from your testers. Ask them questions based on the seven issues listed above. If they are taking the survey online, you can add text boxes at the bottom of each page for feedback. After the testing is over, remove the tester feedback boxes from the online survey.
- Assess if questions are producing the information you seek. Examine the answers collected from the testers. Do their answers make sense? Are the questions eliciting the information you were after?
- Revise! If you had to do a lot of revisions, you should repeat steps 2-4. If not, revise and seek input on the revisions from your evaluation team.
Tip: It is the RARE data collection instrument that does not need to be revised after it is pretested. Don’t despair if people point out problems – that is the purpose of a pretest!
In the second section of the Toolkit Evaluation Plan Template, you will document the choices you have made regarding data collection.
The table below lists general steps that anyone on the evaluation team could carry out. In your evaluation plan, you can include as much or as little detail for each step as you like. For example, you may want to add a step, “Create question scoring criteria,” before selecting the instrument.
Webinar #3: Choosing and Adapting Tools covers types of methods and their advantages and challenges, ways to assess the fit of a question for your evaluation, how to adapt an existing measurement tool, and how to pretest the tool. This webinar focuses on Steps 3 & 4 in the Overview & Steps Guide. Several handouts are used during this training: the Overview & Steps Guide, the Evaluation Plan Template, and at least one of the Toolkit Attachments. This is an interactive training. By the end, you will have learned how to complete an evaluation plan, specifically identifying evaluation steps, the timeline, and the key staff responsible for evaluation tasks.