PME 802 - Module 3B (Steps 5, 6, & 7)
Step 5 - Data Collection & Analysis Methods
Data collection for this evaluation combines qualitative and quantitative approaches. A variety of data collection methods is most beneficial because "different methods reveal different aspects of the program" (Taylor-Powell & Steele, 1996, p. 6). Furthermore, the CDC writes that "employing multiple methods (sometimes called 'triangulation') helps increase the accuracy of the measurement and the certainty of your conclusions when the various methods yield similar results" (CDC, 2012).
Literacy & Numeracy Checklist:
"Checklists are used to encourage or verify that a number of specific lines of inquiry, steps, or actions are being taken, or have been taken, by a researcher." (Given, 2008).
One of the Beyond the Bell program goals is to increase literacy and numeracy skills. An important element would be ensuring the skills taught in the program align with the Ontario curriculum, since the program is run through public schools in Ontario. A Beyond the Bell "skills checklist" can be cross-referenced with the Ontario curriculum. Each student would have a personalized checklist for the grade they are in, or for the grade level they are working at if they are on an accommodated IEP (Individual Education Plan).
An example of a numeracy checklist for a student working at the grade 3 level is linked below. All of the curriculum expectations for the "Number" strand are written out with a checkbox beside each one. Program staff would check off a box as the student showed understanding of the concept or mastery of the skill. Click HERE to access the example checklist.
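To make the tracking concrete, here is a minimal sketch of how a personalized checklist could be represented digitally. The expectation codes and student name are hypothetical placeholders for illustration, not actual Ontario curriculum codes; the real checklist would pull its items from the curriculum document.

```python
# A minimal sketch of a personalized numeracy checklist.
# Expectation codes below are hypothetical, not real curriculum codes.

from dataclasses import dataclass, field

@dataclass
class Checklist:
    student: str
    grade_level: int  # grade the student is working at (per IEP if applicable)
    expectations: dict[str, bool] = field(default_factory=dict)  # code -> mastered?

    def mark_mastered(self, code: str) -> None:
        """Check off an expectation once the student demonstrates it."""
        if code not in self.expectations:
            raise KeyError(f"{code} is not on this student's checklist")
        self.expectations[code] = True

    def progress(self) -> float:
        """Fraction of expectations checked off so far."""
        return sum(self.expectations.values()) / len(self.expectations)

# Hypothetical "Number" strand expectation codes for illustration only.
number_strand = {"B1.1": False, "B1.2": False, "B2.1": False, "B2.4": False}
checklist = Checklist(student="Student A", grade_level=3, expectations=number_strand)
checklist.mark_mastered("B1.1")
print(f"{checklist.progress():.0%} of Number-strand expectations mastered")
```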
Surveys:
"You want information directly from a defined group of people to get a general idea of a situation, to generalize about a population, or to get a total count of a particular characteristic." (nwcphp.org)
Both classroom teachers and parents would complete a survey at the beginning of the program (pre-survey) and at the end of the program (post-survey). The survey would gather information not only about the child's literacy and numeracy skills but also about their social skills, self-esteem, outlook, etc. It would also gather information about the child's individual reading goals for the program (pre/post data). The pre-survey would include questions about what the parents/children want to gain from attending the program, and the post-survey would ask whether the program had met those goals.
The children (participants) would also complete a survey at the beginning and end of the program. This survey would include questions about how the child feels towards learning, literacy, numeracy, and experiencing new things (pre/post data on attitude). The survey would need to be completed with the assistance of a parent due to the young age of the participants.
Some survey questions, specifically those dealing with attitudes and engagement, would use a Likert scale approach (1-strongly disagree, 2-disagree, 3-neutral, 4-agree, 5-strongly agree). Others would call for fill-in-the-blank responses (e.g., "What are your hobbies/interests?"). This data could be coded or compiled in a spreadsheet to gauge the frequency of responses among participants.
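Here is a minimal sketch of what that coding could look like, using hypothetical responses to a hypothetical attitude item; real exports from Google Forms or SurveyMonkey would be cleaned to this shape first.

```python
# A minimal sketch of coding Likert responses and tallying frequencies.
# The survey item and responses below are hypothetical examples.

from collections import Counter

LIKERT = {"strongly disagree": 1, "disagree": 2, "neutral": 3,
          "agree": 4, "strongly agree": 5}

# Hypothetical pre-survey answers to "I enjoy reading"
responses = ["agree", "strongly agree", "neutral", "agree", "disagree"]

codes = [LIKERT[r.lower()] for r in responses]
frequencies = Counter(codes)

print("Frequencies:", dict(sorted(frequencies.items())))
print("Mean rating:", sum(codes) / len(codes))
```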
The pre-survey would allow program staff to gear activities towards the interests of the students (process evaluation - making changes in the moment), and the post-survey would allow for changes to be made before the next session (impact/outcome evaluation approach).
Participants, family members, and teachers would be encouraged to complete the survey in a digital format (e.g., SurveyMonkey or Google Forms), since these platforms automatically compile data for analysis. However, paper copies would also be available; it is unethical to assume all families have access to the internet to complete an online survey, especially given that the program targets low-income students/families.
Examples of student, parent, and teacher pre and post surveys can be found HERE.
Interviews:
"You want to understand impressions and experiences in more detail and be able to expand or clarify responses." (nwcphp.org).
A group of children and parents would also be interviewed about their experiences in the program. An interview would allow for more detailed answers from open-ended questions (Thompson, 2016) about attitudes towards the program and experiences in it. It would be unrealistic to interview all participants and their families because of time constraints and availability, so a random sample would need to be chosen. It would be important both to have ALL participant voices heard (survey) and to gather more detailed answers (interview). The interview would also provide an opportunity to find out whether the program included activities based on the interests of the participants.
All Beyond the Bell program staff would also be interviewed at the end of the program and asked to reflect on their experiences. They would be asked to share specific anecdotal stories about successes as well as challenges they faced as a means of evolving and improving the program before the next intake began. This aligns with the outcome/impact evaluation approach.
Direct Source Primary Data Collection from Program Staff:
"One way to collect data; whether from clients receiving services from the non-profit, or from those running the program/services." (Thompson).
Throughout the program, staff would track student data/progress through a variety of methods (anecdotal notes based on observation, academic work and other products created, conversations, checklists, etc.). The CDC refers to this as "document review." This data would be shared with the evaluators, who would need to analyze it and determine its value for the final evaluation document. Program staff may be biased toward capturing only positive learning experiences, which is why it makes the most sense for evaluators to examine the primary data themselves.
Standardized Reading Assessment:
Participants would complete a reading assessment at the beginning and end of the program (pre/post data). Program staff could connect with each child's teacher to determine which reading assessment program is being used in the classroom, as well as the level at which to start assessing the student.
The data collected from these sources would be analyzed and shared with the program staff and stakeholders. The information shared with the program staff would allow them to make changes to support participant growth if need be, which fits the process evaluation approach. All of the information would be shared with stakeholders, but the pre/post data specifically would help stakeholders see the value in the program and continue to fund it in the future. The pre/post data would also identify areas where the program needs to change. This fits the impact or outcome evaluation approach.
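As a minimal sketch of the pre/post comparison, the snippet below computes per-student and average gains using hypothetical reading levels; the actual levels would come from whichever assessment program each classroom uses.

```python
# A minimal sketch of pre/post comparison for the reading assessment.
# Student names and reading levels are hypothetical placeholders.

pre = {"Student A": 8, "Student B": 12, "Student C": 10}    # levels at intake
post = {"Student A": 11, "Student B": 14, "Student C": 15}  # levels at exit

gains = {name: post[name] - pre[name] for name in pre}
average_gain = sum(gains.values()) / len(gains)

for name, gain in gains.items():
    print(f"{name}: +{gain} levels")
print(f"Average gain: {average_gain:.1f} levels")
```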
Report Cards:
If parents are open to allowing access to their child's report cards, these could also be used to track academic progress over the course of the school year. The progress report comes out in October/November, which would establish an academic baseline. The term one report in January/February would act as a check-in to gauge any improvement, and the final report card issued in June would show evidence of academic growth. It is important to note that report cards should only be used in conjunction with other data collected and should not be the only marker of academic achievement.
Step 6 - Approach to Enhance Evaluation Use
It is important to remember that the Beyond the Bell program runs solely on financial donations from the involved school boards, community foundations, individual donors, and the Ontario Trillium Foundation. This fact makes data collection all the more important: if the program's results aren't positive, donors will find financial contributions that much harder to justify.
While the stakeholders mentioned above are interested in the data collected, many would not necessarily have the time or resources to read through all of the information gathered by evaluators. Furthermore, it would be difficult and costly to distribute the evaluators' data, analysis, and report to all funding partners. To address this, I think it would make the most sense for the evaluators to create a condensed "Coles Notes" version of the report, like an infographic, that includes key facts, data, and findings from the pre/post surveys. The full report could be linked from the infographic via a QR code or bit.ly link for the stakeholders who are interested.
Alkin and Taut (2003) note that "...information does not become knowledge until it has been interpreted in some way" (p. 2). Shulha and Cousins's article reminds us that all evaluations are political in some way and that "all findings must compete for attention even in the least complex of program or policy settings" (p. 203). The challenge of evaluating a program like Beyond the Bell is that all stakeholders would want to see positive results: all of the program goals are based on children improving academically and socially, and it is hard to watch a program like this fail. This could add stress and pressure for the evaluator, since the stakeholders would want the program to be successful and have a positive impact on kids' lives. All that being said, it is still important that both positive and negative results are reported accurately. While positive stories would likely be most beneficial for publicity and promotional purposes, all data should be included in the condensed infographic as well as in the evaluator's full report.
Furthermore, it would be important for the evaluator to have one main contact within the YMCA who is in charge of running the Beyond the Bell program. The evaluator could work closely with this person, ask and answer questions, discuss specific details and hopes for the evaluation, and provide additional information as requested. However, it is critical that the evaluator remain neutral in their analysis and reporting, regardless of the relationship they create with the YMCA contact.
The YMCA contact would also be able to distribute key findings and recommendations to the Beyond the Bell program staff. This person would also be the main stakeholder in determining the changes to be made to the program.
It is my hope that this contact would also promote the evaluation to their staff in a positive way, so that program staff would be open and willing to work with the evaluator and would feel supported by the YMCA contact when evaluation results are released.
Step 7 - Commitment to Standards of Practice
All of the information below is based on "The Program Evaluation Standards" as reproduced on the oecd.org website; the full citation for the Joint Committee on Standards for Educational Evaluation source appears in the Sources list below.
Utility Standards
"The utility standards are intended to ensure that an evaluation will serve the information needs of intended users." (oecd.org)
- Stakeholders, program staff, participants, and families are all involved in the evaluation
- Several different data collection tools are used
- Data and the final report are shared with stakeholders and the public (since it is a publicly funded program)
- Data and the final report are organized and written in accessible language
Feasibility Standards
"The feasibility standards are intended to ensure that an evaluation will be realistic, prudent, diplomatic, and frugal." (oecd.org)
- All surveys and interviews would be voluntary
- Surveys would be accessible both online (via email, uploaded to Seesaw journal, or downloadable from the Beyond the Bell site) and in paper form for those without internet access
- Interviews at the end would be short, casual, and non-invasive
- Evaluation results would impact future funding and future planning of program activities
Propriety Standards
"The propriety standards are intended to ensure that an evaluation will be conducted legally, ethically, and with due regard for the welfare of those involved in the evaluation, as well as those affected by its results." (oecd.org)
- Surveys and interviews would be voluntary with consent to share findings attached
- Parental consent for information collected from children
- Parental consent for data to be shared publicly
- Parental consent with child's image to be used in promotional material
- Parental consent via a use-of-technology agreement
- Information shared publicly would not include names without consent (participant, staff, or parents)
- All data would be collected in a fair, safe, and accessible way
Accuracy Standards
"The accuracy standards are intended to ensure that an evaluation will reveal and convey technically adequate information about the features that determine worth or merit of the program being evaluated." (oecd.org)
- Detailed data collection from a variety of sources (participants, family members, program staff, etc.)
- Both qualitative and quantitative data collected and analyzed
- Data collected from direct sources within the program
- Final report to include data-driven results to support decisions to continue or change funding
Sources:
Alkin, M. C., & Taut, S. (2003). Unbundling evaluation use. Studies in Educational Evaluation, 29, 1-12.
Centers for Disease Control and Prevention. (2012). "Program Performance and Evaluation Office. Step 4: Gather Credible Evidence." Available: https://www.cdc.gov/eval/guide/step4/index.htm
Chen, H.-T. (2005). A Practical Evaluation Taxonomy. In Practical Program Evaluation (pp. 44–70). SAGE Publications, Inc. https://doi.org/10.4135/9781412985444.n3
Given, L. M. (2008). The SAGE encyclopedia of qualitative research methods (Vols. 1-0). Thousand Oaks, CA: SAGE Publications, Inc. doi: 10.4135/9781412963909
Northwest Center for Public Health Practice. (n.d.) "Data Collection for Program Evaluation." Available: https://www.nwcphp.org/docs/data_collection/data_collection_toolkit.pdf
Ontario SpeakUp. (n.d.). How to Analyze Research Data. In SpeakUp: You Are the Student Voice (pp. 48–55). https://onq.queensu.ca/content/enforced/557764-PME802102S21/Qualitative Data Analysis StAR_Toolkit.pdf
Shulha, L. M., & Cousins, J. B. (1997). Evaluation Use: Theory, Research, and Practice since 1986. Evaluation Practice, 18(3), 195–208. https://doi.org/10.1177/109821409701800302
Taylor-Powell, Ellen & Steele, Sara. (1996). "Collecting Evaluation Data: An Overview of Sources and Methods." Available: http://www.intergroupresources.com/rc/RESOURCE%20CENTER/Evaluation%20Tools%20for%20Racial%20Equity/Section%204/G3658_4.PDF
The Joint Committee on Standards for Educational Evaluation (J. R. Sanders, Chair, Ed.). The program evaluation standards (2nd ed., pp. 23-24, 63, 81-82, 125-126). Sage Publications, Thousand Oaks, USA (see www.wmich.edu/evalctr/jc/). Available: https://www.oecd.org/dev/pgd/38406354.pdf
Thompson, Sierra. (2016). "YouTube: Data Collection Methods." Available: https://www.youtube.com/watch?v=NJ-gW6adQTc&t=7s
YMCA SouthWestern Ontario. (2020). "Beyond the Bell." Available: https://www.ymcaswo.ca/beyond-bell