As my cohort members and I presented our evaluation findings for our various programs, I was struck by the idea that we do not do enough program evaluation in education. We are in the habit of purchasing and/or implementing new programs that have been developed and used successfully in other settings and assuming that their previous success will automatically make them effective for our division. However, multiple variables are at play in these implementations: population demographics and sizes differ, levels of staff expertise, staff buy-in, implementation fidelity, and division support differ, and the specific needs of each division differ. All of these things vary from division to division, making the idea of pulling a “canned” program like Accelerated Reader (AR) out of the box and having it successfully achieve its objective seem unrealistic.
Yet that is what we do in public education. Even though we constantly tout the importance of quality assessment and data-driven decision-making, we do not typically take the time to perform pre- or post-assessments on our current or prospective programs. Several of the groups in my cohort, including my own, determined that issues with their respective programs had less to do with the programs themselves than with their implementation and the resources and focus given to them. The question then becomes whether these programs are being put into practice unsuccessfully by choice or because there are obstacles at the division or school level that limit their effectiveness.
By making it a division-wide policy to perform some version of a program evaluation on programs under consideration, or a process evaluation after the second year of use for current programs, divisions could better ascertain whether their programs are appropriate for addressing the given need, what resources are needed to make them successful, whether the division can realistically provide those resources, and what variables are currently undermining the success of existing programs. This would help divisions choose the most appropriate programs for their needs from the start, realistically explore what they need to do and provide (e.g., staff development, purchasing resources) to make them successful, monitor whether these steps are being taken with fidelity, and check to see if the program is having the desired effect.
In addition, an immediate benefit of these mini-evaluations would be a better understanding of the programs on the part of those who work directly with them. Given the time and financial constraints of public education, evaluations such as these would need to be performed in large part by the teachers and staff responsible for implementation, possibly as summer “focus groups.” This would bring the evaluation to a grass-roots level, providing a clearer understanding of current or potential issues from those directly responsible for using the programs, as well as giving those primary users a deeper understanding of the programs, their intended focus, and the key elements of their success.
Ultimately, these programs represent an effort on the part of school divisions to reach out to students, often struggling students. An ongoing policy of evaluating programs and making sure they are as effective and appropriate as possible would therefore be a valuable use of division resources in advocating for student needs.