This poster discusses the evaluation of a multi-district intervention to enhance lower-elementary literacy instruction. Using a four-pronged strategy, the intervention aims to improve early-career teacher preparation and literacy instruction in 16 high-needs North Carolina districts; the initiative began in 2018-19 and continued into 2019-20. The poster describes the design and implementation of the intervention's evaluation processes, which were both formative and summative and employed a mixed-methods approach (survey, interview, and administrative data). In addition to discussing key findings, we highlight two aspects of the process that are central to much K-12 education evaluation. First, we address iterative adjustments to the evaluation design, the rationale for those adjustments, and the trade-offs considered in revising the design across program years. Second, we address strategies for rapid turnaround between multi-source data collection, analysis, and reporting; this is a common challenge in recurring programs whose evaluation results are needed to inform each successive cycle.
This poster has direct implications for both K-12 education and evaluation practice more broadly. First, it evaluates an initiative addressing teacher preparation, which is associated with numerous benefits, including positive student outcomes (Clotfelter et al., 2006). The initiative includes research-supported practices such as literacy-specific coaching, content-focused and collaborative professional development, and instructional resources (e.g., see Biancarosa et al., 2010; Duke et al., 2011; Garet et al., 2001). It builds on the theory of teacher development holding that the complex work of teaching requires continued development of knowledge and skills beyond preservice training (Ingersoll & Strong, 2011). Additional information on the implementation and success of teacher preparation programs, particularly initiatives with strong evidence bases, can deepen understanding of the mechanisms that best enable student learning.
Second, the poster can inform evaluators of multi-year or multi-cycle initiatives by discussing adjustments to evaluation design across years. An evaluation designed for a program's first year is, in a sense, a pilot evaluation: lessons learned during initial implementation can guide future shifts. However, adjustments to evaluation design across program years can also complicate longitudinal analyses and comparisons. We address this balance by discussing the decision-making processes underlying the changes we made, as well as those we considered but ultimately did not make, in our evaluation processes.
Finally, the poster addresses rapid turnaround from multi-source data collection (including original and secondary program-administrative data) to analysis and reporting. This is often a challenge when evaluating cyclical programs, particularly when evaluation results for one cycle are crucial to program implementation in subsequent cycles (e.g., when results from one academic year are needed to inform implementation the following year).
By addressing these issues, this poster directly reflects this year's conference theme of "How will you shine your light?" by examining challenging circumstances through a positive, learning-oriented approach. In particular, we highlight the opportunities these circumstances provide for creativity and critical thinking in evaluation practice. These approaches will benefit a wide array of evaluators within the K-12 educational arena, as well as evaluators working with cyclical programs and/or multiple data sources. Further, through a mixed-methods approach, our work shines a light on the voices of participants, whose experiences directly inform program development.