AEA Conference STEM TIG Sessions 2021

STEM TIG Business Meeting

We will hold elections for open positions on the TIG leadership team and have a presentation by Stephen Porter, titled "Understanding and addressing potential for harm in evaluation practice."

Understanding and addressing potential for harm in evaluation practice

  • Stephen Porter
  • Monday November 8, 4:30-5:30pm EST

Professional evaluators work with others to make choices about what gets counted and what counts, and in doing so they directly and indirectly affect people's lives for better or worse. Evaluation processes can include or exclude key issues such as gender disparities, they can shape the allocation of resources, and they can report on people in ways that recognize or minimize cultural assets. Evaluators thus have the power to address harm through their work, and also potentially to cause it. Evaluation practice is at a moment of root-and-branch change: underlying practices are being questioned, approaches are evolving, and new practices are arising. From the Equitable Evaluation Initiative to footprint evaluation, in Africa and at AEA and EES conferences, evaluators are defining approaches that better account for equity, environmental sustainability, cultural responsiveness, transformation, and interaction with complex systems. A key issue that needs further attention in these discussions is how evaluation can address harm both in itself and in the work it assesses. Becoming more explicit about developing a harm lens can reinforce this evolution by putting in place systematic reflection on where we may fall short of our professional responsibilities and by making practice more consistent with aspirations to be helpful change agents. This session discusses these issues and highlights ways in which evaluation professionals are adapting their practice to address harm.

Meeting the Moment through Formative Evaluation during COVID-19 Pandemic

  • Sara Allan
  • Monday November 8, 12:00-12:45pm EST

This case study discusses the evaluation of a Summer 2020 professional learning program for elementary science teachers. Changes in school operations during the COVID-19 pandemic, as well as a nationwide uprising for racial justice, called into question whether the program, as previously designed, remained relevant. Evaluators responded by shifting from a primarily objectives-based evaluation to a utilization-focused evaluation; they adopted a formative evaluation plan to identify participant needs and implemented structures to facilitate real-time reporting during program implementation. This change elevated the voices of the participating teachers and shed light on their need for support with technology use in virtual classrooms, students' socioemotional learning, and racially conscious science education. Program leaders iteratively redesigned sessions based on feedback from evaluators. This case illustrates one opportunity that evaluators took to support program leaders in meeting the moment, and it raises questions about how to balance adaptability and objectivity in times of crisis.

Measuring Success, Defining Success: Using Most Significant Change Method to Evaluate an Elementary Science & Engineering Pilot Program

  • Maia Elkana
  • Tuesday November 9, 3:30-4:15pm EST

The 2020 shift to remote learning created turmoil for students and teachers and erected logistical barriers to testing. These factors upended many K-12 STEM program evaluation plans and exposed simmering tensions between stakeholders intent on increasing achievement scores and educators uncomfortable with reliance on summative assessments. This presentation details the systematic use of the Most Significant Change (MSC) qualitative method to evaluate a pilot K-5 STEM classroom project that tailored the mySci curricular program to the needs of a large, urban school district, in partnership with a community-based nonprofit and with corporate foundation sponsorship. More commonly used in public health and international development evaluation, MSC uses structured story collection and review to examine hard-to-quantify impacts while facilitating the relationship building vital for success and sustainability. Rigorous qualitative evaluation techniques such as MSC can bridge gaps between funders and teachers while building stakeholder buy-in across the complex social systems of schools and districts.

Flexing as needed: Incorporating developmental evaluation in Higher Ed STEM education and training

  • Michele Walsh
  • Wednesday November 10, 2:30-3:15pm EST

This session draws from three very different projects to illustrate how developmental evaluation principles can be used to support innovation in STEM education and training in higher education contexts. Incorporating developmental evaluation in a context such as Higher Ed STEM, which traditionally values controlled designs and summative approaches, can be challenging. However, developmental evaluation can provide insights during complex and changing situations, including the COVID-19 pandemic and its resultant disruptions across systems. Although none of the evaluations was initially conceived of as developmental, the evaluators will address the features of each project that led to the adoption of a developmental evaluation framework and how each approach led to learning for both the project staff and the evaluators. Although these papers are situated in STEM fields, the discussion of developmental evaluation approaches is broadly relevant.