AEA Public Library

Testing Assumptions in Evaluation Capacity Building: Evidence to Inform Models & Enhance Practice 

11-19-2019 14:35

This panel presents findings from three empirical studies that sought to test and clarify central elements and relationships in our field’s guiding frameworks of Evaluation Capacity Building (ECB). Using Preskill and Boyle’s 2008 Multidisciplinary Model of ECB as an organizing framework, we explore three critical elements of ECB: motivation to engage in ECB, ECB leadership, and ECB learning and improvement outcomes. Research on ECB is a critical mechanism by which we embody the evidence-based decision-making we ultimately seek to foster through our ECB efforts, and it deepens our understanding of this complex organizational process. This panel brings empirical evidence to bear on long-accepted but largely untested assumptions in the ECB space, draws on a broad literature base to interpret and contextualize the findings, and translates those findings into both recommendations for future research and concrete strategies to enhance ECB practice.

Several evaluation capacity building (ECB) models have emerged to guide practitioners in the design and implementation of ECB (e.g., King & Volkov, 2005; Labin et al., 2012; Preskill & Boyle, 2008; Suarez-Balcazar et al., 2010). These models emphasize different aspects and relationships; however, they share several common elements, including factors that motivate organizations to engage in ECB, organizational characteristics that influence ECB success, and ECB’s intended outcomes.

Current models adopt a macro lens on ECB that provides a foundation for our understanding of this complex organizational process. However, this lens lacks the specificity about ECB’s core components needed to guide ECB design and implementation. For example, models describe leadership as an important factor that can bolster or inhibit ECB efforts, yet no model provides enough guidance for practitioners to understand what leaders need to do to foster organizational commitment to ECB, or how specific leadership behaviors influence the degree to which ECB goals are achieved. Models also typically frame motivation to engage in ECB in broad categories (i.e., “internal” and “external”) but do not identify the elements within those categories that might affect ECB adoption. Additionally, many of the relationships embedded in our ECB models remain untested across organizations and program contexts. For example, our models assume that ECB will lead to greater use of evaluation for continuous learning and program improvement, yet limited empirical research supports this notion.

As evaluators, we strive for consistent reflection on our practice, adapting our methods as we gain knowledge of more effective techniques. It is critical that we turn this lens inward to deepen our understanding of our own practice for continual improvement (Christie, 2006). Research on evaluation is a mechanism by which we improve evaluation and build toward a more impactful future, mirroring the evidence-informed decision-making we ultimately seek to foster through our ECB efforts. Empirical studies on ECB that test our most fundamental assumptions make our models more useful and impactful in practice.

This panel will present findings from three mixed-methods studies that tested core elements of ECB frameworks. Our first study explored organizations’ motivations to engage in ECB using a sample of nonprofit agencies and expert evaluators. Second, we present a study that used a survey of evaluators and interviews with ECB experts and foundation leaders to test an ECB Leadership Theory of Change, to better understand how ECB leadership influences ECB goals through organizational commitment to and participation in ECB. Lastly, we share results from a statewide study of nonprofit and educational agencies that explored the degree to which different evaluation and learning capacities influenced evaluation practice and the use of evaluation for program improvement. This panel is unique because it brings empirical evidence to bear on long-held but largely untested assumptions in ECB and translates these findings into recommendations for future research and concrete strategies to enhance ECB practice.


Attachment: Evidence to Inform Models and Enhance Practice (PDF, 857 KB), uploaded 11-19-2019.
This was a Presidential Strand multipaper panel at AEA 2019 (Minneapolis, MN) under the Organizational Learning & Evaluation Capacity Building Research on Evaluation TIG.