Each year, the Research on Evaluation TIG presents one Student Research on Evaluation Award to an outstanding student presentation at the American Evaluation Association Conference. Congratulations to our recent award winners!
Matthew Swope (2017 Winner)
“Learning Through Engagement: Using Stakeholder Epistemology to Enhance Credibility”
A critical task in many evaluations is determining what kind of evidence stakeholders will see as credible. The present paper offers evidence suggesting that evaluators can efficiently gather preliminary information about stakeholder epistemology to inform evaluation design and ultimately impact evaluation use.
The paper presents evidence from two sources: (a) a case study of an evaluation of a SAT preparedness program in which a mixed method approach was successfully employed to understand and utilize stakeholders’ epistemic views in the evaluation design, and (b) a Mechanical Turk study in which the epistemic views of 300 respondents were measured and shown to relate to method preference and willingness to act upon evidence in the context of program evaluation.
In sum, these studies suggest that stakeholder epistemology is a worthwhile dimension of exploration for evaluators, particularly at the outset of a participatory evaluation. Implications for evaluation design and the maximization of evaluation use are also discussed.
Melissa Goodnight (2016 Winner)
“Design Influence: Investigating the Global Momentum Behind India's ASER Model of Evaluation"
The Annual Status of Education Report (ASER) is produced via a pioneering large-scale evaluation of primary education in India. The process underlying ASER, which includes village asset mapping, school surveys, and household-based learning tests, relies on the participation of thousands of volunteers and hundreds of partner organizations every year. Based on a 10-month ethnographic study of ASER across three states, this presentation addresses how ASER reflects design influence: the influence of an evaluation’s entire concept, from its goals to its capacity-building components, participatory structure, data collection instruments, and dissemination strategies. Evaluation designs like ASER can ultimately influence individuals and institutions not involved in the initial evaluation because of the design’s innovativeness in addressing planning needs. These designs are elevated as models. Within the international policy-shaping community, ASER’s approach has become a solution for closing the global data gap on learning and primary education universalization.
Julia Lamping (2016 Winner)
“Ethical Dilemmas and Obedience to Authority: Examining Evaluators' Ethical Decision Making"
The proposed presentation examines the effect influential stakeholders have on the choice to make unethical decisions in evaluation. Previous studies have used self-report surveys to document ethical dilemmas during evaluation projects (Morris, 2007; Morris & Clark, 2013), but the effects of unethical pressure from authority figures have yet to be studied in detail. Stanley Milgram’s (1978) research on obedience to authority found that “every-day Joes” will obey unethical requests proposed by an authority figure 65% of the time. Evaluators are sometimes approached by stakeholders to engage in unethical behavior that potentially crosses the boundaries of the standards adhered to in the field. Utilizing a situational judgment test, ethical dilemmas were presented to current American Evaluation Association members in order to examine ethical decision making. This presentation will explore the results of the investigation, as well as what the findings mean, practically, for evaluation practitioners and for future research in the field.
Estelle Raymondo (2015 Winner)
“Watching the Watchers: A Multi-Method Study on Evaluation Systems and Organisational Change in International Development"
Many in the development cooperation community have subscribed to the use of evaluation as a socially acceptable way to learn from past practices and to enhance accountability for delivering results. However, in the growth of evaluation also lies a paradox: while the evidence on “what works” in development is steadily growing thanks to evaluation, the evidence supporting evaluation’s own effectiveness in promoting programmatic and organizational change remains limited. The two questions motivating this research are: (i) What effects does the quality of monitoring and evaluation have on World Bank projects’ performance? and (ii) Why do some evaluation systems influence learning and accountability while others do not? This piece of RoE is exemplary in two ways: it systematically investigates factors that contribute to evaluation influence, and it leverages a unique mix of quasi-experimental design (propensity score matching) and case-based methods (QCA) to contribute valuable insight to the research on evaluation influence.
Tiffany Smith (2014 Winner)
“Reflective Practice: Where Does the Field of Evaluation Stand?”
The current study provides insight into the state of the field of evaluation regarding practitioners’ understanding and application of reflective practice (RP), one of six Essential Competencies in program evaluation identified and discussed by Stevahn, King, Ghere, and Minnema (2005). Specifically, the purpose of this study was to determine how professional evaluators view RP, the extent and manner in which they use it, and whether evaluators perceive that RP efforts affect their evaluation practice. Through a snowball sample, 20 participants took part in an hour-long interview. Preliminary findings suggest that practicing evaluators believe that RP is a process of questioning, thinking, and learning that requires multiple perspectives and collaboration. Most participants believed that RP is both an intuitive and a purposeful effort, and some had specific tools that they used to guide the process (e.g., journaling, metaevaluation).