Welcome to the Design & Analysis of Experiments TIG

The purpose of the Design & Analysis of Experiments TIG is to provide an active forum for AEA members who specialize in or have special interest in experimental evaluation research to connect and advance the methods and practice of this segment of the evaluation field.  Evaluations that use an experimental design (with randomization of treatment and control units) are distinctive and involve their own methods, practices and analyses, as well as many that connect to other TIGs’ foci.  We hope to engage members from other relevant TIGs in order to have cross-fertilizing discussions. 


Specific areas of scholarly discussion for this TIG include:

·      integrating randomization into program operations

·      how to analyze experimental data (ITT, ATE, TOT, LATE, CACE, moderator effects)

·      design strategies for getting inside the black box

·      analytic strategies for getting inside the black box

·      opportunities to explore mediation with experimental data

·      power analyses/considerations (e.g., re: differential effects, in clustered-designs)

·      opportunities/challenges from multi-site experiments (when pooling is/is not justified; ICCs and power implications; extending multi-level modeling to experimental data)

·      within-study comparisons (under what conditions do varied quasi-experiments reproduce experiments’ results)

·      handling missing data and attrition (when differential T-C attrition is a fierce enemy)

·      the external validity of experimental evaluations

·      low-cost experiments

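To make a few of these estimands concrete, here is a minimal, illustrative sketch (not an official TIG resource) of the intent-to-treat (ITT) effect and a Wald-style treatment-on-the-treated (TOT/LATE) estimate in a randomized experiment with imperfect compliance. All data, parameter values, and variable names below are simulated assumptions for illustration only.

```python
# Illustrative sketch: ITT and Wald (TOT/LATE) estimates from a simple
# randomized experiment with imperfect compliance. Data are simulated.
import random

random.seed(42)

n = 10_000
data = []
for _ in range(n):
    z = random.random() < 0.5          # random assignment to treatment group
    # Assumed imperfect compliance: 80% of treatment-group members take up
    # the program; 5% of controls cross over.
    d = (random.random() < 0.8) if z else (random.random() < 0.05)
    # Assumed outcome model: receiving treatment raises Y by 2.0 units.
    y = 10 + 2.0 * d + random.gauss(0, 1)
    data.append((z, d, y))

def mean(xs):
    return sum(xs) / len(xs)

y_t = [y for z, d, y in data if z]      # outcomes, assigned to treatment
y_c = [y for z, d, y in data if not z]  # outcomes, assigned to control
d_t = [d for z, d, y in data if z]      # take-up, treatment group
d_c = [d for z, d, y in data if not z]  # take-up (crossover), control group

itt = mean(y_t) - mean(y_c)             # ITT: effect of assignment
compliance_gap = mean(d_t) - mean(d_c)  # first stage: take-up difference
late = itt / compliance_gap             # Wald estimator (TOT/LATE)

print(f"ITT  = {itt:.2f}")              # diluted by non-compliance
print(f"LATE = {late:.2f}")             # should recover roughly the true 2.0
```

Because only about 75 percent more of the treatment group than the control group actually receives services, the ITT estimate is smaller than the effect of treatment receipt; dividing by the compliance gap rescales it, which is the intuition behind the TOT/LATE estimators discussed in this TIG.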

Consider becoming active in the TIG in order to shape its future and contribute to our field.


Events, Activities & Opportunities

This past Winter (2016-17)...  The TIG hosted two weeks’ worth of postings to the AEA 365 blog (http://aea365.org/blog/). In December 2016, we focused on hot topics in experimental evaluation research (see here: http://aea365.org/blog/experiments-tig-week-allan-porowski-on-how-to-make-your-chances-of-conducting-a-successful-rct-seem-a-little-lessrandom/); and in January 2017, we focused on common objections to experiments and why we should not be so concerned about them (see here: http://aea365.org/blog/experiments-tig-week-the-ethics-of-using-experimental-evaluations-in-the-field-by-laura-peck-and-steve-bell/).


AEA17... We have a great set of sessions: click here (http://www.evaluationconference.org/p/cm/ld/fid=505) and select the Design & Analysis of Experiments TIG to see the full list.


Attend the Business Meeting, Thursday, November 9, 2017 at 5:15PM in Marriott Balcony A.


Coming up next Spring...  Start thinking now about what you might propose for AEA 2018.  The more proposals we receive, the better the program we can assemble.  Thank you! 

About the Leadership

Laura R. Peck, TIG Co-Chair, is a Principal Scientist at Abt Associates and has 20 years of experience evaluating social welfare and employment policies and programs, in both research and academic settings.  A policy analyst by training, Dr. Peck specializes in innovative ways to estimate program impacts in experimental and quasi-experimental evaluations, and she applies this expertise to many social safety net programs. Prior to joining Abt in 2011, Dr. Peck was a tenured Associate Professor at the Arizona State University School of Public Affairs, where she taught public policy analysis, program evaluation, and research methods.  At Abt Associates, Dr. Peck is the PI, Co-PI, and Director of Analysis for several major national evaluations for the U.S. Departments of Health and Human Services, Labor, Agriculture, and Housing and Urban Development.  Co-author of a public policy textbook, Dr. Peck is well-published on program evaluation topics. Dr. Peck served as Associate Editor (2009-2013) for the American Journal of Evaluation.  She earned her Ph.D. from the Wagner Graduate School at New York University. / Laura_Peck@abtassoc.com




Keith Zvoch, TIG Co-Chair, is an Associate Professor in the Department of Educational Methodology, Policy, and Leadership at the University of Oregon (UO). Dr. Zvoch has more than 15 years of experience designing and conducting evaluations of educational and social service interventions. At UO, Dr. Zvoch teaches advanced research design and multilevel, multivariate statistics courses. His research interests include the measurement and evaluation of treatment fidelity, the modeling of time series data, and causal inference in applied field settings. Dr. Zvoch is currently an Associate Editor for the American Journal of Evaluation. He has published extensively in education, evaluation, and child development journals. / kzvoch@uoregon.edu