
AEA Coffee Break Demonstrations (CBD)

We're pleased to offer a series of short, coffee-break-length demonstrations, 20 minutes each, that introduce tools of use to evaluators.

Via this site, you can view and pre-register for upcoming webinars (below) or sign in and go to the Webinars eLibrary to download archived recordings of previous Coffee Break Demonstrations.

If you are not a member and would like to participate in the Coffee Break webinars, we offer multiple membership options that allow you to register for any Coffee Break webinar you wish for one year from the month of payment. The regular e-membership is $85 and gives you free access to each Coffee Break webinar; it is discounted to $30 for full-time students. These e-memberships include electronic-only access to four journals. To join and get one-year access to all of AEA's Coffee Breaks, please click here or contact Zachary Grays in the AEA office at zgrays@eval.org.

Upcoming Coffee Break Demonstrations

Thursday, May 11, 2017 2-2:20 PM

“I Didn’t Know What I Didn’t Know”: Retrospective Pretest/Posttest Design in Teen Programs - Jill Young 

Evaluators need more design options to meet the challenges they face when trying to evaluate the effectiveness of a program. Researchers have offered the retrospective pretest/posttest design as a remedy to curb response-shift bias and better estimate program effects, but few studies have used this approach with youth. Response-shift bias occurs when survey respondents overestimate or underestimate themselves at pretest because they do not yet have an adequate understanding of the construct on which they are rating themselves.

After School Matters, a Chicago nonprofit that provides afterschool programs to teens, tested the retrospective pretest/posttest design using a mixed methods approach to determine whether response-shift bias existed for teens in the program, and if so, why. The study also examined the cognitive processes teens used to complete retrospective pretest questions. This presentation provides an overview of the study, including findings and practical recommendations for internal evaluators.

Attendees will learn how to:

  • Describe the cognitive processes youth use to complete surveys
  • Identify self-report biases and how they affect program impact estimates
  • Apply the retrospective pretest/posttest design in youth programs

Jill Young has over 10 years of experience in nonprofit research and evaluation. She is currently the Senior Director of Research and Evaluation at After School Matters in Chicago, Illinois, where she leads research and evaluation efforts for 25,000 after-school and summer program opportunities for teens each year. Before joining After School Matters, Ms. Young worked as a statistical analyst at the University of Chicago and as a research manager at Northwestern University. She graduated from Drake University with honors, earning her BA in journalism and mass communication. She earned her MA and PhD in research methodology from Loyola University Chicago.

 

Join this presentation by registering here.

Thursday, May 18, 2017 2-2:20 PM 

Facing Ourselves in the Field: Action Steps on How to Combat Personal Biases in Evaluation

Despite our best efforts to minimize our personal biases when conducting external evaluations, evaluators will always view a program and its evaluation through particular lenses. While fully succumbing to our personal perspectives may prove problematic when developing, conducting, and reporting on an evaluation, acknowledging the existence of our biases is valuable. In this presentation, I will share strategies I’ve used to address and combat personal biases when collecting data. I will also address some myths about what you may lose by being just an objective evaluator.

Learning Objectives:

In this presentation, I will outline these obstacles and present strategies for overcoming them, including how to combat biases and address our positionality while in the field. I will also discuss creating a historical and contextual profile of what, who, and where you plan to conduct your evaluation. Finally, I will address the importance of checks and balances, peer review, and reflection journals.

Dr. Davis is a Research Assistant Professor at the University of North Carolina at Chapel Hill. She currently teaches a capstone course for undergraduate students in the Department of Public Policy, in which students conduct research projects for clients. In addition to teaching, Dr. Davis conducts statewide program evaluations for the Education Policy Initiative at Carolina (EPIC), where she serves as the lead qualitative researcher and principal investigator for the Gaining Early Awareness and Readiness for Undergraduate Programs in North Carolina (GEAR UP-NC) evaluation. In addition to her work at EPIC, Dr. Davis completed a Global Education program in Bavaria, Germany, where she taught a seminar on the globalized effects of dropouts, consulted for the Minister of Education on topics regarding integrated classrooms, and was invited to speak at the University of Porto on the effects of summer reading loss for elementary students. Dr. Davis’ areas of interest include education policy, program evaluation, qualitative research methods, and racialized gaps in schooling.

Join this presentation by registering here.



Tuesday, June 6, 2017 2-2:20 PM 

Emerging Trends in Evaluation - Beverly Peters

American University’s Measurement and Evaluation program has identified several key trends in evaluation pertinent to its students and evaluators alike. In this second of two Coffee Break webinars, Beverly Peters discusses these trends, which will be elaborated upon in a series of free webinars organized by American University in 2017. Topics include:

  • Evaluating Performance in Disaster Management: Measuring performance in disaster management involves evaluating several aspects, including prevention, preparedness, deterrence, and response. A case study from the 2015 Amtrak disaster shows the metrics the city of Philadelphia used to measure its performance related to these aspects. (Samantha C. Phillips, MA, MPH)
  • Complexity-Aware Monitoring: How an evaluation is designed depends on a number of factors, including the complexity of the project itself. Using the Cynefin framework is one way to flesh out potential complexities and evaluation designs and to develop an appropriate evaluation approach for a project. (Kirsten Bording Collins, Founder and Director of AdaptivePurpose)
  • Evaluating Programs in Complex Environments: The Case of Afghanistan – Evaluating programs in complex environments presents a unique set of challenges. The case of Afghanistan highlights these challenges as they relate to human resources, security, data, methodology, and reporting. (Mitch Teberg, M&E Specialist)

In AU’s series of free webinars, experts will discuss these emerging trends, relating them to case studies in their fields. Registration links for the webinar series are available at: http://programs.online.american.edu/msme/webinars

Beverly Peters is the Director of American University’s online Measurement and Evaluation programs. She has more than twenty years of experience teaching, conducting research, and managing community development and governance programming in Africa. She writes an online blog series on qualitative methods in monitoring and evaluation for American University.

 

Join this presentation by registering here.

  


Thursday, June 8, 2017 2-2:20 PM 

Strategies and Tools for Democracy, Rule of Law, and Governance Research - Cat Kelly 

In this Coffee Break, Cat Kelly will discuss the strengths and weaknesses of several types of qualitative methods for conducting applied democracy, rule of law, and governance research: comparative case studies, focus group discussions, semi-structured interviews, and surveys with qualitative components. The session begins with an overview of how both research and evaluation can use social science theories about development to ground an organization’s choice of particular methods and specific tools for analysis, continues by touching on the strengths and weaknesses of various methods and tools, and ends with tips for deploying rapid versions of these methods and tools in practical, policy-oriented (as opposed to academic) settings. The presentation draws on several examples of tools and methods used at the American Bar Association – Rule of Law Initiative to research, monitor, and evaluate programs.

Learning Objectives:

  • Articulate why having a clear theory about any aspect of democracy, rule of law, and governance that one is researching is essential for choosing the right tools and methods for analysis;
  • Evaluate the strengths and weaknesses of various qualitative methods and their associated tools for conducting program- and policy-relevant democracy, rule of law, and governance research;
  • Identify ways that practitioners can adapt these tools and methods – which academics often use in longer form for multi-year research initiatives – to the much more rapid use required in democracy, rule of law, and governance programming.

Dr. Catherine Lena Kelly is a Mellon/American Council of Learned Societies Public Fellow working in the Research, Evaluation, and Learning Division of the American Bar Association – Rule of Law Initiative. She brings years of research and evaluation experience in sub-Saharan Africa to her practice, as well as doctoral-level training in qualitative and quantitative methods for research on democracy, rule of law, and governance. Fluent in French and proficient in Wolof, she holds a Ph.D. in government from Harvard University and has consulted or worked for organizations including Freedom House, the University of Cape Town’s African Legislatures Project, the International Budget Partnership, and the U.S. Department of State. She has conducted nearly two years of field research for academic projects in Senegal, Ghana, and Burkina Faso and has been a Fulbright Scholar at the Free University of Brussels, where she published research about foreign intervention in the Democratic Republic of Congo. Her work has been published by ABA ROLI, by academic outlets such as Comparative Politics and the Journal of Democracy, and by blogs at The Washington Post, the Council on Foreign Relations, and the Social Science Research Council.

Join this presentation by registering here.

Tuesday, August 15, 2017 2-2:20 PM

Intro to Non-Technical Project Management - Mary Fairchild

While we may not call it a project, we are all doing project management in one form or another. Come learn about the various steps in the project management cycle, some common terms, and a few tips to keep work moving and on task.

Learning Objectives:  

  • Identify what we consider "projects"
  • Understand common project management terminology
  • Apply strategies to stay on task and keep the work moving

Mary Fairchild is a certified professional project manager who has spent her career applying these techniques within the profession of human resources. She has worked in the tech industry for large corporations such as Microsoft and holds a master's degree in Organizational Developmental Psychology. Mary uses program evaluation in the corporate setting to measure the return on human resources programs such as mentoring, coaching, training, and more. She currently consults in this capacity: www.fairchildhrconsulting.com

 

Join this presentation by registering here.