The Evaluation Policy TIG is dedicated to creating, promoting, implementing, and studying policies that describe organizational standards and principles of evaluation practice.
Upcoming Webinar: July 7, 2021, at 2:00 p.m. EDT
To RSVP visit: https://register.gotowebinar.com/register/3697262196065850123
The White House Office of Management and Budget recently issued guidance to federal agencies about implementing key provisions of the Foundations for Evidence-Based Policymaking Act. The newly issued guidance provides the most detailed direction for federal evaluation policy and practice to date, highlighting the need for capacity and funding, the critical role of senior leaders, and the relevance of evaluation to programmatic, operational, and research activities.
The Data Foundation and the American Evaluation Association invite the evaluation and evidence communities to join a dialogue on July 7 at 2:00 p.m. EDT to learn about the new guidance, hear reactions from evaluation experts, and identify opportunities to support implementation of the Evidence Act’s evaluation and evidence planning processes.
Featuring:
- Lisa Aponte-Soto, Ph.D., Board Member, American Evaluation Association
- Ruth Neild, Ph.D., Executive Director, Mathematica
- Jen Hamilton, Ph.D., Vice President, Education and Child Development, NORC at the University of Chicago
- Nick Hart, Ph.D., President, Data Foundation
Upcoming Webinar: What are evaluation policies? What is the intention for developing one?
May 12, 2021
You can view the webinar recording and review the PowerPoint slides here.
In 2019, the Foundations for Evidence-Based Policymaking Act (P.L. 115-435) directed federal agencies to write and implement guidance for program evaluation, also known as evaluation policies. The intent of these evaluation policies is to foster a more robust and sustainable evaluation function within federal agencies.
Recent scholarship on the topic indicates that when developing such policies, agencies rely on peer learning for ideas, insights, and advice on what to include. In this session we hope to stimulate additional peer-to-peer learning. We will begin by sharing highlights from two recent studies of evaluation policy in the philanthropic and federal government sectors, noting which topics are commonly, and less commonly, covered in existing policies. Then we will present a case example from the Department of Homeland Security. The webinar will close with an open discussion with attendees to learn about the practices associated with developing and implementing evaluation policies and to encourage additional research on the topic.
During this webinar, you will learn more about:
- The differences between evaluation policies, evaluation plans, and learning agendas
- Motivations and intent for drafting an evaluation policy
- Emergent research on evaluation policies across two sectors: philanthropic and federal government
- Dr. Rebecca Kruse joined the Department of Homeland Security (DHS) in 2020 as the Assistant Director for Evaluation in the Program Analysis and Evaluation Division of the Office of the Chief Financial Officer. Previously, she served as Evaluator for the National Science Foundation, Evaluation Director for the Army Educational Outreach Programs, and Principal Investigator or Evaluator for numerous federally funded education and workforce development grants and contracts. In partnership with the DHS Evaluation Officer, she currently leads the Department’s development of evaluation and evidence infrastructure to implement Title I of the Evidence Act. Rebecca is active in the federal evaluation community as a member of the CFO Act Agency Evaluation Officer Council, a mentor and periodic presenter for OMB’s Evidence and Evaluation Community events, and a working group member and co-author of OMB’s issued program evaluation standards and practices. Passionate about the use and influence of evaluation, Rebecca brings principles and processes of human-centered design, change management, and team process to her evaluation work. She will soon celebrate her first year as a member of AEA’s Evaluation Policy TIG.
- Dr. Leslie A. Fierro is a Senior Fellow with Claremont Graduate University and owner of Fierro Consulting Inc. Her work includes developing sustainable evaluation capacity in organizations and systems, collaborating with organizations to plan and implement evaluations that provide actionable insights, and fostering learning through traditional and non-traditional approaches on a wide range of evaluation topics. She has conducted evaluations in a wide variety of settings, including the public, private, and non-profit sectors. Her research on evaluation spans topics such as evaluation policy in the federal government, evaluation capacity building in the public sector, evaluative thinking, and evaluation training. Leslie is the Co-Editor-in-Chief of New Directions for Evaluation and the co-chair of AEA’s Evaluation Policy TIG.
- Dr. Alana Kinarsky is a postdoctoral scholar at UCLA’s School of Education and Information Studies, specializing in social research methodology. Her research focuses on evaluation policies, how evaluation can strengthen nonprofit organizations and foundations, and support for undergraduates with foster care experience. She is currently the co-chair of AEA’s Evaluation Policy Topical Interest Group and the event lead for the Oregon Program Evaluators Network. She also works as an independent evaluator based in Portland, OR.
- The discussion will be moderated by Dr. Esther C. Nolton, who specializes in quantitative measurement, survey, and qualitative methodologies. She has also facilitated strategic planning, organizational learning, and evaluation policy processes in the federal government as an AEA Graduate Education Diversity Internship Scholar (2018-2019 “Tenacious 10” Cohort) placed in the Evaluation and Assessment Capacity Section at the National Science Foundation. Nolton is currently the Program Co-Chair for the AEA Evaluation Policy TIG, an AEA365 Blog contributor, and an active volunteer with the Washington Evaluators. She remains committed to studying social determinants of health and education; evaluation practice and policy; research and evaluation methodology; democratizing evidence utilization; and organizational behavior and processes. She recently completed her Ph.D. in Research & Evaluation Methods, with a secondary specialization in Health & Education Policy, at George Mason University, and currently serves as a Program Officer in the Evaluation and Analysis department at PCORI.
This webinar is organized by the Evaluation Policy Topical Interest Group (TIG) of the American Evaluation Association. It is being co-sponsored by Washington Evaluators and Oregon Program Evaluators Network (OPEN).