About SETIG

The Systems in Evaluation Topical Interest Group (SETIG) is a community within the American Evaluation Association (AEA) created to provide a forum for ongoing conversation about the use of systems thinking and systems theory in evaluation.

SETIG developed from a series of sessions held during the 2002 AEA Annual Meeting in Washington DC that focused on systems theory and systems thinking. The TIG held its first planning meeting in Atlanta in 2004 (attendance: 4), and sponsored its first sessions in the 2005 AEA conference in Toronto.

SETIG focuses on the use of systems thinking and systems theory as a framework for evaluation planning, design, implementation, analysis, and reporting, across a wide range of content areas, and using a diverse array of evaluation approaches, research methods, and data collection and analysis tools.

Areas of interest to TIG members include:

  • Evaluation approaches that use systems theory to plan, design, and implement evaluation
  • Discussion about how to ground evaluation methods and approaches in systems thinking and theory
  • The contributions of diverse perspectives in understanding issues related to the use of systems thinking and theory in evaluation

TIG leadership places a priority on welcoming members at all levels of experience, comfort, and familiarity with systems thinking. We have a diverse membership of professionals from 35+ countries spanning a wide range of experience in systems thinking and in evaluation. Several members have published books focusing on systems thinking and evaluation, and a number of TIG members have served on the AEA Board, including 2014 AEA President Beverly Parsons.

A priority for the TIG has been to craft offerings and activities that are both interesting and accessible to all systems thinkers, whether novice or experienced. As a result, we’ve seen the focus of our conference sessions evolve along the following lines:

  • defining this thing called systems thinking
  • how we’re using systems thinking in our work
  • how we can support others in the use of systems thinking in evaluation practice
  • developing guiding principles for the use of systems thinking in program evaluation

The SETIG Leadership

The SETIG Leadership Team consists of co-chairs, program co-chairs, and a communications lead. Each member of the leadership team holds a two-year term, with a previously elected member overlapping with newly elected members to allow a smooth transition. Beyond what individuals bring to the role, each position carries the following responsibilities:

  • Co-Chair: lead the TIG by organizing the TIG business meeting at AEA conferences, hosting leadership meetings during the year, and representing the TIG in AEA meetings
  • Program Co-Chair: coordinate the peer review process for AEA conference sessions and work with the co-chairs to organize virtual networking and learning sessions between conferences
  • Communications Lead: keep the website updated, send announcements to members, and use social media to connect TIG members

Elections are usually held by online ballot, with the newly elected leadership being announced at the following SETIG Annual Meeting.

You may contact the leadership team through their individual emails, through systemsinevaluationtig@gmail.com, or through this form.

2025 SETIG Leadership

Tjip Walker, Chair (2024-2026)

Dr. Tjip Walker is an accomplished systems practitioner and currently the director of The Systems Practice Lab, a consultancy dedicated to applying systems thinking to complex social problems.

Previously, he served in several senior positions over a 25-year career with the US Agency for International Development (USAID), in Washington and overseas, including establishing the agency’s Office of Learning, Evaluation and Research and serving as its inaugural deputy director. More recently, he led the efforts to articulate and implement the Local Systems Framework, USAID’s commitment to sustained development that puts systems change at the center of its work.  In support of that effort, he worked across the agency to spread good systems practice by encouraging innovation, facilitating an internal community of practice with over 400 members, identifying promising tools and approaches, and integrating them into Agency programming. Identifying and disseminating ways to measure and evaluate systems change was an important—and challenging—part of that work. 

Email: tjipwalker@pm.me 

Sweta Nanduru, Co-Chair (2025-2027)

Sweta Nanduru serves as the Evaluation Manager at Grail Family Services (GFS), where she spearheads program impact assessment and data-driven initiatives aimed at empowering families and fostering community well-being. In her role, Sweta is dedicated to developing robust data collection and evaluation frameworks that inform strategic decision-making and continuous program improvement. She collaborates closely with families, schools, and community partners to ensure that GFS’s services are effective, equitable, and aligned with community needs.

With a Master’s degree in Healthcare Leadership & Management from UT Dallas, Sweta’s career spans consulting, healthcare management, and program evaluation. Her background equips her with a unique perspective on applying systems thinking to complex challenges in both healthcare and community-focused settings.

Email: snanduru@gfsfamilyservices.org

Tze-Chang Liu, PhD, Program Co-Chair (2024-2026)

Dr. Tze-Chang Liu is an Associate Professor at the National Chung Hsing University’s Center of Teacher Education in Taichung, Taiwan.

He has been a member of AEA for ten years and is currently working closely with the Taiwan Ministry of Education on digital learning and STEAM education, work that draws on a wide range of research methods and evaluation practices. He also has experience in international collaboration.

Email: tcliu0215@gmail.com

Tanushree Banerjee, Program Co-Chair (2024-2026)

Doctoral student, Research, Assessment and Evaluation Program, School of Education, Virginia Commonwealth University

I am a doctoral student in the Research, Assessment and Evaluation concentration of the Ph.D. in Education program in the School of Education at Virginia Commonwealth University. One of the emphases of the Ph.D. program is to integrate theory and practice. As part of this program, I had the opportunity to work on a large-scale program evaluation of an Urban Residency Teacher Preparation Program. 

Traditionally, a program theory of change emphasizes linear causal linkages to support intended program outcomes. My doctoral research focuses on the non-linearity and complexity of a program as part of a larger system. This research examines how systems-thinking approaches to program evaluation are used and how the study of interactions among multiple program components can guide program improvement and progress toward outcomes.

Email: banerjeet@vcu.edu 

Kristen Rohanna, Website Coordinator (2025-2027)

Dr. Kristen Rohanna is an Associate Adjunct Professor in the Educational Leadership Program (ELP) and Social Research Methodology (SRM) division at the University of California, Los Angeles (UCLA).

She has been a practicing evaluator for over 15 years. Her practice and scholarship focus on using evaluation, continuous improvement, and systems thinking methods to effect social change, particularly in education. She received her Ph.D. in Social Research Methodology with an emphasis in Program Evaluation from UCLA, her M.A. in Demographic & Social Analysis from UC Irvine, and a B.A. in History from the University of Pittsburgh.

Email: krohanna@ucla.edu

Jonny Morell, Cross-Pollinator (2025-2027)

Dr. Jonny Morell is an organizational psychologist with extensive experience in the theory and practice of program evaluation. He believes that evaluation requires a systems approach because interventions produce complex outcomes, but that evaluations should be as simple and straightforward as possible. Jonny has been recognized by the American Evaluation Association, which awarded him its Paul F. Lazarsfeld Evaluation Theory Award.

Jonny has lectured and consulted internationally on complexity in evaluation. His work addresses the integration of agent-based modeling with traditional evaluation, methodologies for evaluating unintended consequences, interactions between models and empirical data, and how to conduct dialogue between funders and evaluators concerning complex behavior.

Email: jamorell@jamorell.com

John Murray, Cross-Pollinator (2025-2027)

John Murray (he/him) is a systems thinker, collaborator, educator, and coach based in beautiful Saint Paul, Minnesota. John has served since 2020 as an Evaluation Specialist at the University of Minnesota Extension.

John is energized by opportunities to collaborate with others in weaving together insights from a variety of experiences and contexts to create meaningful connections and work to address seemingly intractable problems.

John holds a B.A. in Cross-Cultural Communication and Ethics from Prescott College in Arizona and an M.A. in Evaluation Studies from the University of Minnesota, and he is currently a Ph.D. candidate researching the use of systems thinking and complexity science by practicing evaluators. Mostly, he enjoys traveling with his family and has, subjectively, a few too many outdoor hobbies.

Email: murr0328@umn.edu