AEA Conference 2022 TIG Sessions

TIG Annual Meeting

  • Tuesday, November 15, 5:00 PM - 6:00 PM (Eastern Standard Time)
  • Presentation: TIG Year in Review
Agenda
  • Learn about volunteer opportunities
  • Meet like-minded people
  • Hear the TIG Year in Review

Creating a Big Data Analytics Platform Using National Datasets to Evaluate the Impact of Community Investments

  • Wednesday 4:15 PM - 5:15 PM (Central Standard Time)
  • Session Chair: Peter York
  • Session Type: Panel

Programmatic, organizational, and policy/advocacy investments directly or indirectly attempt to improve communities. Evaluating the impact of these investments has been challenging because the field lacked the technological capacity to gather and analyze the required data. A confluence of vital advances has changed this: (1) open access to reliable, valid national community-level data (e.g., the US Census Bureau’s American Community Survey (ACS), local nonprofit and philanthropic financial revenue and program expenditure data from the IRS, and mapping datasets like Google Maps); (2) powerful machine learning algorithms that can be trained to conduct complex geospatial, longitudinal, quasi-experimental observational studies of the impact of investments on community well-being; and (3) the ability to use community identity metrics to find and mitigate bias. This session shares how an automated big data analytics platform has been developed and used to evaluate the impact of community investments for achieving equitable well-being.
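
The abstract gives no implementation details, but as a minimal illustration of point (1), the sketch below pulls county-level ACS estimates from the Census Bureau's public API in Python. The survey year, variable codes, and column names are illustrative assumptions, not the panel's actual configuration.

    # Sketch: fetch community-level ACS data, one input a platform like
    # this might ingest. Year and variable codes are illustrative.
    import requests
    import pandas as pd

    ACS_URL = "https://api.census.gov/data/2021/acs/acs5"  # ACS 5-year estimates

    params = {
        "get": "NAME,B01003_001E,B19013_001E",  # total population, median household income
        "for": "county:*",                      # all US counties
    }
    rows = requests.get(ACS_URL, params=params, timeout=30).json()

    # First row of the response is the header row
    df = pd.DataFrame(rows[1:], columns=rows[0])
    df = df.rename(columns={"B01003_001E": "population", "B19013_001E": "median_income"})
    df[["population", "median_income"]] = df[["population", "median_income"]].apply(pd.to_numeric)
    print(df.head())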

The Data Revolution in Low- and Middle-Income Countries: Opportunities and Challenges for Evaluations

  • Thursday 10:15 AM - 11:15 AM (Central Standard Time)
  • Session Chair: David Yamron; Facilitator: Paul Jasper
  • Session Type: Expert Lecture

The World Development Report 2021 argues that ‘innovations resulting from the creative new uses of data could prove to be one of the most life-changing events of this era.’ The same holds for evaluations in Low- and Middle-Income Countries. In this session, we argue that evaluations are being affected across the data-value chain: data production, management, analysis, and dissemination are all changing. We illustrate this with three case studies of evaluations implemented by Oxford Policy Management that made use of new types of data (social media data, satellite imagery), new analytical methods (natural language processing, machine learning), and interactive ways of disseminating results. These case studies show how the skillset needed to implement evaluations in this new data ecosystem is changing, and increasingly so. We end by discussing the challenges this poses to the profession across the world, including the risk of increasing inequalities and north-south power imbalances.
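
The abstract does not specify the case studies' methods; as a toy illustration of the natural language processing piece, the sketch below classifies open-ended citizen feedback with scikit-learn. The feedback strings and labels are invented for illustration only.

    # Toy sketch of one NLP step an evaluation might use: classify
    # open-ended feedback as positive/negative. Data are invented.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    feedback = [
        "The clinic staff were helpful and the wait was short",
        "No medicine available and staff were rude",
        "Water point works well since the repairs",
        "Pump broken again, nobody has come to fix it",
    ]
    labels = [1, 0, 1, 0]  # 1 = positive experience, 0 = negative

    model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
    model.fit(feedback, labels)
    print(model.predict(["staff were rude and unhelpful"]))  # expected: [0]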

Transforming the Tedious into Timely Systems: Using Google Workspace to Automate Reporting

  • Thursday 10:15 AM - 11:15 AM (Central Standard Time)
  • Session Chair & Facilitator: Brittany Hilte
  • Session Type: Demonstration

We have so much technology at our fingertips, yet we may not know how to meaningfully integrate it into our evaluation processes. This demonstration session will tap into the free tools in Google Workspace (formerly G Suite) and show you a way to automate your reports so that you can focus your mental energy on the more important aspects of synthesis and engaging in dialogue with invested groups around how to use the data. This process is especially useful when you are using the same survey multiple times (e.g., a post-PD survey, multi-site evaluations). Additionally, while evaluations often focus on end users like funders and CEOs, this process centers equity by getting actionable data back to on-the-ground practitioners (e.g., PD facilitators, site-level staff) so that they can use that information to inform and improve their work.
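
The session's exact workflow is not specified here (report automation inside Workspace is often scripted with Google Apps Script). As a flavor of the same idea in Python, the sketch below uses the third-party gspread client to pull survey responses from a Google Sheet and produce a site-level summary; the sheet name, column headers, and credentials path are all assumptions.

    # Illustrative sketch only, not the presenter's workflow: read survey
    # responses from a Google Sheet and summarize them for a report.
    import gspread
    import pandas as pd

    gc = gspread.service_account(filename="credentials.json")  # assumed key file
    ws = gc.open("Post-PD Survey Responses").sheet1            # hypothetical response sheet

    df = pd.DataFrame(ws.get_all_records())
    # "Site" and the rating column are hypothetical survey fields
    summary = df.groupby("Site")["Overall rating (1-5)"].agg(["count", "mean"]).round(2)
    print(summary)  # paste-ready table for a site-level report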

Leveraging the power of civic tech for evaluation purposes?

  • Thursday 2:15 PM - 3:15 PM (Central Standard Time)
  • Session Chair & Facilitator: Leon Hemkemeyer
  • Session Type: Expert Lecture

Civic technology, civic tech for short, can be understood as technology with the potential to democratise societies. Civic tech tools can help make governance more participatory, transparent, and accountable. An impressive variety of civic tech tools has been developed and deployed around the world, ranging from tools that collect citizen feedback on public service delivery, to tools that monitor public decision-making processes (such as elections), to tools that identify, open, and visualise data of public interest. This lecture shows how the power of civic tech can be leveraged for evaluation purposes. It demonstrates that the immense wealth of evidence generated by civic tech tools around the world is an untapped resource for many evaluators, regardless of the sector in which an evaluation is carried out.

African perspectives on responsible data governance in M&E

  • Thursday 3:30 PM - 4:30 PM (Central Standard Time)
  • Session Chair: Jerusha Govender; Discussant: Linda Raftree
  • Session Type: Panel

The COVID-19 pandemic accelerated the adoption of digital platforms and tools. This has offered huge benefits for monitoring and evaluation, including the emergence of new and diverse data sources and the possibility of conducting remote M&E. At the same time, the abundance of data being collected (with and without the knowledge of individuals), the public's increasing awareness of the dangers of expanding tracking and automated decision-making, and the trend towards adoption of national data privacy regulations have all put a spotlight on the importance of responsible data governance. What does all this mean for M&E practitioners? In 2021, a guide on responsible data governance in M&E was developed to help answer questions related to data governance in the African context and to provide guidance to M&E practitioners and stakeholders. In this session we will discuss its findings and the role of evaluators as responsible data stewards and users.

A Demonstration to Collaboratively Explore the Design of Monitoring and Evaluation Systems: Introducing the No-code Database to the Evaluator's Toolbox

  • Thursday 4:45 PM - 5:45 PM (Central Standard Time)
  • Session Chair & Facilitator: John Baek
  • Session Type: Demonstration

This demonstration will review design principles of monitoring and evaluation (M&E) systems and then build a database-driven application on the fly, integrating features requested by audience members. The presentation will show how the tool works so that attendees can judge whether a database solution can meet their evaluation needs. This session is meant for all audience levels, though some experience designing and implementing an M&E system, or a need to do so, is helpful.

The Challenges of Applying Technological and Digital Data to Evaluation: Lessons from the BEWERS Project Evaluation Study in Southern Kaduna, Nigeria

  • Thursday 4:45 PM - 5:45 PM (Central Standard Time)
  • Session Chair & Facilitator: Blessing Christopher
  • Session Type: Roundtable

The evaluation of Christian Aid's Building Early Warning and Early Response Systems for conflict management (BEWERS) project faced a host of challenges in deploying digital solutions to evaluate the project's impact in target communities. BEWERS was a peace initiative implemented in Southern Kaduna in Kaduna State, Nigeria. It targeted poor and hard-to-reach communities in Kaura LGA, where there have been frequent violent clashes between nomadic herdsmen and host communities. The monitoring component of BEWERS was driven by community members trained and supported to collect early warning data using digital tools and to engage stakeholders so that situations did not escalate into violent clashes. This presentation discusses the challenges encountered in implementing this approach from a digital perspective, with the aim of sharing lessons and triggering discussion on how projects in poor and isolated communities can benefit from the value that digital solutions add to evaluations.

Trends in African MERL Tech: Insights from a Landscape Scan

  • Thursday 4:45 PM - 5:45 PM (Central Standard Time)
  • Session Chair: Linda Raftree; Discussant: Zenda Ofir
  • Session Type: Panel

In 2022, we conducted a landscape study, funded by the Mastercard Foundation Impact Labs, on how existing and emerging technologies are used for Monitoring, Evaluation, Research and Learning (MERL) in African contexts. We identified partners, initiatives, and solutions that support MERL with a focus on equity and inclusion, youth and community empowerment, indigenous knowledge, real-time decisions, and future-focused scenarios. We will share our findings, including broad continental trends and country-specific case studies, and what these suggest for the wider field of evaluation. This will be a learning and sharing session exploring how technological innovations are influencing and enriching MERL practice and how these approaches, if designed and implemented responsibly, can support equity, inclusion, participation, and evaluative processes rooted in context. We'll invite participants to share their own experiences and to consider ways that decolonization, new players, and emerging technologies are influencing evaluation on the African continent and globally.

Democratizing evaluation using participatory video

  • Friday 8:00 AM - 9:00 AM (Central Standard Time)
  • Session Chair: Morganne King Wale; Facilitator: Dylan Diggs
  • Session Type: Roundtable

Participatory video evaluation (PVE) promotes democratization of the evaluation process: it puts decision-making power in the hands of project participants, promotes peer-to-peer exchange, produces meaningful learning products, and creates space for participants to use their own voices to share their experiences. This approach is complementary to community accountability mechanisms, as both place participant voices at the center of the process. War Child Canada has worked with participants in our education and livelihoods projects in the Democratic Republic of Congo, South Sudan, and Uganda to apply PVE to identify results and learning opportunities from our programming. These experiences, and findings from an external evaluation, will form the basis for a discussion of the benefits and limitations of PVE, its applications in complex environments, and opportunities to further embed this methodology as a common evaluation practice globally.

Enhancing Humanitarian Cash & Voucher Assistance Programming through interoperable technologies - An evaluation of integrating mobile data collection tools, biometrics, and electronic voucher payment platforms

  • Friday 10:30 AM - 12:00 PM (Central Standard Time)
  • Session Chair & Facilitator: Alex Tran
  • Session Type: Roundtable

Humanitarian cash and voucher assistance (CVA) programs must operate at unprecedented scale to meet the growing needs of displaced and conflict-affected persons. To keep up, humanitarian actors such as Mercy Corps have integrated various technologies into CVA programs to enhance their efficiency and effectiveness and to provide greater insight into how CVA program activities can be adjusted to better meet those needs. Join us for an interactive roundtable discussion where we will present and discuss the methods used to evaluate the implementation of an interoperable technology stack (mobile data collection, electronic voucher distribution systems, biometric fingerprinting) and the results of those technologies in enhancing the efficiency and effectiveness of key phases, and the overall reach, of Mercy Corps’ humanitarian CVA programs in northeast Nigeria. We hope to advance the discussion of how to evaluate technologies in humanitarian programs.

Incorporating Geospatial Data Tools Into Evaluations

  • Friday 10:30 AM - 12:00 PM (Central Standard Time)
  • Session Chair & Facilitator: Katie Butler
  • Presenters: Kara Haberstock Tanoue, Tracy Bain, Marcel Foster, Amanda Aragon
  • Session Type: Demonstration

This cross-sector session will provide (1) a primer on how geospatial data can supplement evaluations to uncover new and interesting conclusions and (2) a toolkit of online geographic resources available to evaluators with varying levels of geographic analysis knowledge. Attendees will receive the toolkit, which includes descriptions of accessible tools and their uses, benefits and drawbacks, links to the tools, and recommendations for training and getting started with each tool. Examples of these tools include:

  • Basic options accessible to those with minimal training, like Google Maps, where any user can create a map locating points of interest. 
  • Moderate options like the Data Basin project, a free science-based mapping and analysis platform that can be used to compile ready-made map layers to tell a story. 
  • Advanced tools like QGIS and ArcGIS, which require training but offer powerful analytical capabilities for computing spatial statistics (a brief open-source sketch follows this list).
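
As a minimal taste of what the advanced tier enables, the sketch below uses the open-source geopandas library in Python to count program sites per county with a spatial join. The file paths and the NAME column are placeholders, not materials from the session toolkit.

    # Sketch of a basic spatial-join analysis with geopandas.
    # File names and the "NAME" column are placeholder assumptions.
    import geopandas as gpd

    counties = gpd.read_file("counties.shp")        # polygon layer (placeholder path)
    sites = gpd.read_file("program_sites.geojson")  # point layer (placeholder path)
    sites = sites.to_crs(counties.crs)              # align coordinate reference systems

    # Count program sites per county via a spatial join
    joined = gpd.sjoin(sites, counties, predicate="within")
    per_county = joined.groupby("NAME").size().sort_values(ascending=False)
    print(per_county.head())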

Machine Learning Evidence Generation for Equity: How Program Experts, Administrative Data, and Machine Learning Algorithms Can Collaborate to Produce Timely, Rigorous Evaluations that Reduce Bias

  • Friday 3:15 PM - 4:15 PM (Central Standard Time)
  • Session Chair: Peter York
  • Session Type: Panel

Program administrative data are proliferating, allowing direct service organizations to use these data to evaluate their programs. Data science has advanced rapidly, including the use of machine learning (ML) algorithms for prediction, prescription, and evaluation. However, data capturing the actions, transactions, and decisions of humans are biased, resulting in algorithms that perpetuate and even amplify those biases. So how do we use the technological advances offered by machine learning algorithms for evaluating programs without perpetuating biases? This session will share how direct service organizations are conducting rigorous machine learning evaluations by intentionally partnering evaluators, practice/program experts, and data scientists, with program experts in the lead, to train algorithms to quasi-experimentally evaluate what works, for whom, and in what contexts, while minimizing the selection and identity biases inherent to the data. The panel will include two real-world case examples.
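
The panel's actual pipelines are not described in the abstract; as a hedged sketch of one widely used technique in this family, the toy example below simulates biased selection into a program from administrative covariates and then estimates the effect with inverse-propensity weighting via scikit-learn. All variables and effect sizes are invented.

    # Toy illustration of ML-assisted quasi-experimental evaluation:
    # inverse-propensity weighting. Everything here is simulated.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 2000
    age = rng.normal(35, 10, n)
    prior_visits = rng.poisson(2, n)
    X = np.column_stack([age, prior_visits])

    # Selection into the program depends on the covariates (the bias problem)
    logit = 0.03 * (age - 35) + 0.3 * (prior_visits - 2)
    treated = rng.random(n) < 1 / (1 + np.exp(-logit))
    outcome = 0.5 * treated + 0.02 * age + 0.1 * prior_visits + rng.normal(0, 1, n)

    # Model the probability of treatment, then reweight each group
    ps = LogisticRegression().fit(X, treated).predict_proba(X)[:, 1]
    w = np.where(treated, 1 / ps, 1 / (1 - ps))
    ate = (np.average(outcome[treated], weights=w[treated])
           - np.average(outcome[~treated], weights=w[~treated]))
    print(f"IPW estimate of program effect: {ate:.2f} (true effect 0.5)")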

Big data and evaluation: addressing potential sources of bias affecting the understanding of equity and social justice

  • Saturday 8:00 AM - 9:00 AM (Central Standard Time)
  • Session Chair & Facilitator: Michael Bamberger
  • Session Type: Panel (Presidential Strand)

The data collection capacity and analytical power of big data make it inevitable that these tools and techniques will become a standard part of the evaluator's toolbox. An issue of concern, however, is how the use of big data creates new sources of bias that evaluators must understand and address. Importantly, these biases can result in a lack of attention to, or a misunderstanding of, issues affecting equity and racial justice. Data scientists work with administrative and other secondary data that were originally collected for a different purpose, and the data may be used without fully understanding how and why they were collected and how appropriate they are for addressing a particular policy issue. The session will provide a framework for identifying sources of bias at each stage of the evaluation cycle, particularly as they affect issues of equity, and will propose guidelines on how to address these biases.

Considerations and Implications for Modernizing Data Collection and Management Systems

  • Saturday 9:15 AM - 10:15 AM (Central Standard Time)
  • Session Chair: Samantha Gross; Discussant: Anjum Mandani
  • Session Type: Multipaper

Digitization of data and modern technologies can transform evaluation practice by making data more accessible and data management systems more efficient. As evaluators and data scientists in global public health move towards modernized platforms and processes for data collection, management, and visualization, we must consider the needs of our partners and anticipate the implications of having more modern systems for all partners at all levels. Ensuring data management systems increase efficiency and support our partners in reporting, accessing, and using their data is a critical consideration for ethical evaluation practice.