AEA Public Library

What Can We Learn From a Collection of Over 500 Evaluation Reports? - Poster AEA 2015 

11-14-2015 20:39

The Building Informal Science Education (BISE) project coded 520 evaluation reports posted to informalscience.org. The poster familiarizes people with the project and our freely available resources. The resources include: 1) a coding framework for informal education evaluation reports, 2) an NVivo database with 520 reports coded using our framework, 3) a spreadsheet that provides an additional way to search the coded reports, 4) a file with all the reports included in the database, 5) an EndNote library with citation information for all of the reports, and 6) an online community where people can find BISE synthesis papers and engage in conversations around the BISE database, findings, and implications for evaluation. These resources are relevant to a wide variety of evaluators, even those who don’t carry out evaluations of informal learning projects. Here you will find our poster, the BISE Coding Framework, and one of our synthesis papers. Access the rest of our freely available resources and papers at www.visitorstudies.org/bise

#YouthFocusedEvaluation #EnvironmentalProgramEvaluation #ArtsCultureandAudiences #evaluationreports #Informallearning #Reporting #2015Conference #ResearchonEvaluation #STEMEducationandTraining



pdf file
AEA 2015 Poster   1.56 MB   1 version
Uploaded - 11-14-2015
pdf file
BISE Coding Framework   1.91 MB   1 version
Uploaded - 11-14-2015
The BISE team created an extensive coding framework to code all 520 reports included in the project database. Coding categories and related codes were created to align with key features of evaluation reports and the coding needs of the BISE white paper authors.
pdf file
Reporting for Evaluator Audiences (Synthesis Paper)   1.51 MB   1 version
Uploaded - 11-14-2015
There are a number of places evaluators can share their reports with each other, such as the American Evaluation Association’s eLibrary, the website informalscience.org, and organizations’ own websites. Even though opportunities to share reports online are increasing, the evaluation field lacks guidance on what to include in evaluation reports meant for an evaluator audience. If the evaluation field wants to learn from evaluation reports posted to online repositories, how can evaluators help to ensure the reports they share are useful to this audience? This paper explores this question through the analysis of 520 evaluation reports uploaded to informalscience.org. The researchers created an extensive coding framework aligned with key features of evaluation reports and evaluators’ needs, and used it to identify how often those elements were included in, or missing from, the reports. This analysis resulted in a set of guiding questions for evaluators preparing reports to share with other evaluators.