AEA Public Library

AEA 2016 RTD Session 1539: The Influence of Domain-Specific Metric Development on Evaluation and Design: An Example from National Institutes of Health Technology Development Programs 

11-16-2016 12:43

Even within the sub-field of research, technology, and development evaluation, there are specialized domains for which it may be appropriate to consider designing tailored evaluation metrics. This session will focus on the development of evaluation metrics for National Institutes of Health-funded projects whose primary purpose is technology development. The presentations will provide perspectives from all stakeholders in the evaluation design, including: the program manager, who may wish to use the metrics to assess a specific technology development program and provide evidence for the program's "success"; the evaluation professional who must determine how to develop domain-specific metrics; and the evaluation professional who must implement an evaluation based on the proposed metrics. The final presentation will engage the audience in conversation around when and how such domain-specific metrics should be used to evaluate technology development programs, particularly for programs that were not explicitly considered when the metrics were developed.

#2016Conference #Eval16


Attachment(s)
pptx file
How does technology development metric development influe...   402 KB   1 version
Uploaded - 11-16-2016
While on the surface the idea of developing technology-development-specific metrics seems worth pursuing, deeper digging raises a number of questions that merit further discussion. For example:
• Appropriateness:
  o When is it appropriate to develop a bank of metrics for a specific domain of projects?
  o Should such metrics be developed for technology development programs?
• Generalizability:
  o Can the metrics be applied to technology development programs that were not considered specifically during the metric development process?
  o Can the metrics be applied to programs not funded by the National Institutes of Health (NIH)?
• Evaluation synthesis:
  o Can investments in differing technology programs across NIH be assessed together to estimate an overall "impact" using these metrics?
  o Can investments across funding agencies be assessed together?
This presentation will engage the audience around these questions as we think carefully about metric development and how it influences evaluation design.
pptx file
Technology Program Office Perspective on Identifying Appr...   2.56 MB   1 version
Uploaded - 11-16-2016
The National Institutes of Health (NIH) supports a broad array of innovative technology development research insofar as it contributes to advancing biomedical research. In 2014, program officers from multiple NIH institutes gathered to identify performance measures useful for characterizing outcomes across technology development programs. Appropriate performance measures were expected to serve program officers across the NIH in two ways. In the near term, they would inform the planning of new program concepts and the ongoing management of existing programs. They could also guide a broad evaluation of NIH investments in technology development generally, one that would both assess the impacts of investments to date and allow future program officers to weigh the merits of different approaches across NIH using equivalent metrics.
pptx file
Process and Outcome Evaluation of the NCI Innovative Mole...   923 KB   1 version
Uploaded - 11-16-2016
Ripple Effect performed an extensive evaluation of the National Cancer Institute (NCI) Innovative Molecular Analysis Technologies (IMAT) program using a mixed-methods approach to better understand the outcomes of supported technologies. To inform the evaluation design, instrument development, and methodology, Ripple Effect drew on the best-practices literature on evaluating RTD programs and obtained input from an NIH advisory committee and an external subject matter expert panel. Ripple Effect gathered a variety of secondary data from sources such as PubMed, ClinicalTrials.gov, and USPTO, and linked these data in a SharePoint database for continued use by NCI. Ripple Effect conducted more than 80 interviews with IMAT grantees, collected more than 500 survey responses from IMAT and comparison-group grantees, and conducted more than two dozen interviews with technology end users. This blend of quantitative and qualitative sources provided both standardized information across grantees and nuanced contextual data, yielding a deeper understanding of the IMAT program.
pdf file
Development of Measures to Assess NIH Technology Developm...   200 KB   1 version
Uploaded - 11-16-2016
The National Institutes of Health (NIH) supports innovative technology development as one aspect of fulfilling its mission. In July 2014, NIH tasked the IDA Science and Technology Policy Institute (STPI) with developing performance measures for extramural technology development projects. The task had three components. The first was development of a catalog of NIH Funding Opportunity Announcements (FOAs) focused solely on technology development for achieving a specific goal. The second was development of case studies based on discussions with program officers knowledgeable about those FOAs. The third, based on the case studies, was identification of candidate outcome measures for assessing technology development initiatives and development of the data collection approaches required to implement these measures in a consistent and ongoing manner. This paper presents the results of the study, focusing on the logic of technology development and candidate outcome measures.
pptx file
Session 1539 Introduction   466 KB   1 version
Uploaded - 11-16-2016