AEA Public Library

Assessing Organizational Readiness for Developmental Evaluation within the context of the CTSA Program 

11-22-2017 10:17

AEA 2017 Think Tank Session - Fri, Nov 10, 2017 (01:45 PM - 03:15 PM)

ABSTRACT:

This think tank will investigate the question, “What are the critical factors or conditions that create ‘readiness’ for developmental evaluation (DE) in the context of the Clinical and Translational Science Awards (CTSA) program?” Readiness can be conceptualized at both an individual and an organizational level. Both are highly relevant, especially in the context of CTSA-funded academic medical centers.

Evaluators within CTSA-funded institutes will be specifically recruited for this session. After being introduced to the concept of DE and the challenges that arise in using a DE paradigm in the context of CTSA, participants will break into facilitated small-group discussions. One portion of the discussion will focus on what makes individual program leaders ready or not ready to engage in DE. The second portion will focus on why some CTSA institutes are more receptive to DE than others. Top-level takeaways from the small groups will be shared with the full group, and comparisons will be made to test for the consistency of the factors across the different small groups. The session will end with a group discussion about next steps, potentially including the development and testing of readiness assessment tools.


RELEVANCE STATEMENT:

This presentation is a follow-up to a panel discussion at the 2016 AEA conference that introduced the idea of using developmental evaluation (DE) in the context of the NIH-funded Clinical and Translational Science Awards (CTSA) program (Easterling, Dave, Harvey, Hogle & Blank, 2016). CTSA provides funding and other supports for a variety of programs that seek to accelerate the translation of scientific knowledge into therapies and practices that improve health (e.g., training programs, pilot-funding programs, research-support services, community engagement mechanisms). Last year’s panel brought together the evaluators from five CTSA-funded academic medical centers who have used the principles and tools of DE to organize their evaluation programs and promote learning within their respective CTSA-funded institutes. The panelists noted that DE represents a new and unfamiliar paradigm for evaluation within academic medical centers, especially among scientists who are accustomed to stable interventions and double-blind studies. Moreover, the leaders of academic medical centers (most of which are large and hierarchical) are not necessarily amenable to the transparency and fluidity that DE requires. Despite these contextual obstacles, DE is beginning to gain a foothold in at least some CTSA-funded institutions. The panelists identified a number of instances in which the leaders of at least some of the programs within their CTSA institute willingly engaged in the critical analysis, give-and-take conversations, and revisiting of assumptions that DE requires. In at least some CTSA-funded institutions, the institute leaders have been highly supportive of DE and have granted their evaluators the authority and validation that is needed to provoke a critical assessment of how well a program is working and whether it needs to shift course.
Given the appropriateness and potential benefits of using DE to evaluate CTSA-funded programs, what can be done to increase receptivity within academic medical centers? What factors create ripeness or readiness for DE? Patton (2011) identifies the following conditions under which DE is inappropriate: a) key stakeholders require high levels of certainty, b) there is a lack of openness to experimentation and reflection, c) key people are unwilling to “fail” or hear “bad news,” and d) there are poor relationships between evaluators and the people who design, manage and operate the programs. This list of inappropriate conditions implies that readiness needs to be built both among individual program leaders (e.g., attitudes, beliefs, behaviors) and within the larger institutions in which they reside (e.g., culture, incentives, policies, practices). Although Patton and others have identified logical pre-conditions for DE, there has been little research to date on how to actually assess readiness in different organizational settings. And this territory is virtually uncharted in the context of academic medical centers. Academic institutions are founded on the principles of discovery, curiosity, hypothesis testing and challenging assumptions, but how robust are these conditions in practice, especially when investigators are presented with data that call into question the usefulness of a program that they are leading (especially if it is covering a portion of their salary)? What predisposing factors exist within academic medical centers that can be highlighted and cultivated to foster inquiry in the way that DE requires? And what are the most important countervailing factors that need to be neutralized in order to create the sense of safety that allows program leaders to open themselves up to potentially negative findings?
This session will surface at least preliminary answers to these questions in order to set in motion a longer-term process of delineating the key dimensions of readiness that can be assessed and, ideally, influenced.



Attachment(s)

Introductory slides: Readiness for Developmental Evaluation in the CTSA context (pptx, 588 KB, 1 version). Uploaded 11-22-2017.