The OPTICC Center approaches optimizing evidence-based intervention implementation as a three-stage process. Within each stage, multiple methods are used to move the intervention through to the next stage of optimization. Below, the limitations of methods currently used by the field of implementation science are outlined and the new methods that OPTICC will employ are detailed. Each new method will be applied across OPTICC-funded studies, refined each year, and built into massive open online courses (MOOCs) and toolkits.
Identify & Prioritize Determinants
Providing researchers and implementers with robust, efficient methods for determinant identification and prioritization enables the precise targeting of high-priority problems.
Match Strategies
Use causal pathway diagrams to show relationships between strategies, mechanisms, moderators, and outcomes; clarify how strategies function; facilitate effective matching to determinants; and identify conditions affecting strategy success.
Optimize Strategies
Support rapid testing in analog (artificially generated experimental conditions) or real-world conditions by testing causal pathways to maximize the accumulation and use of knowledge across projects.
Multiphase Optimization Strategy
Collins’ multiphase optimization strategy (MOST) is used to assess intervention components and articulate optimization criteria (e.g., per participant costs) for constructing effective behavioral interventions.
Agile Science
An extension of MOST that emphasizes constructing explicit representations of hypothesized causal pathways that connect strategies to mechanisms, barriers, and outcomes for planning evaluations and organizing evidence.
User-Centered Design
A principled method of technology development that focuses on the needs and desires of end users to create compelling, intuitive, and effective interfaces.
Stage 1: Methods to Identify & Prioritize Determinants
This stage uses methods to identify determinants of implementation success that are active in the specific implementation setting of concern. Strategies not well-matched to high-priority determinants operating in the implementation setting are unlikely to be effective. However, the existing methods for identifying and prioritizing determinants have at least four limitations:
- They typically do not consider relevant determinants identified in the literature;
- They are subject to issues of recall, bias, and social desirability;
- They do not sufficiently engage the end user with the evidence-based intervention prior to assessment;
- Determinant prioritization typically relies on stakeholder ratings of perceived qualities, such as the feasibility of addressing a determinant, which may have little to do with the determinant's impact on, or necessity for, implementation success.
To address these limitations, we have developed four new, complementary Stage 1 methods to use in OPTICC Center studies:
Rapid Evidence Reviews
Rapid evidence reviews are used to summarize and synthesize research literature on known determinants for implementing evidence-based interventions in settings of interest. Unlike traditional systematic reviews, which can take more than a year, rapid reviews can be completed in three months or fewer.
Rapid evidence reviews are increasingly used to guide implementation in healthcare settings, but they are not often employed to identify implementation determinants.
Research Program Core experts collaborate with project leads and the practice partners to clarify the question and scope of each review. Data abstraction focuses on identified determinants and any information about timing (i.e., implementation phase), modifiability, frequency, duration, and prevalence.
This process results in a list of determinants organized by consumer, provider, team, organization, system, or policy level that is used to inform observational checklists and interview guides for rapid ethnographic assessment.
Rapid Ethnographic Assessment
Rapid ethnographic assessment is used to efficiently gather ethnographic data about determinants by seeking to understand the people, tasks, and environments involved from stakeholder perspectives. This is achieved primarily by engaging stakeholders as active participants and applying user-centered approaches to efficiently elicit information.
Ethnographic observation includes semi-structured observations, such as shadowing intended or actual evidence-based intervention users, which generate evidence that can offset self-report biases. Through combined written and audio-recorded field notes, researchers document activities, interactions, and events (including duration, time, and location); note the setting’s physical layout; and map flows of people, work, and communication.
To capture a range of experiences, ethnographic interviews are conducted both informally during observation and formally through scheduled interactions with key informants. Interviews are unstructured and descriptive, and pose task-related questions. Researchers document the occurrence or presence of barriers, noting the duration, time, location, and affected persons or processes.
Design Probes
Design probes are user-centered research toolkits that utilize items such as disposable cameras, albums, and illustrated cards. End users are prompted to take pictures, make diary entries, draw maps, or make collages in response to tasks such as “Describe a typical day” or “Describe using [the evidence-based intervention]”. This method captures new and different perspectives from those captured during observations and interviews.
Participants have one week to observe, reflect on, and report experiences to generate insights, reveal ideas, and illuminate their lived experiences as they relate to implementing the evidence-based intervention (e.g., feelings, attitudes). In follow-up interviews, participants reflect on their engagement with the task.
Through memo writing, research team members analyze the data generated from design probes and interviews to identify new determinants, to corroborate determinants discovered via rapid evidence assessment, and to describe the meaning and importance of determinants to end users.
This method helps overcome the limitation of assessing stakeholder perceptions in a vacuum, which can occur when only interview and observation data are considered.
Prioritization Based on Impact
Determinants are prioritized according to three impact-related criteria:
Criticality: The degree to which a determinant affects, or is likely to affect, an implementation outcome. Some determinants are prerequisites for outcomes (e.g., awareness of the evidence-based intervention). The influence of other determinants on outcomes depends on their potency (e.g., the strength of negative attitudes).
Chronicity: How frequently a determinant event occurs (e.g., shortages of critical supplies) or how long a determinant state persists (e.g., unsupportive leadership).
Ubiquity: How pervasive a determinant is.
For each identified determinant, granular data generated by the rapid evidence review, rapid ethnographic assessment, and design probes is organized in a table by these three criteria (criticality, chronicity, and ubiquity). The data is then independently rated by three researchers and three stakeholders using a 4-point Likert scale from 0 to 3 (e.g., not at all critical, somewhat critical, critical, necessary). Priority scores and inter-rater agreement are calculated. The outcome is a list of determinants ordered by priority scores.
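The rating and ranking procedure above can be sketched in a few lines of code. This is a minimal illustration with hypothetical determinants and ratings; the scoring rule (sum of mean ratings) and the simple pairwise agreement measure are assumptions for demonstration, not the Center's actual formulas.

```python
from statistics import mean
from itertools import combinations

# Hypothetical 0-3 Likert ratings from six raters (three researchers,
# three stakeholders) on the three criteria for each determinant.
ratings = {
    "unsupportive leadership": {
        "criticality": [3, 2, 3, 3, 2, 3],
        "chronicity":  [2, 2, 3, 2, 2, 2],
        "ubiquity":    [1, 2, 1, 2, 1, 1],
    },
    "supply shortages": {
        "criticality": [2, 1, 2, 2, 2, 1],
        "chronicity":  [1, 1, 2, 1, 1, 1],
        "ubiquity":    [3, 3, 2, 3, 3, 3],
    },
}

def priority_score(criteria):
    """Sum of mean ratings across the three criteria (range 0-9)."""
    return sum(mean(r) for r in criteria.values())

def pairwise_agreement(criteria):
    """Share of rater pairs giving identical ratings, pooled over criteria."""
    pairs = [(a, b)
             for r in criteria.values()
             for a, b in combinations(r, 2)]
    return sum(a == b for a, b in pairs) / len(pairs)

# Rank determinants from highest to lowest priority.
ranked = sorted(ratings, key=lambda d: priority_score(ratings[d]), reverse=True)
for d in ranked:
    print(f"{d}: score={priority_score(ratings[d]):.2f}, "
          f"agreement={pairwise_agreement(ratings[d]):.2f}")
```

In practice a formal inter-rater statistic (e.g., a kappa coefficient) would replace the simple agreement fraction shown here.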
Stage 2: Methods to Match Strategies
Drawing on Agile Science, the OPTICC Center is developing methods to create causal pathway diagrams that represent the best available evidence and hypotheses about mechanisms by which implementation strategies impact target determinants and downstream implementation outcomes. OPTICC’s Research Program Core supports study leads to develop causal pathway diagrams, which will serve as an organizing structure of our relational database for accumulating knowledge.
A causal pathway diagram includes several key factors:
- The implementation strategy intended to influence the target determinant
- A mechanism by which the strategy is hypothesized to affect the determinant
- The prioritized target determinant
- The observable proximal outcomes for testing mechanism activation and any precursors to implementation outcomes
- Preconditions for the mechanism to be activated and to affect outcome(s)
- Moderators (intrapersonal, interpersonal, organizational, etc.) that are hypothesized to strengthen or impede strategy impact
- Any implementation outcomes that should be altered by determinant changes
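The key factors listed above can be encoded as a simple data structure, which is one way a relational database of pathways might organize them. The field names and the audit-and-feedback example below are illustrative assumptions, not an official OPTICC schema.

```python
from dataclasses import dataclass, field

# Hypothetical, simplified encoding of a causal pathway diagram's key
# factors; field names mirror the components listed above.
@dataclass
class CausalPathwayDiagram:
    strategy: str                 # implementation strategy
    mechanism: str                # hypothesized mechanism of action
    determinant: str              # prioritized target determinant
    proximal_outcomes: list = field(default_factory=list)
    preconditions: list = field(default_factory=list)
    moderators: list = field(default_factory=list)
    implementation_outcomes: list = field(default_factory=list)

# Illustrative example: an audit-and-feedback strategy targeting
# clinicians' limited awareness of their own practice patterns.
cpd = CausalPathwayDiagram(
    strategy="audit and feedback",
    mechanism="increased awareness of performance gaps",
    determinant="low awareness of own practice patterns",
    proximal_outcomes=["clinician reviews feedback report"],
    preconditions=["reliable performance data are available"],
    moderators=["leadership support", "clinic workload"],
    implementation_outcomes=["fidelity to the evidence-based intervention"],
)
print(cpd.strategy, "->", cpd.mechanism, "->", cpd.determinant)
```

Storing every pathway in one structure like this makes the roles of factors explicit (mechanism vs. precondition vs. moderator) rather than labeling them all "determinants."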
Causal pathway diagrams benefit implementation science by:
Driving precision in use of terms for easier comparison of results across studies
Articulating hypotheses about the roles of factors that influence implementation strategy functions, enabling explicit testing of these hypotheses
Formulating proximal outcomes that can be assessed quickly with rapid analog methods
Informing the choice of study designs by clarifying temporal dynamics of represented processes and constraints (e.g., preconditions) that a study must account for
Making evidence more useful and usable
Implementation mechanisms are events or processes by which implementation strategies influence implementation outcomes. Researchers frequently under-specify or mis-specify key factors by labeling them all “determinants,” without declaring the factor’s role or roles in a strategy’s operation. It is rare for the published literature to even establish an implementation strategy mechanism, and further, tests of hypothesized mechanisms often overlook proximal outcomes and preconditions.
To efficiently test an implementation strategy, proximal outcomes must be identified: These are observable, measurable, short-term changes that rapidly indicate strategy impact on the mechanism, determinant, and concrete behaviors that lead to distal implementation outcomes. Proximal outcomes should capture changes that could be influenced by a single dose of the implementation strategy or be detected immediately after strategy exposure.
Causal pathway diagrams can have one or more proximal outcomes. Some proximal outcomes measure concrete behaviors reflective of desired distal outcomes, and can signal a strategy’s overall promise. Others are intended to capture mediators of distal outcomes, such as treatment quality. One way to identify proximal outcomes is to work backwards from distal implementation outcomes by asking what steps—behaviors or changes in perceptions or attitudes—must occur to achieve those outcomes.
A precondition is a factor that is necessary for an implementation mechanism to be activated. These include intrapersonal, interpersonal, organizational, or other factors that must be in place, but might not be currently, for the strategy to activate the mechanism or for the mechanism to affect the determinant. Necessary conditions (i.e., preconditions) need to be included in the causal diagram so researchers know to measure them and practice partners can ensure they are in place.
A moderator is a factor that increases or decreases the level of influence that an implementation strategy has on a prioritized outcome. Multiple factors can amplify or weaken implementation strategy effects. Like preconditions, moderators operate at intrapersonal, interpersonal, and organizational levels and can affect a strategy’s influence at multiple points on the causal path. This includes the ability to activate a mechanism, influence the target determinant, or achieve a desired implementation outcome.
All potential moderators should be enumerated; their likely strength, prevalence, and measurability carefully considered; and each included in the causal diagram to justify their examination in a study or to inform strategy (re)deployment in practice.
Stage 3: Methods to Optimize Strategies
The OPTICC Center is developing and refining methods, guidelines, and decision rules for efficient and economical optimization of implementation strategies, with the objective of helping researchers and stakeholders construct strategies that precisely impact their target determinants. Implementation science has traditionally taken the following approach to implementation strategy development: Researchers conduct an assessment to understand the barriers present in their setting, then develop a multicomponent or multilevel implementation strategy to address the barriers, pilot the strategy, and finally evaluate it in a randomized controlled trial (RCT). This approach has four major limitations:
(1) The reliance on RCTs for experimental control and the focus on distal implementation outcomes limits researchers’ ability to understand if components of a strategy are influencing the barriers they are meant to target.
(2) The evaluation approach makes optimizing strategies difficult. An RCT of a multilevel strategy provides only limited information about which components drove the effect, if all components are needed, and how the strategy should be changed to be more effective.
(3) The jump from a feasibility pilot to an RCT leaves little room for optimizing strategy delivery such as ensuring the most effective and efficient format (e.g., in-person vs. online), source (e.g., researcher versus stakeholder input), or dose is used.
(4) This approach is very resource intensive with limited opportunities for learning. That is, when trials generate null results, as they often do, determining why is nearly impossible.
Drawing on MOST and other underused experimental methods, we are developing guidelines for selecting experimental designs that can efficiently answer key questions at different stages of implementation research and obtain the right level of evidence for the primary research question. We prioritize signal testing of individual strategies to identify the most promising forms, followed by studies optimizing blended strategies, before testing them in a full-scale confirmatory RCT. Drawing on user-centered design, we are refining methods for ideation and low-fidelity prototyping to help researchers consider a broader range of alternatives for how an implementation strategy can be operationalized, which enables efficient testing of multiple versions and selection of the version most likely to balance effectiveness against burden or cost.
While randomized controlled trials provide robust evidence for strategy effectiveness, they do not provide a way to efficiently and rigorously test implementation strategy components. Faced with a similar problem in behavioral intervention science, MOST was developed to help behavioral scientists use a broader range of experimental designs to optimize interventions. The OPTICC Center is leveraging these designs to optimize strategies, including: factorial experiments, microrandomized trials, sequential multiple assignment randomized trials (SMARTs), and single case experimental designs. These designs are highly efficient, requiring far fewer participants to test strategy components than a traditional RCT and enabling a range of research questions to be answered in less time and with fewer resources.
Ideation & Low-Fidelity Prototyping
Ideation is the process of creating many versions of an idea. Ideation strategies include sketching, brainstorming, and timed idea generation in response to a prompt. Low-fidelity prototyping is rapid prototyping of design elements for early feedback from users, so designers can iteratively refine designs. Evidence from human-computer interaction shows that for some tasks, like strategy optimization, parallel prototyping—creating multiple designs in parallel vs. starting with a single design and iteratively refining it—is more efficient and results in higher quality designs. Methods for ideation and low-fidelity prototyping can help practice partners and researchers consider a broad range of alternatives for operationalizing a strategy.
Rapid Analog Methods
Also known as RAMs, rapid analog methods create artificially generated situations for efficient, economical testing of strategies and mechanisms in conditions comparable to the real world. Analog studies involve an artificial representation of stimuli or occur in comparable, manufactured settings. Because they isolate pathways of effects and limit burden on community partners, RAMs are ideal for testing a new strategy. These methods provide a signal of whether the implementation strategy is operating via the hypothesized mechanisms.
Factorial Experiments
This type of design is best for optimizing complex strategies because it efficiently screens multiple components for an effect on target outcomes. Each component is a “factor” that can take several “levels” (e.g., yes vs. no; delivery source). Participants are randomized to cells corresponding to different combinations of levels of each factor, allowing for analysis of main effects and interactions with fewer participants than a randomized controlled trial.
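A factorial design's cell structure is easy to sketch. The three components and their levels below are hypothetical; the point is that a 2x2x2 design yields only eight cells, and each factor's main effect pools over all the others, which is what makes the design sample-efficient.

```python
import itertools
import random

# Hypothetical strategy components, each a factor with two levels.
factors = {
    "reminders": ["off", "on"],
    "coaching":  ["off", "on"],
    "delivery":  ["in-person", "online"],
}

# The full factorial design: one cell per combination of factor levels.
cells = list(itertools.product(*factors.values()))  # 2 x 2 x 2 = 8 cells

# Randomize participants evenly across cells (simple balanced allocation).
rng = random.Random(42)
participants = [f"p{i:02d}" for i in range(16)]
rng.shuffle(participants)
allocation = {p: cells[i % len(cells)] for i, p in enumerate(participants)}

# The main effect of "reminders" compares all cells where it is "on"
# with all cells where it is "off", pooling over the other factors.
on_cells = [c for c in cells if c[0] == "on"]
off_cells = [c for c in cells if c[0] == "off"]
print(f"{len(cells)} cells; {len(on_cells)} 'on' vs {len(off_cells)} 'off'")
```

Because every participant contributes to the estimate of every factor's main effect, the design tests three components with the sample an RCT would need for one comparison.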
Microrandomized Trials (MRTs)
MRTs evaluate implementation strategy components that are delivered repeatedly (e.g., automated reminders). Each time a component can be delivered (e.g., a patient visit), called a “decision point,” provision or non-provision of the component is randomized, allowing multiple components to be randomized concurrently. Microrandomized trials are a highly efficient design that takes advantage of both within-subject and between-subject comparisons to estimate marginal main effects, changes in component effects over time, and moderating effects.
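The repeated randomization that defines an MRT can be sketched as follows. The participants, the number of decision points, and the 0.5 randomization probability are illustrative assumptions.

```python
import random

# Hypothetical MRT: at every decision point (e.g., each patient visit),
# delivery of a reminder component is randomized with probability 0.5.
rng = random.Random(0)
participants = ["clinicianA", "clinicianB", "clinicianC"]
decision_points = 10  # e.g., ten consecutive patient visits

# True = deliver the component at that decision point, False = withhold it.
schedule = {
    p: [rng.random() < 0.5 for _ in range(decision_points)]
    for p in participants
}

# Each participant contributes many randomized comparisons, which is why
# MRTs can exploit within-subject as well as between-subject contrasts.
total_randomizations = sum(len(deliveries) for deliveries in schedule.values())
print(f"{total_randomizations} randomizations from {len(participants)} participants")
```

Three participants already yield thirty randomized delivery decisions, versus the single randomization per participant of a parallel-group RCT.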
Sequential Multiple Assignment Randomized Trials (SMARTs)
SMARTs optimize adaptive strategies and help researchers determine decision rules for delivering a sequence of strategies that satisfies a set of optimization criteria, usually effectiveness and cost. Participants are initially randomized to two strategies that differ in intensity or cost; at predetermined times, nonresponders are re-randomized to another set of strategy options, and this can occur multiple times. SMARTs are highly efficient because analyses can use different sample subsets to answer different research questions (e.g., differences between strategies and the optimal way to support nonresponders).
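The two-stage assignment logic of a SMART can be sketched as below. The arm names, the toy response rule, and the site identifiers are all hypothetical; a real trial would observe response mid-trial rather than compute it.

```python
import random

# Hypothetical two-stage SMART: initial randomization between a low- and a
# high-intensity strategy; at a predetermined check, nonresponders are
# re-randomized between two follow-up options.
rng = random.Random(1)

def smart_assign(participant_ids, responded):
    """responded(pid, stage1_arm) -> bool, standing in for mid-trial data."""
    assignments = {}
    for pid in participant_ids:
        stage1 = rng.choice(["low-intensity", "high-intensity"])
        if responded(pid, stage1):
            stage2 = "continue"                       # responders stay the course
        else:
            stage2 = rng.choice(["augment", "switch"])  # re-randomize nonresponders
        assignments[pid] = (stage1, stage2)
    return assignments

# Toy response rule for illustration only: high intensity always responds.
assignments = smart_assign(
    [f"site{i}" for i in range(8)],
    responded=lambda pid, arm: arm == "high-intensity",
)
for pid, (s1, s2) in assignments.items():
    print(pid, s1, "->", s2)
```

The resulting embedded sequences (e.g., low-intensity then augment) are what the analysis compares to derive decision rules for adaptive strategy delivery.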
Single Case Experimental Designs
SCEDs gather evidence about strategy effects by observing changes in outcomes of interest for each participant (or unit, e.g., a clinic). SCEDs are inherently within-subject designs, with participants acting as their own controls. This is achieved by sequencing strategy exposures and comparing outcomes during periods when a participant was exposed to the strategy with periods when no strategy was provided. Several SCED designs exist, such as ABAB and multiple baseline. SCEDs can provide information about effects with as few as six participants, making them highly efficient. These features are particularly useful for preliminary implementation studies in a single clinic.
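An ABAB sequence and its within-subject comparison can be sketched as follows. The outcome numbers are fabricated for illustration; a real analysis would also examine level, trend, and overlap across phases rather than only phase means.

```python
# Hypothetical ABAB single case experimental design for one clinic:
# alternating baseline (A, no strategy) and intervention (B, strategy
# delivered) phases, with the unit serving as its own control.
outcomes = {
    "A1": [2, 3, 2, 3, 2],   # first baseline phase (illustrative counts)
    "B1": [5, 6, 6, 7, 6],   # first intervention phase
    "A2": [3, 2, 3, 3, 2],   # withdrawal: return to baseline
    "B2": [6, 7, 6, 7, 7],   # reintroduction of the strategy
}

def phase_mean(values):
    return sum(values) / len(values)

# Compare pooled baseline phases with pooled intervention phases.
baseline = phase_mean(outcomes["A1"] + outcomes["A2"])
intervention = phase_mean(outcomes["B1"] + outcomes["B2"])
effect = intervention - baseline
print(f"baseline={baseline:.1f}, intervention={intervention:.1f}, "
      f"effect={effect:.1f}")
```

The withdrawal (A2) and reintroduction (B2) phases are what give the design its inferential strength: an outcome that falls when the strategy is removed and rises again when it returns is hard to attribute to anything else.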
Testing and refining OPTICC Center methods through the Implementation Lab
Researchers and implementers can begin their work in any of OPTICC’s evidence-based intervention implementation stages and move forward or backward depending on their optimization goals.