Value Assessment And Decision Making In The Face Of Uncertainty


Editor’s Note

This post is part of the Health Affairs Forefront short series, “Value Assessment: Where Do We Go Post-COVID?” The series explores what we have learned about value assessment and related issues during the coronavirus pandemic, how we might think about value in health care going forward, and how these ideas might translate into policy. The series is produced with the support of the Innovation and Value Initiative (IVI) and grew out of a group of webinars hosted jointly by IVI and ISPOR—The Professional Society for Health Economics and Outcomes Research. Included posts are reviewed and edited by Health Affairs Forefront; the opinions expressed are those of the authors.

The emergence of SARS-CoV-2 two years ago sparked global upheaval due not only to the physical danger posed by the virus but also to the immense and pervasive uncertainty it generated. This uncertainty was related to the virus itself, but also to the appropriate preventive, social, and public health measures; the availability of effective treatments and hospital capacity; and more. The uncertainty in the science made policy making challenging. Political pressures and the impetus to act quickly led to decision-making processes that were not transparent or well communicated and may have eroded trust in recommendations. Changes in guidance as new evidence emerged were often seen not as decisions to “follow the science” but instead as signs of incompetence, leading to distrust in public health institutions and the scientists who lead them.

Uncertainty in health care decision-making is not novel, but it was brought to the fore in the context of the COVID-19 pandemic. The stakes in health care-related decisions are high—illness and health care often bring great physical and financial burdens—and pervasive uncertainty in the evidence on which decisions are based makes it particularly fraught.

Health technology assessment (HTA) is an established process to inform health care decision-making in the face of just such uncertainty. HTA involves the systematic assessment of the expected benefits, risks, and costs of a particular health care intervention based on the latest evidence; it provides insight into the intervention’s clinical and economic value that can support decisions about its appropriate and efficient use. A defining characteristic of a thorough HTA is an explicit process to evaluate the implications of uncertainty in the evidence for treatment recommendations and policy decisions.

With increasing interest in using HTA to guide coverage decisions and health policies, the COVID-19 experience has important lessons for value assessment of new interventions and its communication to stakeholders. We explore several of these here and outline recommendations for future improvement.

Uncertainty In Comparative Effectiveness

Randomized controlled trials (RCTs) provide the most rigorous evidence on efficacy but frequently only compare a new treatment to placebo or a single competing intervention. This is sufficient for regulatory approval. However, to be confident about the clinical value of a new therapy, we need relative treatment effect estimates versus all of the competing interventions that are regularly used in routine practice for the patient population of interest. This information can be obtained by conducting a network meta-analysis of the multiple RCTs that each compare a subset of all the competing interventions of interest.

This powerful methodology is frequently used to inform HTAs of new interventions. But, even if a credibly performed network meta-analysis (or another type of indirect treatment comparison analysis) can provide valid findings, we may still be left with a degree of uncertainty in the resulting comparative effectiveness estimates (due to sampling error). This can make it impossible to tell whether the new intervention is more effective than existing therapeutic options.
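The way sampling error compounds in an indirect comparison can be made concrete with the simplest case, the Bucher method: when a new drug A and an existing drug C have each been trialed only against a shared comparator B, the indirect A-versus-C estimate subtracts the point estimates but adds the variances, widening the confidence interval. The sketch below uses purely illustrative effect sizes, not data from any real trial.

```python
import math

# Hypothetical log hazard ratios (treatment effect vs. a shared comparator B)
# from two separate RCTs; all values are illustrative, not from real trials.
d_ab, se_ab = -0.30, 0.12   # new drug A vs. B
d_cb, se_cb = -0.20, 0.15   # existing drug C vs. B

# Bucher indirect comparison of A vs. C via the common comparator B:
# point estimates subtract, but the variances ADD, widening the interval.
d_ac = d_ab - d_cb
se_ac = math.sqrt(se_ab**2 + se_cb**2)

lo, hi = d_ac - 1.96 * se_ac, d_ac + 1.96 * se_ac
print(f"indirect A vs C: {d_ac:.2f} (95% CI {lo:.2f} to {hi:.2f})")
# The interval spans zero: we cannot tell whether A is better than C,
# even though each trial on its own looked informative.
```

Here, two reasonably precise head-to-head trials still yield an indirect comparison whose confidence interval spans zero, which is exactly the situation described above in which the evidence cannot say whether the new intervention beats existing options.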

It is important to realize the implications of this uncertainty: When we stick with existing treatment options, there is a chance that we forgo health benefits; if we switch to the new treatment and it turns out to be worse, we obtain suboptimal outcomes instead. This uncertainty can be reduced via additional studies.

In a context like the SARS-CoV-2 pandemic—featuring a rapid pace of research and new evidence, and a frequently shifting clinical context and treatment landscape—it is exceedingly challenging to keep current with the latest evidence needed to guide policy and practice. This challenge has triggered initiatives to synthesize this dynamic evidence base via “living” systematic reviews or (network) meta-analyses. Introduced a few years ago, this approach is underpinned by continuous active monitoring of the evidence and includes in the analysis any important new study or data that becomes available. These analyses are especially relevant for COVID-19 therapies but also for any novel intervention where rapidly emerging evidence may change policy or practice decisions.

Uncertainty in comparative effectiveness arises from other sources as well. For instance, clinical trials frequently use surrogate endpoints as an alternative to the true clinical outcomes of interest to study efficacy. In oncology, for example, progression-free survival is used as a surrogate for overall survival for regulatory approval. This makes it challenging to assess a therapy’s clinical value when results for the outcomes most meaningful to decision makers and patients are not (yet) available, and the relationship between the surrogate endpoint and those true outcomes is uncertain.

The trade-off between obtaining earlier results regarding surrogate endpoints and waiting for results regarding the true clinical outcomes of interest is a key issue, but not exclusive to value assessment. Debates around the FDA’s accelerated approval process are centered around this very issue as well. COVID-19 vaccine studies have illustrated the importance of collecting multiple endpoints (e.g., any infection versus severe disease, hospitalization, or death) to understand value. If a new intervention is likely to be of clinical value based on the extrapolation of surrogate endpoints, further data collection regarding clinical outcomes is necessary to reduce uncertainty and corroborate value.

Widely acknowledged issues with representativeness and diversity in clinical trials also affect value assessments of new interventions. Study populations in clinical trials designed for regulatory purposes often do not reflect the diversity of the real-world target population, and they therefore cannot provide the evidence needed to understand the heterogeneity of treatment effects—how the effectiveness of therapies varies based on patient characteristics—across all relevant subgroups of the patient population. Whether interventions are likely to reduce or exacerbate existing health disparities has become an increasingly important question in the course of the COVID-19 pandemic. Unfortunately, such an assessment is highly uncertain, if not impossible, when minority populations are not represented in the pivotal studies.

Efforts to promote increased research participation and diversification of trial enrollment are critical to advancing health equity writ large, including in value assessment, but such efforts will take time to bear fruit. In the meantime, we cannot afford to ignore the potential equity implications of uncertainty-driven biases simply because we do not have the evidence. On the contrary, value assessors and HTA bodies should be held accountable for the explicit consideration of this uncertainty and its implications for the equity effects of policy and coverage decisions.

Of course, RCTs are not the only source of clinical evidence, and observational data collected in the routine care of patients can help to address limitations in the evidence available at the time of approval, including, but not limited to, patient representativeness. The evolving evidence base for vaccines and possible treatments of COVID-19 has clearly demonstrated the relevance of real-world evidence: real-world tracking of vaccine effectiveness is an obvious example. Although routine practice data will of course not yet be available for a new intervention, observational data from established competing interventions may sometimes inform what to expect from the new therapy when used in routine practice. Although there are significant limitations to using non-randomized real-world data to estimate relative treatment effects, this strategy can, if used sensibly, supplement trial evidence to help estimate subgroup effects. To better inform value assessments, we should incorporate these data into comparative effectiveness analyses more frequently, despite the more labor-intensive and less straightforward analytical requirements of such a synthesis.

Uncertainty In Cost-Effectiveness

Value assessments of new interventions are based not only on comparative effectiveness, but also on cost-effectiveness. For a cost-effectiveness analysis, we need credible estimates regarding the expected long-term outcomes and costs of alternative interventions. With this information, we can assess whether spending on the new intervention provides good “value” relative to current spending.

Unfortunately, there will never be an empirical study that provides all the required information directly. As an alternative, we use decision trees or simulation models combining multiple sources of relevant evidence according to mathematical equations to estimate long-term outcomes and costs. These streams of evidence cover the natural course of disease, impact of treatment, the relationship between short- and long-term outcomes, quality of life, life expectancy, and resource use over time. Uncertainty in the different sources of evidence used to parameterize the model—parameter uncertainty—is typically included in the model-based simulations to quantify the degree of uncertainty in model output. This is important because it sheds light on how confident one can be that a decision to shift resources to the new intervention from standard care will improve health.

However, structural uncertainty, the uncertainty that arises from the mathematical assumptions underpinning the simulation model, is frequently ignored. As a result, we may underestimate the uncertainty in our estimates of cost-effectiveness. Or worse, we may be completely wrong due to a false assumption. The widely different predictions of how the COVID-19 pandemic would develop over time according to different simulation models illustrate the importance of considering alternative modeling assumptions and incorporating this structural uncertainty into evaluations of economic value.
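A classic instance of structural uncertainty in cost-effectiveness modeling is survival extrapolation: two parametric curves can fit the observed trial follow-up equally well yet imply very different long-term benefit. The sketch below fits both an exponential and a Weibull curve to the same hypothetical observation (about 70 percent survival at two years; all parameters are illustrative) and compares the mean life-years each structure predicts.

```python
import math

# Two structural assumptions for extrapolating beyond ~2 years of
# trial follow-up; parameters are illustrative, chosen so that both
# curves reproduce the same observed ~70% survival at 2 years.
def exp_survival(t, rate=0.178):            # constant hazard
    return math.exp(-rate * t)

def weibull_survival(t, lam=0.118, k=1.6):  # hazard rising over time
    return math.exp(-lam * t**k)

# Mean survival = area under the curve (trapezoidal rule, 0-30 years).
def mean_years(surv, horizon=30, step=0.1):
    ts = [i * step for i in range(int(horizon / step) + 1)]
    return sum(step * (surv(a) + surv(b)) / 2 for a, b in zip(ts, ts[1:]))

print(f"survival at 2y: exp {exp_survival(2):.2f}, Weibull {weibull_survival(2):.2f}")
print(f"mean life-years: exp {mean_years(exp_survival):.1f}, "
      f"Weibull {mean_years(weibull_survival):.1f}")
# Both structures match the trial data, yet the extrapolated life-year
# estimates differ by years -- a gap parameter uncertainty alone misses.
```

Because the two structures agree wherever there are data, no amount of parameter-level sensitivity analysis within one of them reveals the gap; only running both assumptions does, which is the point made above about incorporating structural uncertainty into evaluations of economic value.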

Communicating And Conveying Uncertainty

Although uncertainty in estimates of comparative- and cost-effectiveness may be reduced through more studies and more thoughtful use of existing evidence, it cannot be eliminated. Reactions to preventive measures and potential treatments during the COVID-19 pandemic have made it clear that researchers and media have done a poor job of conveying the magnitude and meaning of uncertainty in the results of clinical studies and research models.

Uncertainty is difficult to interpret, with people naturally drawn to the apparent certainty of single numbers. Summaries to inform non-expert stakeholders about the findings of a study frequently deemphasize uncertainty and technical considerations. However, this may give a false certainty that can be especially problematic if implications change as new studies become available. COVID-19 public health measures and recommendations, which necessarily changed over time as evidence on the pandemic trajectory grew, were often presented as definitive. While presenting the changing guidance as certain may have been meant to ensure public acceptance, it also led to widespread confusion and distrust of public health officials who appeared to “get it wrong.”

We should do a better job of communicating the uncertainty in the value of new interventions and its potential implications for health care decision-making. It is key to explain to non-expert stakeholders that the scientific endeavor implies that conclusions regarding the value of a novel intervention may change when additional evidence becomes available.

Recommendations For An Uncertain Future

The COVID-19 pandemic has highlighted common areas of uncertainty associated with evaluating the clinical and economic value of novel interventions. It has forced us to reconsider how evidence is generated, the analytic methods underpinning value assessment, and the communication of findings to make value assessment more informative.

Increasingly, the evidence base for new interventions poses challenges for value assessment. Going forward, we offer the following recommendations. To inform timely health care decision-making, we need a more dynamic approach to value assessment, characterized by living evidence synthesis and living health economic evaluations. This is especially true for interventions where the evidence base at approval is uncertain but likely to develop rapidly. Additional evidence considered in such a synthesis should include studies with longer follow-up, additional endpoints, and better representation of minority populations, including well-designed observational studies.

In addition, economic models should include both parameter uncertainty and structural uncertainty to fully characterize the implications for decision-making. Reports summarizing the clinical and economic value should not only highlight the degree of uncertainty in the findings, but also provide explicit recommendations about additional data needs to make evaluations more informative.

It is also essential that gaps in evidence regarding treatment effects in minority populations be formally integrated into the framework used to determine the value of new interventions.

Finally, relevant stakeholders should develop a data infrastructure that includes freely available data sources which may help alleviate uncertainty, such as long-term registries and patient-representative databases. Better ways to convey the sources and magnitude of uncertainties in health technology assessments to different stakeholders should be developed.

Ultimately, we can only be certain that we are making appropriate policy decisions given the available evidence when the uncertainty that is inherent in that evidence is acknowledged in the analyses informing those decisions. Transparency and clear communication are essential to ensure trust in value assessment among all stakeholders.  
