Why Isn’t Evidence-Based Policy More Common?

Who doesn’t believe evidence should guide our decisions? Basing care and policy on evidence intuitively seems the best approach, so why is it so difficult to accomplish? A commentary in Public Health Reports offers a reason:

“Public health practitioners and policymakers want to know if an intervention works, how it works, if it is appropriate and acceptable for their clients and constituents, and if it is cost effective.”

But there is little reliable evidence for policymakers to use; while research and policy may share the same goals, their agendas appear to be out of sync.

The Gold Standards

Randomized controlled trials (RCTs) are the gold standard of the scientific method: isolating variables and influences, considering cause-and-effect models, producing statistical results. Unfortunately, they are expensive undertakings, especially when you are looking for changes that may take years to appear, like the value of exercise or the effect of sin taxes on our health rather than our behavior. As a result, most of these studies use “groups of individuals that differ from the target population, tend to be too short to assess long-term effects, are not always able to identify variation in effects across subgroups …”

If RCTs are the gold standard, systematic reviews and meta-analyses are a gold-plated alternative form of evidence to consider. But you already know their weaknesses: bias introduced by the selection of papers to review, the apples-and-oranges problem of comparing different study designs, and the difficulty of evaluating “interventions that are complex, variable, and context dependent …”

Alternatives

The authors offer alternative study designs that can answer the implementation questions policymakers and practitioners actually face. They include:

Pre-post studies, where a population is examined before and after an intervention, e.g., soda purchases before and after the institution of a tax. Unfortunately, these studies tell us nothing about what the same population would have done had no intervention occurred. While these studies are an interventionist’s delight, for those of us in the trenches they may overpromise and underdeliver; you never really know whether the behavior change was a result of the intervention or would have occurred on its own. Again, many of the sugar-tax articles try to provide a “control” group by looking at the population in neighboring areas, but then there is the problem of assuring that the two groups are similar enough and not “confounded” by hidden variables.
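
To make that concern concrete, here is a minimal sketch in Python, using entirely invented soda-purchase numbers, of how a naive pre-post estimate compares with one that borrows a neighboring area as a rough control (a crude difference-in-differences). None of the figures come from the commentary.

```python
# Hypothetical pre-post comparison with a neighboring "control" area.
# All numbers are invented for illustration, not real soda-tax data.

# Average weekly soda purchases per household (hypothetical).
taxed_area = {"pre": 10.0, "post": 7.5}    # city that adopted the tax
neighbor_area = {"pre": 9.8, "post": 9.0}  # nearby city with no tax

# Naive pre-post estimate: attributes the entire change to the tax.
naive_effect = taxed_area["post"] - taxed_area["pre"]

# Difference-in-differences: subtract the change the neighbor saw anyway,
# as a stand-in for what would have happened without the tax.
did_effect = naive_effect - (neighbor_area["post"] - neighbor_area["pre"])

print(f"Naive pre-post change:              {naive_effect:+.1f} purchases/week")
print(f"Difference-in-differences estimate: {did_effect:+.1f} purchases/week")
# Both estimates still assume the two areas are comparable and not
# confounded by hidden variables.
```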

A more practical suggestion is “randomized encouragement,” where everyone in the study group has the same information, but some are encouraged while others are left to their own devices. If nothing else, it will give us information on the power of encouragement itself; for example, the value of daily weigh-ins in weight-reduction programs, or of reminder calls to patients with heart failure or diabetes.
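
Here is a minimal sketch of what such a design might look like, again with simulated rather than real data; the reminder-call scenario, the 200 participants, and the adherence rates are all assumptions made up for the example.

```python
import random

# Randomized-encouragement sketch with simulated data: everyone gets the
# same information, but a random half also receive reminder calls
# ("encouragement"). Comparing the two arms estimates the effect of the
# encouragement itself.
random.seed(0)
participants = [f"patient_{i}" for i in range(200)]
encouraged = set(random.sample(participants, k=100))  # random half get reminder calls

def weighed_in(patient: str) -> bool:
    # Hypothetical outcome: did the patient do their daily weigh-in this week?
    # Simulated so the encouraged arm is somewhat more adherent.
    adherence_rate = 0.60 if patient in encouraged else 0.45
    return random.random() < adherence_rate

control = [p for p in participants if p not in encouraged]
rate_encouraged = sum(weighed_in(p) for p in encouraged) / len(encouraged)
rate_control = sum(weighed_in(p) for p in control) / len(control)

print(f"Adherence with reminder calls:    {rate_encouraged:.0%}")
print(f"Adherence without reminder calls: {rate_control:.0%}")
print(f"Estimated effect of encouragement: {rate_encouraged - rate_control:+.0%}")
```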

Another useful approach is to take that hard-won RCT data and extrapolate from the characteristics of its participants to real-life populations; that would at least give us a ballpark estimate of the efficacy of the treatment for the community. If a population of 30- to 50-year-olds improved their health by 50%, and these individuals made up 20% of the population, we might expect a 10% improvement in health overall.
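
The back-of-the-envelope arithmetic, using those same hypothetical figures:

```python
# Extrapolating RCT results to the community, using the hypothetical figures
# above: trial-like 30- to 50-year-olds improve their health by 50%, and
# that group makes up 20% of the population.
improvement_in_subgroup = 0.50  # relative improvement seen in the RCT
share_of_population = 0.20      # fraction of the community resembling trial participants

expected_overall = improvement_in_subgroup * share_of_population
print(f"Ballpark community-wide improvement: {expected_overall:.0%}")  # 10%
```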

The authors end with a few recommendations. First, policymakers need to learn how to judge evidence; it will come as no surprise that not all are conversant with evaluating scientific studies. Second, and I have to say I find this a bit cynical, they recommend that “more evidence needs to be shared in ways that are sensitive to the demands on practitioners’ and policymakers’ time, resources, and expertise.” You know: pre-digested fact sheets and policy briefs, where the work of understanding has been reduced to binary yes-no options and where advocacy may be quietly hidden in the editing. Their most practical suggestion is that researchers and practitioners communicate better, sharing their needs and information.

Source: Bringing Evidence to Bear on Public Health in the United States, Public Health Reports, DOI: 10.1177/0033354918788879