From A Risk-Assessment Perspective, EPA Getting Rid Of 'Secret Science' Makes Sense

Science and Judgment in Risk Assessment - the “Blue Book”

Environmental Protection Agency Administrator Scott Pruitt’s recent announcement that EPA will not use “secret science” (that is, science for which the underlying data are not available) is a challenging policy to implement. EPA routinely receives unpublished toxicity studies for chemicals intended for commerce, not all important scientific findings are publishable, and scientific journals generally lack the space to include all underlying data.

Much has been made in recent weeks of this new EPA policy, including an op-ed opposing it by former EPA Administrator Gina McCarthy and former acting Assistant Administrator Janet McCabe.

The media coverage has focused attention on what makes science acceptable and useful in EPA’s rulemaking. But missing from this coverage is the perspective of risk scientists charged with protecting public health. In the case of EPA, it is often not enough for a single positive study to be published in a peer-reviewed journal. Such work often needs replication because, at the conventional 5 percent significance threshold, a positive finding will occur by chance alone in about one out of every 20 studies even when no real effect exists.
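The one-in-20 figure follows from the conventional 5 percent significance threshold used in hypothesis testing. A short simulation makes the point concrete: if many studies test an exposure that truly has no effect, roughly 5 percent of them will still report a "positive" result by chance. The sketch below is purely illustrative; the study count, sample size, and random seed are arbitrary choices, not values from any actual EPA analysis.

```python
# Illustration: with no real effect, a two-sided test at the 5%
# significance level still produces a "positive" finding in roughly
# one out of every 20 simulated studies, purely by chance.
import random
import statistics

random.seed(42)

ALPHA_Z = 1.96      # two-sided critical value for roughly p < 0.05
N_STUDIES = 10_000  # simulated studies, each with no true effect
N_SUBJECTS = 50     # subjects per simulated study (arbitrary)

false_positives = 0
for _ in range(N_STUDIES):
    # Outcomes drawn from one null distribution: there is no real effect.
    sample = [random.gauss(0.0, 1.0) for _ in range(N_SUBJECTS)]
    mean = statistics.fmean(sample)
    se = statistics.stdev(sample) / N_SUBJECTS ** 0.5
    z = mean / se
    if abs(z) > ALPHA_Z:  # study "detects" an effect that is not there
        false_positives += 1

rate = false_positives / N_STUDIES
print(f"False-positive rate: {rate:.3f}")
```

The simulated false-positive rate comes out close to 0.05, which is why risk scientists look for replication, or at least coherence with the broader body of data, before treating a single positive study as decisive.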

If a study cannot be replicated, then it at least needs to make sense within the pattern of available data. For pesticides regulated by EPA, these data are often from hundreds of studies done according to federal guidelines.

Studies that have not been replicated, or that do not fit the overall pattern, are still considered, however. Risk scientists will often contact the authors to obtain additional information so they can conduct their own analysis, a common practice within EPA.

When such data are forthcoming, without the need to break confidentiality or disclose confidential business information, independent analyses can be conducted and the public health is better served. But when such information is withheld by the authors, government risk scientists are often left with a dilemma.

For example, imagine that a series of studies comes out on a single human population exposed to a commonly used insecticide, showing an unexpected effect at extremely low exposures. The finding has not been replicated, and it clashes with multiple animal and human studies that point to danger only at much higher exposures.

In this case, EPA scientists would ask the authors for the underlying data to confirm this unexpected low-dose effect. But let's say they can't get it. EPA is then left with neither confirmatory studies, nor information that makes sense in light of other studies, nor the ability to conduct its own analysis. Understandably, Pruitt has chosen a policy of not using such studies.

There is one sense in which McCarthy and McCabe are spot on: the judgment over which epidemiology and toxicology data to use for risk or safety assessment should be left to risk scientists. But from my perspective as a risk scientist, Pruitt’s decision is still correct. The public’s interest is best served when science is replicable and consistent with other information. When studies cannot be replicated, or are inconsistent with other information, access to their underlying data is vital to independent analysis. When the underlying data are not provided, it is difficult to use such a study to make a credible risk judgment, much less to support national rulemaking.

In short, the public is often worried about chemical exposure, as they should be when such exposure exceeds a safety level. But the public’s interest is best served by trusting in experts dedicated to public health protection, not by withholding scientific data from independent analysis.

This article is republished from the Washington Examiner.