In an act of contrition, Neal Barnard and his colleagues write about "The Misuse of Meta-analysis in Nutrition Research" in this week's JAMA. I have written about meta-analysis previously, and I was happy to see Dr. Barnard cover some familiar ground. He points out that nutritional research is a particular challenge for meta-analysis because methodologies, populations, and comparisons differ across studies. Additionally, dietary studies are rarely randomized clinical trials because of their expense, the large populations needed to demonstrate small effects, and the difficulty of blinding people to what they are eating.
From a methodological point of view, nutrition is difficult to quantify. Some days, when I am disciplined, no bread or pasta; other days, as I pass the Sullivan Street Bakery, half a loaf of bread can disappear. So how do you measure what I eat? Even when researchers can quantify nutrient intake relatively accurately, those amounts do not come in neat continuous units: I do not eat 0.43 loaves of bread. Researchers therefore divide intake into groups, like quartiles (four equal-sized groups). When meta-analysts seek to combine studies, these varying nutritional measures become a source of even greater confusion. Combining different measurements from multiple studies reduces the already-low informative value of the meta-analysis because you are mixing apples with oranges.
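To make the quartile problem concrete, here is a minimal Python sketch; the intake figures are hypothetical, chosen only to show how a continuous measurement collapses into four coarse categories.

```python
# Minimal sketch: binning continuous nutrient intake into quartiles.
# All intake values are hypothetical, for illustration only.

def quartile_bins(values):
    """Assign each value to a quartile (1-4) by rank order."""
    ranked = sorted(range(len(values)), key=lambda i: values[i])
    bins = [0] * len(values)
    for rank, idx in enumerate(ranked):
        bins[idx] = rank * 4 // len(values) + 1
    return bins

# Hypothetical daily bread intake, in grams, for eight people
intake = [10, 250, 40, 95, 300, 60, 120, 180]
print(quartile_bins(intake))  # → [1, 4, 1, 2, 4, 2, 3, 3]
```

Note what the binning hides: 120 g and 180 g land in the same quartile while 95 g and 120 g are split across two, so two studies that each report "highest quartile versus lowest" may be comparing very different actual intakes.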
For a meta-analysis to be even remotely valid, the populations under study also need to be similar. For example, comparing the calorie intake of active millennials with that of more sedentary seniors may yield very different rates of obesity. And while studies can be precise in describing the population studied, that description may not capture unknown variables that make "similar" populations very dissimilar. For example, a study of vegetarians may not distinguish between vegans and ovo-lacto vegetarians (those who also consume dairy products and eggs); their diets may differ substantially in calcium and cholesterol. Again, dissimilar populations reduce the informative value of the meta-analysis.
The authors also address the effect of substitution: when we eat less of one thing, we often eat more of another. The research on sugary beverages is a great example. As soda consumption decreases, juice consumption increases, particularly in "at-risk" populations. A study reporting a decrease in soda often will not consider the increase in an equally sugary juice that participants substitute for it. Failing to account for substitution contributes to studies first recommending a nutrient as good and later reporting that it is bad. For example, the incessant drumbeat of nutritional studies on the evils of butter led to its replacement, margarine. The drums are beating once again as butter is redeemed and restored.
Finally, the authors note that, even when all of these other considerations are taken into account, some studies are simply better than others. A meta-analysis that includes poorly performed or poorly reported studies again reduces the value of its findings. A meta-analysis is only as strong as its weakest link.
They make several recommendations:
- Requiring review by editors with experience in meta-analysis and the subject matter
- Requiring authors to make sure that their primary article’s data is correct and reproducible
- Prioritizing meta-analysis of pooled data over pooled conclusions
- Scrutinizing conflicts of interest carefully, for both the meta-analysis and the primary studies it includes
Responsible peer review should already meet the first two recommendations. Studies using openly shared pooled data are statistically stronger and can be more informative, but if there is space to fill, will the editor be as discriminating? Finally, there is the conflict-of-interest question. The authors write, no surprise, that industry-funded studies are suspect for these conflicts. But, in their words, “Even in the absence of commercial funding, bias is an important consideration… .” An absence of financial conflict of interest does little to identify researchers whose bias is manifested in the articles they choose for the meta-analysis. That is perhaps the most significant ethical abuse, and it will not be readily corrected by changes in peer review.
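The distinction between pooling data and pooling conclusions can be made concrete with a toy sketch. The numbers below are entirely made up; the point is only that when studies of very different sizes are averaged at the level of their summary statistics, a small study counts as much as a large one.

```python
# Toy illustration: pooling participant-level data vs. pooling
# study-level conclusions. All values are hypothetical.

study_a = [1.0, 1.2, 0.8, 1.1, 0.9, 1.0, 1.3, 0.7]  # n = 8
study_b = [2.0, 2.4]                                  # n = 2

# Pooled data: every participant counts equally.
pooled_data_mean = (sum(study_a) + sum(study_b)) / (len(study_a) + len(study_b))

# Pooled conclusions: each study's summary counts equally,
# no matter how few participants stand behind it.
mean_a = sum(study_a) / len(study_a)
mean_b = sum(study_b) / len(study_b)
pooled_conclusion_mean = (mean_a + mean_b) / 2

print(round(pooled_data_mean, 2))        # 1.24
print(round(pooled_conclusion_mean, 2))  # 1.6
```

Real meta-analyses weight studies more carefully than this naive average, but the sketch shows why access to the underlying pooled data is statistically stronger than combining each paper's bottom line.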