A well-publicized paper on suicide rates by occupation might rest on faulty data. A reanalysis is underway, with the Centers for Disease Control and Prevention taking action.
The Centers for Disease Control and Prevention (CDC) is taking action on possibly erroneous data reported in a highly cited paper on suicide rates stratified by occupational group. After the study's authors reported that coding errors might have produced flawed results, researchers are undertaking a “reanalysis” of the findings.
So says the published statement:
“Recently, MMWR Editors were informed by the authors of “Suicide Rates by Occupational Group — 17 States, 2012” (1) that some results and conclusions might be inaccurate as a result of coding errors for certain occupational groups. The authors are undertaking a thorough reanalysis of the data. This notice is to alert readers about the coding errors while the reanalysis is conducted to assess the validity of results and conclusions in the publication.”
Further comments by Courtney Lenard, a spokeswoman for the CDC injury center, were reported by ABC News:
"CDC is committed to quality and is working to meet our usual standards for excellence. The scientific authors are working diligently to reanalyze and publish corrected data. We apologize for the errors in the report. Suicide is a serious public health problem that can have lasting harmful effects on individuals, families, and communities."
What did the original paper say?
Among the many statistics the original paper presented were the following:
“Persons working in the farming, fishing, and forestry group had the highest rate of suicide overall (84.5 per 100,000 population)...followed by workers in construction and extraction (53.3), and installation, maintenance, and repair (47.9)... and among males (90.5); the highest rates of suicide among females occurred among those working in protective service occupations (14.1). Overall, the lowest rate of suicide (7.5) was found in the education, training, and library occupational group.”
In their conclusions, the authors recommended prioritizing prevention strategies targeted toward these populations.
Why is “reanalysis” of such coding errors meaningful?
If the findings are deemed faulty due to coding errors, then the limited resources that exist might have been directed to the least, as opposed to the most, at-risk populations. After listing the occupations they identified as being in peril, the authors go on to speculate about profession-specific factors that could contribute to suicidal ideation. If those conclusions are wrong, they could have set off a host of negative chain reactions.
Additionally, much of their information was drawn from the
“CDC’s National Violent Death Reporting System (NVDRS)” which “collects information on violent deaths, including suicides, from multiple sources, including death certificates, coroner and medical examiner reports, and law enforcement reports, to monitor trends, understand violent death characteristics and risk factors, and inform prevention efforts.”
Death certificates, for example, are not uniform: they can be more thorough in one region than another, or when completed by one person rather than another. They often don't tell the entire story, and their use is imperfect. Denominators were drawn from the U.S. Census Bureau's Current Population Survey March Supplement, which asked about a person's primary occupation over the previous calendar year. This does not reflect the cumulative impact of prior life events, jobs, or careers.
In the original paper, the authors speculate considerably on causes based on the data they selected, which does little to address confounders. They then drew conclusions from that information that influenced public health policy.
If, upon reanalysis, the work is deemed unsound, then its prior publicity and any measures implemented as a result cannot be undone. Still, the position the authors and the CDC have taken is quite commendable, especially given the high stakes of the subject matter. This should be the standard: results were found to be potentially flawed, the data are being reexamined, the process underway was made public expeditiously, and the outcome will be as well once further elucidated. That's good science. And, hopefully, there will be lessons learned on how to avoid a recurrence, and even on how best to collect and analyze data to yield more accurate, informative conclusions.