ACSH Explains: Measuring Particulate Matter and Health

The “pivotal regulatory science” used in setting air pollution standards consists of epidemiological studies measuring the effects of particulate matter (PM) on our health. Those studies have been controversial, and the recently proposed changes designed to improve the transparency of regulatory science at the EPA have brought them to the forefront. To help readers be better-informed citizens, ACSH looks at the difficulties of studying the air around us.

Particulate matter includes all the solids and liquids suspended in the air, a varying combination of particle types, sizes, and sources. That combinatorial complexity creates the first problem in air pollution science: describing the air. The most salient descriptor has been size, on two counts. First, size helps in understanding how the particles get into, and are blown about in, the atmosphere. Second, particle size affects how deeply particles penetrate our lungs, and which defenses our body naturally brings to bear in removing them. Because it is the particles’ movement we are most interested in, size is characterized by aerodynamic diameter, which takes into account both density and shape.

Coarse particles, PM10 (up to 10 micrometers across), are the largest of the particles and are created when still larger materials, like soil, pollen, dust, even sea spray, break up and enter the atmosphere. Fine particles, PM2.5 (2.5 micrometers or smaller), are formed from gases produced directly through combustion, or indirectly, as those gases mix with ones already present in the atmosphere. PM10 and PM2.5 come from a variety of natural and human sources, but as a generalization, PM2.5 is anthropogenic, the result of human activity.

The underlying human sources of PM mean that the relative amounts of PM10 and PM2.5 vary spatially, depending on our activities and location. When building, we create dust; tilling the soil, more dust, and pollen. Put a road next to the shoreline, and you can add an element of sea spray: all PM10s. Fertilize the fields, and you can add a dollop of ammonia from the nitrogen in the fertilizer, a PM2.5 precursor; the tractor’s engine releases more vaporized products, like sulfur and nitrogen oxides. These admixtures can be surprisingly complex. The coloration of the atmosphere in the Blue Ridge Mountains results from the natural release of terpenes, volatile organic compounds (VOCs), interacting with low levels of naturally occurring ozone and water to form tiny particles that scatter light while reducing forest evaporation. A charcoal campfire will add very little to the PM2.5 levels in this setting. The same charcoal fire, used in India or China as a fuel source for cooking, will be the primary source of indoor PM10 and PM2.5.

One final factor increases the difficulty of describing air pollution: the air is not static and carries PM10 and PM2.5 to new locations. PM10, being larger and heavier, tends to settle out a short distance downwind of its source, while PM2.5, being smaller, can remain aloft for more extended periods and travel hundreds of miles. The movement of these particles is also affected by rain and temperature. The atmosphere’s composition of particulate matter is thus highly variable across space and time; this heterogeneity makes it difficult to calculate individual exposure, other than in general terms.
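A rough back-of-the-envelope calculation shows why the two size classes travel so differently. Under Stokes’ law, a sphere’s settling speed grows with the square of its diameter, so a 10-micrometer particle falls roughly 16 times faster than a 2.5-micrometer one. The sketch below is illustrative only: it assumes unit-density spheres (the definition of aerodynamic diameter) and ignores slip correction and turbulence.

```python
# Illustrative Stokes'-law settling speeds for unit-density spheres.
# Simplified physics: no slip correction, no turbulence, still air assumed.
RHO = 1000.0   # particle density, kg/m^3 (unit density, per aerodynamic diameter)
G = 9.81       # gravitational acceleration, m/s^2
MU = 1.8e-5    # dynamic viscosity of air, Pa*s

def settling_velocity(d_m: float) -> float:
    """Terminal settling velocity (m/s) of a sphere of diameter d_m (meters)."""
    return RHO * G * d_m ** 2 / (18 * MU)

v10 = settling_velocity(10e-6)   # upper bound of PM10
v25 = settling_velocity(2.5e-6)  # upper bound of PM2.5
print(f"10 um: {v10 * 1000:.2f} mm/s; 2.5 um: {v25 * 1000:.3f} mm/s; "
      f"ratio {v10 / v25:.0f}x")
```

The quadratic dependence on diameter is the whole story here: halving the diameter quarters the settling speed, which is why the fine fraction can drift for hundreds of miles while the coarse fraction drops out near its source.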

Perhaps someday we might be able to use a Fitbit-like device to measure our exposure to PM, but not now. Scientific studies must collect samples from which we make statistical inferences, using assumptions to simplify the task. The earliest pivotal science, e.g., the Harvard Six Cities Study, used data from one monitoring station within each community to determine exposure. But since PM is affected by both human activity and the particular geography, it is all about the location, location, location of those monitors. Put one next to a highway, and you get one result; near the ocean or an agricultural area, another set of values. Earlier scientific models incorporated location to some degree but did not account for wind, rain, and temperature with the sophistication of more recent models. The sensitivity of our statistical modeling has improved over time, making much of the earlier science not so much incorrect as less precise. That makes it tough to compare the early studies underlying “pivotal regulations” to the scientific findings of today.
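To make the monitor-placement problem concrete, here is a hypothetical sketch of one common technique for estimating exposure between monitors: inverse-distance weighting. The monitor locations and readings are invented, and real regulatory models are far more elaborate, but the sketch shows how the estimate for a home is pulled toward whichever monitor happens to sit nearby.

```python
# Hypothetical inverse-distance-weighted exposure estimate.
# Monitor positions (km) and PM2.5 readings (ug/m^3) are invented for illustration.
def idw_exposure(monitors, home, power=2):
    """Estimate PM2.5 at `home` as a distance-weighted average of monitor readings.

    monitors: list of ((x, y), reading) tuples; home: (x, y).
    """
    num = den = 0.0
    for (mx, my), reading in monitors:
        d = ((mx - home[0]) ** 2 + (my - home[1]) ** 2) ** 0.5
        if d == 0:
            return reading        # home sits exactly on a monitor
        w = 1.0 / d ** power      # closer monitors dominate the average
        num += w * reading
        den += w
    return num / den

monitors = [((0, 0), 35.0),   # monitor beside a highway: high reading
            ((5, 5), 8.0)]    # monitor near the shoreline: low reading
print(idw_exposure(monitors, (1, 1)))  # estimate pulled toward the highway value
print(idw_exposure(monitors, (4, 4)))  # estimate pulled toward the shoreline value
```

A single-monitor design, as in the earliest studies, is the degenerate case of this sketch: everyone in the community is assigned the one reading, however unrepresentative its location.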

Assumptions further simplify our exposure models. A frequent assumption is that outdoor measurements faithfully reflect indoor conditions. But technologic improvements in exhaust systems have greatly diminished outdoor PM, and architectural changes have created more closed spaces, so in many instances, indoor levels differ significantly from outdoor measurements. Additionally, to determine exposure, you have to account for where individuals spend their time breathing. Occupational exposure is not limited to coal mines, construction, or agriculture. Individuals working in nail salons have more exposure to the previously mentioned VOCs, a source of PM2.5. For women in India and China, the use of charcoal fires for cooking changes their exposure dramatically.

As if these were not enough variables, we must also consider changing PM levels over time. There can be short-lived alterations, say when the nearby road is being paved, or a utility has to fire up its plant to compensate for a cold day. And there are what have been called long-term, but are really intermediate, timeframes, varying through the day, week, or season; the recent forest fires in California and the planting seasons in the Midwest come to mind. To account for these temporal disparities, PM levels and exposure standards differ for short-term and long-term exposure. [1]

Regulatory air pollution studies generally combine an idealized, sophisticated model of atmospheric PM with a simplified model of human exposure; much of the controversy centers on the uncertainty of both. There are two general approaches to gauging uncertainty: confidence intervals and sensitivity analysis. Statistically derived confidence intervals state that we are 90 or 95% certain that our result lies within a range; greater uncertainty creates a broader range. Sensitivity analysis looks at how varying the model’s inputs affects the results. If changing an input, like PM levels, does not significantly change the model’s findings, the uncertainty of the PM level has little effect on the model’s certainty. If the model’s results are very sensitive to changes in PM levels, then the imprecision, or uncertainty, of the PM levels makes the results more ambiguous.
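A minimal sketch of that one-at-a-time sensitivity check, using an assumed linear concentration-response relationship (invented for illustration, not the EPA’s actual model): nudge the PM input up and down and watch how much the modeled risk moves.

```python
# Toy one-at-a-time sensitivity analysis on an assumed linear
# concentration-response model. The coefficient and PM level are invented.
def excess_risk(pm, beta=0.01, baseline=1.0):
    """Modeled excess health risk relative to baseline at PM2.5 level `pm`."""
    return baseline * beta * pm

pm_central = 12.0  # ug/m^3, hypothetical central exposure estimate
for pm in (pm_central * 0.8, pm_central, pm_central * 1.2):  # +/-20% input swing
    print(f"PM = {pm:5.1f} ug/m^3 -> modeled excess risk {excess_risk(pm):.3f}")
# In this linear toy, a 20% swing in the input produces a 20% swing in the
# output: the conclusion is exactly as uncertain as the exposure estimate.
```

A real sensitivity analysis would vary many inputs at once, but the logic is the same: the steeper the model’s response to an uncertain input, the more that input’s imprecision clouds the result.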

Air pollution studies, like genetic studies, are dependent upon context, e.g., the local sources of PM, the regional geography, and the climate. Results do not necessarily translate from one region to another, let alone between countries. When these studies focus on health risk, another uncertainty is introduced: humans and their behavior. Human vulnerability to PM is variable, affecting the young and the elderly more than others. And that vulnerability interacts with behavior, like a history of smoking or an occupational exposure that has resulted in chronic obstructive pulmonary disease, or even with socio-demographic measures: poverty is more frequently associated with geographic sources of PM, e.g., high-density housing along highways.

All models are wrong

By their nature, models are simplifications of the real world and cannot help being wrong to varying degrees. I have tried to suggest ways in which models of air pollution and its effect on our health are inaccurate. That does not mean we should ignore them or wait until our technology has progressed enough to overcome the flawed measurements. It just means that we should be a bit more humble in our actions. There will never be a definitive threshold below which everyone is well; it will always be a tradeoff between the perceived risk and the anticipated cost of remediation. When regulatory agencies judge that tradeoff, we introduce one last variable, the politics of special interests. Creating policy is a human activity, ideally based on scientific evidence. But when the evidence is itself uncertain, we can only hope that policy is scientifically informed and weighs our long-term welfare over any short-term costs or gains.

 

[1] According to the EPA, measured PM10 and PM2.5 levels are both 40-50% below our current regulatory standards, standards that are themselves far more stringent than those initially put in place in 1997.