Chemicals, Cancer and Common Sense

Risk Estimation:  It would come as a surprise to most people to learn that so-called quantitative cancer risk assessments do not actually quantify true risk.  The actuarial risk calculated by insurance companies (e.g., the risk of dying in an auto accident) is based on actual data on the incidence of disease-, disaster-, or accident-related mortality.  However, actuarial data on disease incidence or mortality in humans resulting from very low exposures to chemicals in the environment simply do not exist.  Instead, data from occupational studies of more heavily exposed humans are used, where available.  However, epidemiological studies seldom include the quantitative exposure data needed for a quantitative risk assessment.  To create a surrogate for this missing low-dose data, animals are exposed to very high doses of substances to guarantee that statistically reliable results are obtained.  These animal results (or, much less often, the results from occupationally-exposed humans) are then extrapolated, using mathematical models incorporating numerous default assumptions, to humans exposed to very low environmental levels of the same substance.  The resulting cancer risk calculated by regulatory agencies is, of necessity, entirely theoretical and will vary greatly depending on the assumptions made and the models used in the assessment.  The limitations of quantitative risk assessment were clearly acknowledged in EPA's 1986 Guidelines for carcinogen risk assessment, as follows:

“The risk model used by EPA [i.e., the linearized multistage model] leads to a plausible upper limit to risk that is consistent with some proposed mechanisms of carcinogenesis.  Such an estimate, however, does not necessarily give a realistic prediction of the risk.  The true value of the risk is unknown, and may be as low as zero.”  (U.S. E.P.A., 1986; emphasis added.)

Thus, EPA's cancer risk assessments, which were originally developed in the 1970s to facilitate and document regulatory decisions regarding cleanup standards and remedial actions at contaminated sites, were never intended to be used for the "realistic prediction" of actual health consequences.
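
To make the mechanics concrete, here is a minimal sketch, in Python, of the low-dose behavior of the linearized multistage (LMS) model discussed above.  The slope factor value is hypothetical, chosen purely for illustration; in regulatory practice it is a 95% upper confidence bound fitted to high-dose animal data, which is why the output is an upper bound on a theoretical risk rather than a prediction.

```python
import math

# Hypothetical upper-bound slope factor (per mg/kg-day); the value is invented
# for illustration and is not tied to any particular chemical.
Q1_STAR = 0.05

def lms_upper_bound_risk(dose: float) -> float:
    """Upper-bound lifetime excess cancer risk in the LMS low-dose form:
    risk = 1 - exp(-q1* * d), which is approximately q1* * d for small d."""
    return 1.0 - math.exp(-Q1_STAR * dose)

# A very low environmental dose yields a purely theoretical upper bound;
# per EPA (1986), the true risk "may be as low as zero".
print(f"{lms_upper_bound_risk(2e-5):.2e}")  # ~1.00e-06, i.e., one in a million
```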

That animal models are imperfect surrogates for human beings is not a matter of debate in the scientific community, nor should the results of carcinogen classification by EPA, IARC, or NTP be interpreted to suggest otherwise.  Of approximately 600 chemicals evaluated by IARC, 34% were clearly carcinogenic in animals, but only about 10% of those were also considered by IARC to be human carcinogens based on "sufficient" evidence (Meijers et al., 1992).  This low correspondence was partially due to the fact that no adequate human data existed for almost 61% of the animal carcinogens.  However, of the 36 known human carcinogens, 15 (41.7%) did not show sufficient evidence of carcinogenicity in animals (Meijers et al., 1992).  In many, if not most, cases, the high rate of positive results (30-50%) in cancer bioassays has less to do with the intrinsic carcinogenicity of the chemicals being tested than with the magnitude of the doses used, typically one-quarter, one-half, and the full Maximum Tolerated Dose (MTD).  About 40% of all animal carcinogens tested may cause cancer by promotion mechanisms, which can be expected to exhibit relatively high thresholds.

The Zero-Threshold Concept:  Most regulatory agencies use models of chemical carcinogenesis that incorporate the zero-threshold policy, i.e., the assumption that there is no dose so small (other than zero) that it could not cause cancer in someone, somewhere.  (The Linear Multi-Stage or LMS Model is the best known example.)  Contrary to popular belief, this zero-threshold policy for carcinogens is just that, a policy, and not an established scientific principle.  In fact, it runs contrary to the most fundamental principle of the science of toxicology, which, in abbreviated form, may be stated as "THE DOSE IS THE POISON".  (Physicians and microbiologists implicitly recognize the same pharmacological/biological principle in the form of the "therapeutic index" and the concept of virulence, respectively.)  In the late 1970s, the largest chronic bioassay ever undertaken (the so-called “megamouse” or ED01 Study) provided compelling experimental evidence of a threshold for chemically-induced bladder cancer in mice; even mouse liver carcinogenesis (which has an unusually high spontaneous rate in laboratory rodents and typically exhibits an apparently linear dose-response when the LMS model is used) exhibited a threshold when the data were re-evaluated by the Society of Toxicology using best-fit models that incorporated time-to-tumor data (SOT, 1981) instead of the Linear Multi-Stage Model used by FDA and EPA (Staffa and Mehlman, 1979).  Since that time, it has become increasingly apparent, on the basis of mechanistic considerations alone, that many, if not all, animal carcinogens must exhibit thresholds, especially the so-called “promoters” and weak “initiators” with promoting activity (Williams & Weisburger, 1991, pg. 154).  These thresholds will typically exceed, by substantial margins, human exposure levels encountered outside of occupational settings.

The zero-threshold concept is based on (1) the historical observation that radiation-induced genotoxicity (i.e., DNA damage) exhibited no apparent threshold over the observed dose range and (2) the assumption that even a single DNA adduct or mutation has some finite chance of ultimately resulting in the formation of a full-blown malignancy.  The second assumption derives from the fact that sub-lethal genetic damage is hereditary and may be passed on to future generations of cells.  Thus, unlike cells suffering epigenetic or somatic damage, a single cancer cell has the potential of becoming a whole population of cancer cells.  Using the same logic, one might conclude that exposure to a single pathogenic bacterium may suffice to cause disease, since it can, in theory, divide until it becomes a large population of pathogenic bacteria.  However, just as the principle of threshold toxicity is fundamental to the science of toxicology, so is the principle of threshold pathogenicity (or virulence) fundamental to the sciences of bacteriology, virology, and immunology.  It takes a larger number of less virulent organisms to cause disease than it does of a more virulent organism, just as it takes more molecules (i.e., a higher dose) of a less toxic chemical to elicit a response than it does of a more toxic one.  Logically, then, there is no reason to expect cancer to behave any more “magically” than, say, diphtheria or the common cold.

However, even if the zero-threshold model of carcinogenicity were data-based and biologically plausible (and it is neither), almost half of the chemicals to which it is applied do not meet that concept's first and foremost criterion, i.e., that of genotoxicity (or, more specifically, mutagenicity).  Non-genotoxic carcinogens, which make up 40% or more of all chemical carcinogens identified to date, do not cause DNA damage, and commonly exhibit demonstrable thresholds (often quite high ones) with regard to cancer.  (Exceptions are potent promoters like dioxin and TPA that have an affinity for specific receptors.)   Examples of such epigenetic carcinogens include: arsenic, asbestos, benzene, 2,3,7,8-TCDD (“dioxin”), saccharin, phenobarbital, DDT, PCBs, certain food additives (e.g., BHA and BHT), various chlorinated solvents (including TCE, PCE, and chloroform), various hormones (e.g., estrogens and androgens), certain chemotherapeutic agents (e.g., Tamoxifen), iodine (i.e., its deficiency), some ulcer medications (e.g., omeprazole and, probably, cimetidine), thioureas and other goitrogens (as may be found in cabbage), common drinking alcohol (ethanol), and even table salt (or, rather, the sodium ion) (Williams and Weisburger, 1991).  For genotoxic carcinogens, simple, reversible DNA damage (measured as the number of DNA adducts) may well exhibit no measurable threshold, but mounting evidence suggests that the more complex processes of mutagenesis and, especially, carcinogenesis will (Williams & Weisburger, 1991, pg. 154; Cunningham et al., 1994; Pitot and Dragan, 1996).  The thresholds of very strong genotoxic carcinogens may be too low to be determined in ordinary bioassays, but they too almost surely exist because, although complete carcinogens may be able to initiate cells (i.e., cause mutations) at very low doses, they will not be able to sustain the remainder of the multi-stage process of carcinogenesis (Pitot and Dragan, 1996).  (Promotion is typically a relatively high-dose, non-genotoxic phenomenon that often involves cytotoxicity, cellular proliferation and inhibition of apoptosis or “programmed cell death”.  Since cells in the promotion stage are dependent on continual exposure to the promoting agent, promotion, unlike initiation, is reversible with the removal of exposure.)

Even if one were to accept the proposition that there is a finite probability that a single molecule of a carcinogenic chemical could cause cancer, practical thresholds would still have to exist.  At low enough concentrations, for example, the time required for cancer to develop (the so-called latency period), which is inversely related to dose, would exceed a human lifetime, i.e., the person would die of something else before the chemically-induced cancer could ever develop.  Another practical "threshold" is based on the limits imposed by population size.  At sufficiently low doses, the theoretical probability of effect would be so small (e.g., one in 10 billion) that the entire human population would (statistically speaking) be too small to express even one causally-related case of cancer in a human lifetime.  Such population-based practical thresholds will be the rule, rather than the exception, at most contaminated sites where the potentially-exposed populations number only a few thousand and the estimated (i.e., theoretical) “risks” are generally in the range of 1 in 10,000 to 1 in a million or less.  In either case, the limiting concentration (i.e., the practical threshold), if it could be determined experimentally, would be indistinguishable from a “true” biological threshold. 
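
The population-based practical threshold is simple arithmetic.  A minimal sketch, using illustrative values drawn from the ranges just cited:

```python
# Expected number of causally-related cancers = theoretical risk x exposed population.
site_population = 5_000      # "a few thousand" potentially-exposed people
theoretical_risk = 1e-6      # a typical 1-in-a-million regulatory estimate

expected_cases = theoretical_risk * site_population
print(expected_cases)        # 0.005 -- a small fraction of one case over all lifetimes
```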

If the foregoing logical arguments are not convincing enough, then one need only consider that the existence of personal thresholds is a self-evident, empirical fact.  (Not everyone who smokes gets lung cancer, just as not everyone who is exposed to influenza virus catches the flu.)  The virtual inevitability of "true" effect thresholds for carcinogens derives from the fact that the human body has multi-layered defense mechanisms against cancer formation, including: metabolic detoxification and excretion; sequestration of toxicants in depot tissues; “suicide” reaction with scavenger molecules (instead of DNA); repair of damaged DNA; apoptosis or programmed cell death; and immunologic surveillance.  These defenses are much more effective in humans than they are in rodents and other shorter-lived animals (a fact that is generally ignored by zero-threshold models).  The effect of these multiple tiers of defense mechanisms is to reduce the chances that a) DNA will be damaged in the first place, b) DNA damage, if it occurs, will be translated into a mutation, c) mutations, if they occur, will yield a living abnormal cell, and d) a cancer cell, if it arises, will multiply and establish itself as a neoplasm.  Only if all of these obstacles are overcome before the individual dies of something else is it possible for carcinogen exposure to actually result in cancer.  In other words, the same argument that explains the inevitable existence of thresholds for non-cancer adverse effects and provides the foundation for the generally accepted maxim "THE DOSE IS THE POISON" applies equally well to cancer effects.

A simple military analogy illustrates the point.  Assuming that a force of 10,000 marines would be required to take a well-defended beach in a single day, few would argue that the same objective could be won by "storming" the beach with a single marine a day for 10,000 consecutive days.  Obviously, a force of one would be too weak to overcome even the first line of defenses.  And even if the marine did, by some miracle, succeed in causing some damage, the defenders could easily repair or replace any damaged materiel in time to greet the next day’s suicidal assault.  Similarly, at sufficiently low doses, a carcinogen will be unable to cause any clinically significant damage, because either a) it is neutralized before it can do any damage at all or b) the little damage that it does do is quickly and easily repaired.

A Threshold Model for Chemical Carcinogenesis:  Interpreted within the context of the zero-threshold assumption, the term “latency” usually implies that, given enough time, a cumulative effect will be produced by a series of individually sub-threshold doses, i.e., that each and every individual daily dose, no matter how small, effectively contributes to the cumulative dose which, it is assumed, actually causes the observed effect.  However, in order to have a genuine cumulative effect, it is only logical that something must, in fact, accumulate.  Either the individual sub-threshold doses must accumulate in a tissue until threshold levels of the toxicant are reached and surpassed, or else sub-clinical effects must accumulate until clinical significance is attained (e.g., cirrhosis of the liver in alcoholics).  The first case requires bio-accumulation of the toxicant.  However, fat-soluble substances like dioxin and PCBs are the ones that most often bioaccumulate, and sequestration in a storage depot like fat actually reduces risk by keeping the potential toxicant away from its target cells.  The second implies that the individual doses exceeded the thresholds for the sub-clinical effects.  But what if the carcinogen (e.g., ethylene oxide or benzo[a]pyrene) does not bioaccumulate and the individual doses are at or below no-effect levels?  By what magic, then, would cumulative effects be produced under these most common of circumstances, i.e., in the absence of anything to accumulate?  Therefore, notwithstanding EPA’s 1986 policy statement to the contrary, the risk model used by EPA does not lead to a “plausible” upper limit to risk, because the proposed mechanisms of carcinogenesis (i.e., the assumptions) that underlie the model are not plausible, either.
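
The absence of anything to accumulate can be illustrated with a toy first-order elimination model.  The one-day half-life below is an assumption chosen purely for illustration; the point is that, for a chemical that is cleared faster than it is dosed, the body burden approaches a finite plateau rather than building up without limit.

```python
import math

HALF_LIFE_DAYS = 1.0                      # assumed half-life, for illustration only
k = math.log(2) / HALF_LIFE_DAYS          # first-order elimination rate constant
daily_dose = 1.0                          # one arbitrary sub-threshold unit per day

body_burden = 0.0
for _ in range(60):                       # 60 consecutive daily doses
    body_burden = body_burden * math.exp(-k) + daily_dose

print(round(body_burden, 6))              # ~2.0: the burden plateaus
print(1.0 / (1.0 - math.exp(-k)))         # exact ceiling: 1/(1 - e^(-k)) = 2.0
```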

Adherents to the zero-threshold model usually make no effort to address this question.  And, when faced with experimental data that appear to demonstrate the existence of thresholds for tumor formation, they typically respond by claiming that an increased frequency of cancer would, in fact, have been detected, if only a large enough population of animals could have been exposed.  This argument was even made when the ED01 study, which employed over 24,000 mice, demonstrated a relatively high apparent threshold for 2-acetylaminofluorene-induced bladder cancer.  Used in this way, the zero-threshold concept is less a scientific theory than an unshakable belief system protected by the logical impossibility of proving a negative. (In this case, the negative requiring proof is that there is NOT a number of animals so large that, no matter how small the chronic dose, an increased frequency of tumors would be detectable). 

As a result, any additional experimental effort (like the “megamouse” study already mentioned) to directly prove or disprove the zero-threshold “hypothesis” would, in this author’s opinion, be “barking up the wrong tree”.  A more fruitful course would be to devise experiments designed to directly demonstrate that traditional threshold principles adequately explain, even to the satisfaction of objective zero-threshold enthusiasts, the observed biology of cancer.  (This author would argue that this has already been done, but considers that even more convincing experiments can be performed.)

However, outside of U.S. regulatory agencies and those public health officials who have mistakenly assumed that regulatory methodologies were applicable to the prediction of actual human health risks, there has never really been any serious scientific question that thresholds do, in fact, exist for carcinogenic effects, as well as non-carcinogenic effects.  The possibility that they do not exhibit thresholds has been maintained by a combination of political pressure and the deceptive practice of plotting dose in gravimetric terms on an arithmetic scale with zero at the origin, which creates the false impression that “zero” dose is much closer than it actually is.  The existence of thresholds for carcinogens becomes inescapable when one simply converts the dose to the number of molecules and plots it on a logarithmic scale, beginning with one molecule, the lowest possible non-zero dose of any carcinogen.  (See the work of K. K. Rozman and W. J. Waddell.)  Pick any data set that you like, then plot it on the Rozman Scale, or something similar, and it will be impossible to argue that the resulting dose-response curve might plausibly be extrapolated to the origin (i.e., zero-dose/zero-effect).  What you typically get is a straight line that plummets precipitously toward its intersection with the x-axis roughly 18 orders of magnitude higher than the lowest possible dose.
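
The conversion is straightforward.  A minimal sketch, using Avogadro's number and a hypothetical 200 g/mol compound:

```python
AVOGADRO = 6.022e23   # molecules per mole

def molecules_from_dose(dose_mg: float, mol_weight: float) -> float:
    """Convert a gravimetric dose (mg) to a molecule count for a compound
    with the given molecular weight (g/mol)."""
    return (dose_mg / 1000.0) / mol_weight * AVOGADRO

# Hypothetical example: a 1 mg dose of a 200 g/mol compound.
n = molecules_from_dose(1.0, 200.0)
print(f"{n:.2e}")     # ~3.01e+18 molecules -- about 18 orders of magnitude
                      # above the one-molecule origin of the Rozman Scale
```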

The observation that cancer is primarily a disease of old age is also consistent with the view that cancer is a threshold effect.  Since virtually all of the body’s defense mechanisms decline in efficiency with advancing age, it is inevitable that our personal thresholds for disease (including cancer) will also decline with age.  (Due to biological variability in the factors affecting resistance and susceptibility, these personal thresholds for adverse health effects will be different for every member of a population.)  As an organism’s defense mechanisms become less effective with age, previously innocuous chronic exposures become increasingly capable of causing harm.  (This is especially obvious among the old, who may actually die from the complications of an infectious exposure that would have had little or no effect on a younger person.)

In other words, for any given chemical, be it carcinogen or non-carcinogen, age-specific, personal thresholds of effect will exist, reflecting the age-specific balance between the dose of the toxicant and the efficiency of defense mechanisms at the time that dose is administered.  Because sufficiently high doses of a carcinogen can overwhelm the body’s defenses even at their peak of efficiency, they can also cause cancer in relatively young animals. On the other hand, very low chronic doses can have no effect at all until the animal is sufficiently old for its defense mechanisms to have become ineffective against those lower doses.  It is, therefore, to be expected that, at lower and lower doses, tumors will become fewer in number and appear later in the animals’ lives, until finally, tumor incidence becomes essentially indistinguishable from the background tumors of old age, at doses that are themselves indistinguishable from NOAELs or thresholds of effect.    
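
This age-specific balance can be caricatured in a few lines of code.  The starting threshold and the 3%-per-year decline in defense efficiency below are invented numbers; the sketch only illustrates the qualitative behavior described above, namely that lower chronic doses cross the falling personal threshold later in life, or never.

```python
import math

def personal_threshold(age: int, t0: float = 100.0, decline: float = 0.03) -> float:
    """Threshold dose at a given age, falling as defenses lose efficiency.
    Both parameters are hypothetical, chosen for illustration."""
    return t0 * math.exp(-decline * age)

for chronic_dose in (50.0, 20.0, 5.0, 1.0):
    age_at_crossing = next(
        (a for a in range(101) if personal_threshold(a) <= chronic_dose), None
    )
    print(chronic_dose, age_at_crossing)
# 50.0 -> 24, 20.0 -> 54, 5.0 -> 100, 1.0 -> None:
# the lower the dose, the later in life (if ever) it exceeds the threshold.
```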

Thus, while regulatory risk assessors have historically considered that the long cancer latencies observed with low chronic doses of chemical carcinogens were compatible with the notion of a zero threshold, it is far more likely that they reflect just the opposite, i.e., the existence of personal thresholds that decrease with advancing age.  Childhood cancers are often explainable in terms of genetic predispositions that either reduce the number of steps needed for the formation of tumors (e.g., retinoblastoma) and/or reduce the normal age-dependent efficiency of the body’s defense mechanisms (e.g., xeroderma pigmentosum and defective DNA repair).

The major advantages of this threshold model of carcinogenesis over the zero-threshold model of carcinogenesis are that 1) it relies solely on established principles of pharmacology and toxicology without the need for counterintuitive assumptions, and 2) it is more readily testable by experiment.

The Causes of Cancer:  Almost everyone has heard, at one time or another, and in one form or another, a statement to the effect that "approximately 80% of all cancers are caused by environmental factors".  This statement, or something very much like it, was originally made by Dr. John Higginson at a 1968 International Cancer Conference in Israel.  In its original context, the adjective "environmental" was properly understood by Dr. Higginson's scientific audience to mean all factors excluding those related to heredity (Gots, 1993).  However, sometime in the late 1960s and early 1970s, the phrase "environmental factors" became translated in the media and in the public mind as "environmental chemicals".  This widespread misconception grossly distorts popular estimates of cancer risk for people in industrialized societies.

Several investigators have attempted to identify in a semi-quantitative fashion the causes of cancer in humans (Higginson and Muir, 1979; Wynder and Gori, 1977; Higginson, 1968).  The classic treatment of this subject is "The Causes of Cancer" by Doll & Peto (1981).  The findings of Doll and Peto, two of the world's leading epidemiologists, have been widely quoted in the scientific literature and still represent the best estimates available in the U.S.  According to Doll and Peto, approximately 30% of all cancer deaths are attributable to tobacco, 35% to diet, 7% to reproductive and sexual behavior, 4% to occupation, 3% to alcohol, 3% to "geophysical factors" (e.g., ionizing radiation and UV light), 2% to pollution, 1% to medicines and medical procedures, <1% each to food additives and industrial products (e.g., detergents, hair dyes, plastics, paints, polishes, solvents, etc.), and perhaps as much as 10% to infection.  Thus, perhaps 75% of all cancer (30% tobacco + 35% diet + 7% reproductive and sexual behavior + 3% alcohol) is attributable to the "lifestyle" factors of smoking, drinking, diet, and sexual behavior, and very little may be attributable to environmental pollution, as the public understands that term.  Even these modest estimates of pollution-related cancer (2%) are highly speculative, being based as they are on high-to-low dose extrapolation from occupational studies.

The Cancer "Epidemic":  Another popular misconception relates to the perception of a booming cancer "epidemic" that began with the industrial revolution and continues to grow today.  In fact, according to an update from the National Cancer Institute (1988), "the age-adjusted mortality rate for all cancers combined except lung cancer has been declining since 1950 for all individual age groups except 85 and above".  (The latter group saw a mere 0.1% increase.)  Decreases in cancer mortality during this period have been due primarily to decreases in stomach cancer (by 75%), cervical cancer (by 73%), uterine cancer (by 60%), and rectal cancer (by 65%).  Increases have been primarily from lung cancer (due mostly to smoking rather than to modern industrialization) and non-Hodgkin's lymphoma (by 100%).  The increased incidence of some cancers may be due primarily to smoking and natural dietary factors such as fat.  However, some apparent increases may actually reflect increases in registration of cases and/or improvements in diagnosis.

Since cancer is primarily a disease of old age and our population is getting older (i.e., people are living longer), it is inevitable that the incidence of cancer, in absolute terms, will increase.  However, when age-adjusted rates are used instead of raw numbers, most of the apparent increases disappear, leaving no persuasive evidence that environmental pollution has contributed significantly to human cancer rates.  Any theoretical increase in cancer “risk” that might be associated with life in modern society must be balanced against very real health benefits, including reduction of exposure to natural carcinogens in damaged crops and spoiled food, which are much more abundant in the environment than are "unnatural" ones.  More immediately apparent, however, is the much greater reduction in death due to causes other than cancer.  Given such substantial benefits, few people would suggest, for example, that all medicines and medical procedures should be banned because of speculation that they may be responsible for 1% of all cancers.  After all, without modern medicine, many of us would not live long enough to get cancer.
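
For readers unfamiliar with age adjustment, here is a minimal sketch of direct age standardization, with invented numbers.  The age-specific rates are held constant; only the age mix differs, which is enough to inflate the crude rate.

```python
# Age-specific cancer rates (cases per person-year); identical in both scenarios.
rates = {"0-44": 0.0005, "45-64": 0.005, "65+": 0.02}

standard_mix = {"0-44": 0.60, "45-64": 0.25, "65+": 0.15}  # fixed reference population
aging_mix    = {"0-44": 0.50, "45-64": 0.25, "65+": 0.25}  # an older, longer-lived population

crude = sum(rates[a] * aging_mix[a] for a in rates)
adjusted = sum(rates[a] * standard_mix[a] for a in rates)

print(crude)      # 0.0065 -- raw rate, inflated by the older age structure
print(adjusted)   # 0.0045 -- age-adjusted rate, comparable across time periods
```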

In a very real sense, cancer is as big a killer as it is today in the U.S. precisely because so many Americans do not die of something else first.  Mortality rates due to cardiovascular disease and cancer are also substantial in the developing world, but are surpassed by deaths from infectious and parasitic diseases and lower respiratory infections, respectively (WDR, 1993).  Many of the developing world's major health problems, including diarrheal diseases, pneumonia, tuberculosis, measles, malaria, and malnutrition, have been largely eliminated or controlled in the U.S. by chlorination of public drinking water, the use of common medicines and vaccines, and the use of agrochemicals to guarantee a safe, adequate food supply.  And yet, none of these invaluable contributions to public health is 100% safe.  (Nothing in this world is.)  But the benefits are real and substantial, while the proposed cancer “risks” are largely theoretical and relatively insignificant by comparison.  Because cancer is primarily a disease of old age, eliminating all cancer as a cause of death, while desirable, would actually not extend the average human lifespan by much more than a year or so.  The elimination of childhood cancers, however, could add many decades of life to individual children, and should be a national priority.

Criteria for Causation in Epidemiological Studies:  According to the Centers for Disease Control, 23.9% of all deaths in 1992 were due to malignant neoplasms (MVSR, 1995).  Thus, cancer was the second leading cause of death, after heart disease, which accounted for 33% of all deaths.  Against such high background rates, locally elevated rates of mortality due to cancer cannot be causally attributed to low-level environmental chemical exposures with any confidence at all, unless several conditions (called Hill’s Criteria for Causation) are met.  At the very least, (1) the exposure must have preceded the onset of the disease, (2) the rates must be high enough to militate against chance as the source of the observed variation, and (3) all other known causes of or contributors to the effect ("confounding factors") must be ruled out or adjusted for.  The case for causation is further strengthened if (4) the proposed connection between exposure and disease is a biologically plausible one, (5) the health effect is observed to increase with increasing exposure (i.e., exhibits a “dose-response relationship”), and (6) the observations are consistent with those made by other independent investigators under similar conditions (i.e., they are reproducible).  Considering the time and expense required to resolve all of these issues in a real-world setting, it is not surprising that epidemiological studies rarely satisfy all or even most of these criteria.  Causation is much easier to establish in the laboratory, where variables are more easily controlled.  Hence the practical necessity of risk assessors’ inordinate reliance on experimentally-derived animal data.  Of course, one is then faced with the problem of extrapolating from observed effects in animals to potential effects in humans.

Which brings us full circle.

This article appeared in the Spring 2016 print edition of Priorities magazine. 


                                                                REFERENCES    

Cunningham, M.L., Elwell, M.R., and Matthews, H.B. (1994).  Relationship of carcinogenicity and cellular proliferation induced by mutagenic non-carcinogens vs carcinogens.  Fundamental & Applied Toxicology. 23: 363-369.

Doll, R., and Peto, R. (1981).  The causes of cancer: quantitative estimates of avoidable risks of cancer in the United States today.  J. Natl. Cancer Inst. 66:1191-1308.  Also available as a hardback from Oxford University Press.

Gots, Ronald E. (1993).  Toxic Risks: Science, Regulation, and Perception.  Lewis Publishers.

Higginson, J. (1968).  Distribution of different patterns of cancer.  Israel J. Med. Sci. 4: 460.

Higginson, J. and Muir, C.S. (1979).  Environmental carcinogenesis: misconceptions and limitations to cancer control.  JNCI 63: 1291‑1298.

Meijers, J.M.M., Swaen, G.M.H., Schreiber, G.H., and Sturmans, F. (1992).  Occupational epidemiological studies in risk assessment and their relation to animal experimental data.  Regulatory Toxicology and Pharmacology 16:215.

MVSR (1995). Monthly Vital Statistics Report, Vol. 43, No. 6, (Supplement), Mar 22, 1995.

Pitot, Henry C., III and Dragan, Yvonne P. (1996).  “Chemical Carcinogenesis”.  Chapter 8 in: Casarett and Doull’s TOXICOLOGY: The Basic Science of Poisons.  (Curtis D. Klaassen, Mary O. Amdur, and John Doull, Editors.)  McGraw-Hill, pp. 201-267.

Rozman, K. K., Kerecsen, L., Viluksela, M. K., Österle, D., Deml, E., Viluksela, M., Stahl, B. U., Greim, H., and Doull, J. (1996).  A toxicologist’s view of cancer risk assessment.  Drug Metabolism Reviews, 28, 29–52.

SOT (1981).  Re-examination of the ED01 Study, Fundamental & Applied Toxicology, 1:27-128. A journal of the Society of Toxicology.

Staffa and Mehlman (1979).  Innovations in Cancer Risk Assessment (ED01 Study).  (Jeffrey A. Staffa and Myron A. Mehlman, Eds.), Pathotox Publishers, Inc., 1979.  Proceedings of a Symposium sponsored by the National Center for Toxicological Research and the U.S. Food and Drug Administration.

U.S. E.P.A. (1986).  Guidelines for carcinogen risk assessment.  Fed. Reg., 51: 33992-34006, September 24, pp. 33997-33998.

WDR (1993).  Burden of Disease.  The World Development Report of 1993.  World Bank.

Waddell, W. J. (1993).  Human risk factors to naturally occurring carcinogens: Thresholds, inhibitors and inducers.  Journal of Toxicological Science, 18, 73–82.

Waddell, W. J. (1996).  Reality versus extrapolation: An academic perspective of cancer risk regulation.  Drug Metabolism Reviews, 28, 181–195.

Waddell, W. J. (2002).  Thresholds of carcinogenicity of flavors.  Toxicological Science, 68, 275–279.

Waddell, W. J. (2003a).  Thresholds of carcinogenicity in the ED01 study.  Toxicological Science, 72, 158–163.

Waddell, W. J. (2003b).  Thresholds in chemical carcinogenesis: What are animal experiments telling us?  Toxicology and Pathology, 31, 1–3.

Waddell, W. J. (2003c).  Threshold of carcinogenicity of nitrosodiethylamine for esophageal tumors in rats.  Food & Chemical Toxicology, 41, 739–741.

Waddell, W. J. (in press).  Comparison of human exposures to selected chemicals with thresholds from NTP carcinogenicity studies in rodents.  Human Experimental Toxicology.

Williams, Gary M., and Weisburger, John H. (1991).  “Chemical Carcinogenesis”.  Chapter 5 in: Casarett and Doull’s TOXICOLOGY: The Basic Science of Poisons.  (Mary O. Amdur, John Doull, and Curtis Klaassen, Editors.)  Pergamon Press, pp. 127-200.

Wynder, E.L. and Gori, G.B. (1977).  Contribution of the environment to cancer incidence: An epidemiologic exercise.  JNCI 58: 825‑832.