BLOG: What a “Defective” Radiation-Risk Standard Teaches Us About Improving Chem Risk Assessments

If EPA’s approach to chemical assessments didn’t have significant implications for the general public, perhaps people might be more inclined to cut the agency some slack

Wall Street Journal editorial board member Holman W. Jenkins, Jr. seems to have a knack for battling bad science – especially what he perceives to be misguided reporting and alarmist stories about climate change.

In his most recent piece, Jenkins laments the fact that some activists have used faulty research to overstate the risks associated with developing potentially transformative alternative energy technologies. He cites nuclear as a prime example.

In making his case against bad climate science, however, Jenkins brought up an issue that resonates with chemical manufacturers because of its importance to the way the U.S. Environmental Protection Agency (EPA) currently conducts chemical risk assessments.

The “linear no-threshold” model of risk

According to Jenkins, the Nuclear Regulatory Commission’s “linear no-threshold” (LNT) model of radiation risk, which he says has unfairly kept nuclear power low on the alternative energy priority list, has also kept the EPA from being as accurate as it could be when conducting chemical risk assessments.

That’s because the LNT model continues to be EPA’s default approach – both for chemicals that act in a linear fashion and for those that scientific information shows do not act in a strictly linear way. In the latter case, different doses can change the effects in ways that don’t trace a straight line on a graph.

What that means is, while the LNT model isn’t entirely obsolete, neither is it the best tool for assessing chemicals whose effects aren’t directly proportional to the dose.
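The distinction above can be made concrete with a small sketch. The functions and numbers below are purely illustrative (hypothetical slope and threshold values, not EPA parameters): an LNT model assumes some added risk at every dose above zero, while a threshold model assumes no added risk until exposure exceeds some level.

```python
# Illustrative sketch only -- hypothetical slope/threshold values,
# not any agency's actual dose-response parameters.

def risk_lnt(dose, slope=0.01):
    """Linear no-threshold: risk is proportional to dose all the way to zero."""
    return slope * dose

def risk_threshold(dose, threshold=5.0, slope=0.01):
    """Threshold model: no added risk until dose exceeds the threshold."""
    return 0.0 if dose <= threshold else slope * (dose - threshold)

# At a low dose, the two models disagree sharply:
# LNT predicts a small but nonzero risk, the threshold model predicts none.
for dose in (1.0, 5.0, 10.0):
    print(f"dose={dose}: LNT={risk_lnt(dose):.3f}, threshold={risk_threshold(dose):.3f}")
```

The policy consequence is visible in the low-dose region: under LNT, every reduction in exposure appears to buy a proportional reduction in risk, whereas under a threshold model, reductions below the threshold buy nothing.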

Why getting it right matters

If EPA’s approach to chemical assessments didn’t have significant implications for the general public, perhaps people might be more inclined to cut the agency some slack. But the fact is, the assessments – when they rely on default approaches over existing data and don’t reflect the best available scientific information – can have major consequences.

The most palpable one to consumers is that many of the products they use could be removed from the marketplace, leaving them with fewer choices when searching for the product that best fits their needs.

Faulty risk assessments can also misdirect public health resources toward “protecting” us from phantom risks rather than tackling real, tangible health concerns. Instead, assessments should draw on our understanding of how chemicals actually interact with the body to determine the likelihood of harm.

In addition, assessments should take into account the presence of chemicals that are produced naturally by the body.

Animal and human data, along with basic research, should be comprehensively reviewed, evaluated, and integrated to provide an understanding of the potential hazards and risks that chemicals could pose to people at differing exposure levels.

That way, consumers can be more confident they are being protected from real risks, rather than risks that may not exist at all.

Nancy Beck, PhD, is the Senior Director of Regulatory Science Policy at the American Chemistry Council. This blog originally appeared on the ACC's blog, American Chemistry Matters.
