

Risk Assessment of Chemicals at Low Levels - New Concepts

Kevin J. Greenlees[40] and Barry Hooberman[40],
Rockville, Maryland, USA

Introduction and definition of the problem

The ever-advancing march of technology has affected the veterinary drug regulatory landscape in two critical areas. First, veterinary drugs are increasingly designed to elicit specific effects in physiological pathways. The result is very potent compounds that exert physiological and pharmacological actions through receptor pathways at very low doses, and consequently very low acceptable daily intake (ADI) (or reference dose (RfD)) values for human dietary consumption. The beta-adrenergic agonists are an example: the recent JECFA review of the beta-adrenergic agonist ractopamine HCl resulted in a recommended ADI of 0 to 1 microgram per kg bw per day and an MRL of 10 micrograms per kg of muscle, with higher values in liver and kidney (ftp://ftp.fao.org/es/esn/jecfa/jecfa62summary.pdf). Values such as these were pushing the limits of sensitivity a few years ago but are now well within the range of analytical techniques. Second, the increasing sensitivity of analytical methods has itself become a factor driving concern for very low levels of residues for which no ADI or MRL has been set. Modern analytical tools increasingly support methodologies that allow the determination of a maximum residue limit (MRL) even at these very low levels, and it is becoming impractical to set "no residue" levels based on an inability to establish the ADI, as the improved technology reveals residues that were formerly undetectable. Improvements in analytical methodology have, for example, resulted in recent concern over residues of chloramphenicol at levels below 1 microgram per kg of tissue (1 ppb) (Commission Decision 2004/25/EC). It is important to note that this change in analytical sensitivity has no impact on the inherent toxicity of the residues in edible tissues or on the safety of those residues. It does, however, have considerable impact on the perception of safety.

Existing approaches to the food safety of veterinary drug residues use traditional toxicological tools to describe the hazard and potential adverse outcome. This is accomplished through a series of standardized toxicological studies, the determination of a no observed effect level (NOEL) or no observed adverse effect level (NOAEL), and the application of appropriate safety or uncertainty factors (WHO, 1987). Exposure is generally described through standard residue evaluation studies that determine the nature and extent of residues in the edible tissues of treated animals, combined with a series of assumptions: (1) all animals are treated at the maximum approved dose and duration; (2) the edible tissue contains the maximum allowable residue concentration of the drug; (3) humans consume the entire assumed fraction of the total diet for that tissue; and (4) the residues are consumed daily for a lifetime.
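The exposure side of this calculation is often summarized as a theoretical maximum daily intake (TMDI). The sketch below illustrates assumptions (2) and (3): every edible tissue is assumed to contain residue at its MRL, and the consumer is assumed to eat the full model-diet portion of each tissue daily. The food basket weights follow the commonly used JECFA-style model diet, and the MRL values are purely illustrative assumptions, not values for any actual drug.

```python
# Illustrative TMDI sketch: sum of (MRL x daily consumption) over tissues.
# Food basket weights follow the JECFA-style model diet; MRLs are invented.

FOOD_BASKET_KG = {     # kg of edible tissue assumed eaten per person per day
    "muscle": 0.300,
    "liver": 0.100,
    "kidney": 0.050,
    "fat": 0.050,
}

def tmdi_ug_per_day(mrl_ug_per_kg: dict) -> float:
    """Micrograms of residue per person per day, assuming every tissue
    is at its MRL and the whole model-diet portion is consumed daily."""
    return sum(mrl_ug_per_kg[t] * kg for t, kg in FOOD_BASKET_KG.items())

# Hypothetical MRLs (micrograms per kg of tissue)
mrls = {"muscle": 10.0, "liver": 40.0, "kidney": 90.0, "fat": 10.0}
intake = tmdi_ug_per_day(mrls)   # 3.0 + 4.0 + 4.5 + 0.5 = 12.0 ug/day
```

The resulting intake would then be compared against the ADI multiplied by the standard 60 kg body weight to confirm that the MRLs are consistent with the ADI.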

One result of the traditional empirical approach to a safety standard is that it becomes very difficult to change the kinds of studies, and the assumptions, upon which the safety standard is based. It is very much like a house of cards, where each successive step is predicated on the models and assumptions applied to the previous step. This is particularly true when changes are made at multiple levels. How should the scientifically valid data of a negative "Big Blue" assay for carcinogenicity be integrated into the determination of an ADI? If it is determined that the human consumer is only rarely exposed to the drug residue, how should this affect an ADI otherwise based on the assumption of daily exposure for a lifetime? In the context of the current meeting, how can safety be assessed when the data are insufficient to support the calculation of an ADI? The existing approach to safety, while very robust when intact, is relatively inflexible in the face of changes to its basic assumptions.

Traditional approaches

Determining safety - an empirical approach. Traditional toxicology study results are based entirely on the observed data, with no extrapolation beyond them. The NOEL or NOAEL is a dose actually administered to the toxicological species during the toxicity study. Standardized safety or uncertainty factors are then applied to address individual variability, animal-to-human extrapolation, and other areas of uncertainty. Safety to the human consumer is thus empirically defined by the standard battery of tests, safety factors, and assumptions. Within this empirical definition, safety is bounded by the acceptable daily intake[41] (WHO, 1987) and by acceptable residue levels expressed as the maximum residue limit[42] (WHO, 1988). An empirical definition of safety such as this generally refers to a qualitative safety standard, such as simply "safe" or the "reasonable certainty of no harm" standard set by US regulatory agencies (FFDCA, 2003). A quantified level of risk is not generally associated with this standard for safety.
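Numerically, the empirical standard reduces to a simple division of the NOAEL by the combined safety factor. The sketch below assumes the conventional 10 x 10 combination of interspecies and intraspecies factors described in WHO (1987); the NOAEL value itself is invented for illustration.

```python
# Sketch of the empirical ADI calculation: NOAEL / combined safety factor.
# The 10 x 10 default factors follow common WHO practice; the NOAEL is invented.

def adi_mg_per_kg_bw(noael_mg_per_kg_bw: float,
                     interspecies_uf: float = 10.0,
                     intraspecies_uf: float = 10.0) -> float:
    """ADI = NOAEL divided by the product of the uncertainty factors."""
    return noael_mg_per_kg_bw / (interspecies_uf * intraspecies_uf)

# Hypothetical NOAEL of 5.0 mg/kg bw/day from the pivotal study
adi = adi_mg_per_kg_bw(5.0)   # 5.0 / 100 = 0.05 mg/kg bw/day
```

Additional factors (for example, for a severe endpoint or a data gap) would simply multiply into the denominator.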

New modifications on the traditional approach

Toxicity testing - Methodology/incorporation of mechanism of action

New advances in toxicology can be incorporated into an empirical evaluation of safety as long as they remain within the bounds of the basic assumptions. The local lymph node assay, for example, was recently accepted as a substitute for the traditional guinea pig assay in testing for allergenic potential. This assay reduces and refines animal use, yet permits the development of dose-response data (ICCVAM, 2003). An alternative test such as this lends itself very well to the traditional empirical approach to safety and the establishment of an ADI. The only adjustment might be determining the appropriate safety factor (to address extrapolation from the animal model to humans) to be used in establishing the ADI. Decisions on the appropriate safety factor to be used in a regulatory environment require interaction between the scientific risk assessors and the risk managers.

The Big Blue Mouse is a transgenic mouse model that has been used to investigate chemicals for both genetic toxicity and carcinogenicity. The TSG-p53/Big Blue strain has the LacI transgene integrated into every cell and, in addition, is hemizygous for a knockout of the endogenous Trp53 gene (http://www.taconic.com/anmodels/P53BB-TT.htm). When used as a model for genetic toxicity, the information derived from this animal model can be used in the same way as that from more traditional models, such as the Ames Salmonella reverse mutation assay and the rat bone-marrow micronucleus assay: as a predictor of the chemical's potential to cause a deleterious effect on the genetic material. When used as a carcinogenicity assay, the model can provide useful information about the mechanism of carcinogenicity, which may bear on the mode of action or on the relevance of the findings to human carcinogenesis (NTP, 2001).

New modifications on empirical approach - Improved data analysis

The use of mathematical and statistical approaches for hazard characterization is proving increasingly useful. A recent review of the available approaches notes that no single method is suitable for all assessments (Edler, et al., 2002). One approach that has gained favor in recent years is the use of statistical estimates based on all of the available data in a dose-response series to determine the point of departure from the background response: the benchmark dose (BMD), a statistical estimate analogous to the NOEL (Crump, 1984; Gaylor, et al., 1998; Edler, et al., 2002).

Figure 1. Benchmark Dose vs Benchmark Lower Bound vs NOEL

The benchmark dose is based on modeling all of the available dose-response data and generally estimates the dose producing an excess risk of 10% over background for the response of interest. While different levels of excess risk may be used, this level is presumed to be at or near the limit of sensitivity for most cancer, and some non-cancer, bioassays. The benchmark dose lower limit (BMDL) increases the confidence of the estimate by taking into account the variability of the data; it is typically the lower bound of a 95% confidence limit on the benchmark dose (Gaylor and Aylward, 2004).
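A minimal sketch of the BMD/BMDL calculation for quantal data is given below. It assumes a one-parameter "quantal linear" model, P(d) = p0 + (1 - p0)(1 - exp(-b*d)), with the background rate p0 fixed at the observed control rate, and it approximates the BMDL by a one-sided 95% profile-likelihood bound on the slope. The doses and incidences are invented; real practice (for example, the USEPA BMDS software) fits a suite of richer models and profiles all parameters.

```python
# Illustrative BMD/BMDL sketch for quantal bioassay data.
# Model: P(d) = p0 + (1 - p0) * (1 - exp(-b * d)), p0 fixed at control rate.
import math

doses = [0.0, 10.0, 50.0, 250.0]   # mg/kg bw/day (invented)
n     = [50, 50, 50, 50]           # animals per dose group
x     = [2, 5, 14, 30]             # animals responding
p0 = x[0] / n[0]                   # background response rate

def loglik(b: float) -> float:
    """Binomial log-likelihood of slope b over all dose groups."""
    ll = 0.0
    for d, ni, xi in zip(doses, n, x):
        p = p0 + (1.0 - p0) * (1.0 - math.exp(-b * d))
        p = min(max(p, 1e-12), 1.0 - 1e-12)    # guard the logs
        ll += xi * math.log(p) + (ni - xi) * math.log(1.0 - p)
    return ll

# Grid search for the MLE of b, then the one-sided 95% profile bound:
# the log-likelihood may drop by chi2(1, 0.90)/2 = 1.3528 from its maximum.
grid = [i * 1e-5 for i in range(1, 5001)]      # b in (0, 0.05]
b_hat = max(grid, key=loglik)
cutoff = loglik(b_hat) - 1.3528
b_upper = max(b for b in grid if loglik(b) >= cutoff)

BMR = 0.10                                     # 10% extra risk over background
bmd  = -math.log(1.0 - BMR) / b_hat            # central estimate of the dose
bmdl = -math.log(1.0 - BMR) / b_upper          # steeper slope -> lower dose
```

Because extra risk in this model is 1 - exp(-b*d), the dose at 10% extra risk has the closed form -ln(0.9)/b, so only the slope needs to be estimated.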

The BMD/BMDL approach is generally used for risk assessments presented in the USEPA IRIS database (Edler, 2002; IRIS, 2004) for non-cancer endpoints in the calculation of a reference dose[43]. A significant advantage of the BMD/BMDL approach is its ability to use all of the dose-response data gathered for the veterinary drug (Gaylor, et al., 1998; Gephart, et al., 2001). Evaluation of this approach has identified the need for increased interaction between the risk assessor and risk manager to assure proper application of the technique (Gephart, et al., 2001).

The BMD/BMDL approach is just one of a number of approaches that have been developed to refine the ability to characterize the toxicological hazard of a veterinary drug (Edler, et al., 2002).

Improved understanding of the mechanisms of toxicity, the use of biologically based models, and other tools may help to extend the dose-response curve to lower levels of exposure and thereby address low-level veterinary residues. Other approaches, such as sensitive biomarkers (biochemical changes in response to chemical exposure), are being developed (Anderson and Barton, 1998). Changes in biomarkers may occur at exposures well below those required to cause the changes observed in traditional toxicological studies; as a result, biomarkers may be particularly useful in characterizing the response to very low residues of veterinary drugs. Implementing these approaches within the existing empirical approach to the safety of veterinary drug residues in food requires close interaction between the risk manager and the risk assessor to assure that the strengths and weaknesses of each approach, and any impact on the assumptions underlying the conclusion of safety, are clearly understood.

Alternate risk-based approaches

Moving from an empirical definition of safety. It is becoming increasingly possible to estimate the risk associated with human exposure at the ADI established by the traditional safety testing paradigm. For example, the BMD/BMDL approach has been shown to yield values that are not very different from those of the traditional NOEL/NOAEL approach (Edler, 2002). Recently, attempts have been made to use the BMD and BMDL to estimate the risk to the exposed human population at a traditionally established reference dose calculated from a NOAEL (Gaylor and Kodell, 2002). These authors show that a reference dose (generally equivalent to an ADI used for dietary exposure) based on a NOAEL for human reproductive toxicity and an uncertainty factor of 10 to account for human variability carries an increased risk over background of between 1 in 600,000 and 1 in 20,000, depending on the toxicological endpoint. Approaches such as these help to define the level of risk currently found acceptable when determining the safety of veterinary drug residues.
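The flavor of such an estimate can be conveyed with a back-of-the-envelope sketch: if individual human thresholds are taken to be lognormally distributed and the 10-fold intraspecies factor places the reference dose one log10 unit below the median human threshold, the residual risk is the fraction of the population whose threshold falls below the RfD. The standard deviation used below is an illustrative assumption, not a value from Gaylor and Kodell (2002).

```python
# Illustrative residual-risk calculation at a reference dose, assuming
# a lognormal distribution of individual human thresholds. The log10
# standard deviation (sigma) is an invented value for illustration.
import math

def risk_at_rfd(log10_uf: float = 1.0, sigma_log10: float = 0.25) -> float:
    """Fraction of the population with threshold below the RfD:
    Phi(-z), where z is the number of standard deviations the 10-fold
    uncertainty factor spans on the log10 threshold distribution."""
    z = log10_uf / sigma_log10
    return 0.5 * math.erfc(z / math.sqrt(2.0))   # standard normal Phi(-z)

risk = risk_at_rfd()   # roughly 3e-5, i.e. about 1 in 30,000
```

With this particular sigma the residual risk lands inside the 1 in 600,000 to 1 in 20,000 band cited above; a larger sigma (more human variability) pushes the risk higher.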

Determining the dietary exposure resulting in a certain level of risk. A different approach to the data is used by the USFDA for carcinogenic residues of veterinary drugs. Dose-response data are collected on tumor development using the traditional chronic rodent cancer bioassay, and a linear extrapolation is made to the dose resulting in a one-in-a-million risk of increased tumor development in the test animal. This one-in-a-million risk value can be used much like an ADI in establishing safety for human consumption of animals or animal products containing the residue (Gaylor, et al., 1997; Edler, 2002). Similar approaches are also possible for non-cancer endpoints if coupled with a BMD or BMDL. It has been shown that it is possible to establish the desired reference dose (or ADI) for a given level of risk - for example, 1 in 10,000 or 1 in 100,000 (Gaylor and Kodell, 2002). Safety to the human consumer is now defined by a certain risk value to the laboratory animal species as derived from a standardized animal model. Human risk is not identified, although it is thought to be at least as low as the risk to the animal. Consistent with this approach is the concept of thresholds of toxicological concern (Gaylor, et al., 1997; Cheeseman, et al., 1999; Kroes and Kozianowski, 2002; Renwick, 2004). Based on structural information on the chemical of concern, combinations of metabolism and toxicity data from compounds within the same structural class, and information about exposure levels, this approach determines safety based on whether a given substance is above or below a threshold dietary intake. The approach is used by the JECFA and by the US Food and Drug Administration for some food additives. If applied to low-level veterinary residues, it offers the considerable advantage of conserving resources that need not be expended evaluating compounds that fall below the threshold of concern. The approach could establish clear thresholds for dietary exposure below which veterinary residues would be considered safe. However, establishing it requires considerable scientific data, considerable interaction between the risk assessor and risk manager, and acceptance by the affected public.
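The linear extrapolation step described above can be sketched in a few lines: starting from a point of departure (here a hypothetical BMDL at 10% extra risk), assume risk falls off linearly through zero dose and solve for the dose giving a one-in-a-million extra risk. All numerical values are illustrative.

```python
# Sketch of linear low-dose extrapolation from a point of departure.
# The BMDL value is invented; real assessments take it from the fitted
# dose-response model for the chronic rodent bioassay.

def dose_at_risk(bmdl: float, bmr: float, target_risk: float) -> float:
    """Assume risk(d) = (bmr / bmdl) * d below the point of departure,
    and solve for the dose d giving the target extra risk."""
    return bmdl * (target_risk / bmr)

# e.g. a hypothetical BMDL10 of 2.0 mg/kg bw/day in the test species
d_one_in_a_million = dose_at_risk(2.0, 0.10, 1e-6)   # 2e-5 mg/kg bw/day
```

Because the extrapolation is linear through the origin, halving the target risk simply halves the permitted dose, which is why this approach is considered conservative at low doses.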

Determining the risk at a level of dietary exposure. Unlike the threshold of toxicological concern discussed above, it is possible to determine the actual toxicological risk posed by a given dietary exposure to a veterinary drug residue, and then determine whether that risk is sufficiently low for consumption of the residue to be considered safe. Based on data on inter-individual variability, it is possible to estimate a risk value for reference doses derived from a benchmark dose (Gaylor and Kodell, 2002). The approach as proposed still requires uncertainty factors to address the extrapolation from the animal model to humans. Further refinement may be possible with better knowledge of comparative pharmacokinetics and toxicokinetics and with tools such as physiologically based pharmacokinetic models. For non-mutagenic carcinogens, various approaches have been proposed to estimate the actual risk to the exposed population at environmental concentrations of the chemical (Edler, 2002; USEPA, 2003). These approaches require an understanding of the possible mechanism(s) of carcinogenesis, the metabolism of the compound, and the likely characteristics of the dose-response relationship at low doses.

These approaches allow the risk assessor to quantitatively establish the toxicological risk of the veterinary drug residue in the human diet. Overall risk is a function of the hazard, or adverse outcome, and of the exposure to that hazard. It thus becomes possible to provide a complete risk estimate for veterinary drug residues in food as a function of the likelihood of the hazard or adverse outcome and the likelihood of human dietary exposure. Safety is then no longer simply a result of the traditional testing paradigm, but the result of a risk management determination based on the quantified level of risk identified in the risk assessment. The risk manager determines whether the risk is acceptable and communicates the decision, and its basis, to the consumer.

Risk assessment and risk management

Improved risk assessment tools, whether for the assessment of toxicological hazard or for the analysis of residues in tissue, are only one part of the necessary equation. It is even more important to determine what to actually do with the results of these new tools. How the results of a risk assessment are applied is the purview of the risk manager.

The role of the risk manager has also become increasingly complex. The risk manager is expected to consider all of the available scientific information presented in the risk assessment and to incorporate a myriad of other factors in determining the acceptability of the proposed risk (Omenn, et al., 1997). The Codex Alimentarius serves the risk management role internationally, frequently relying on the JECFA committees to provide the risk assessment. As the safety assessment for residues of veterinary drugs moves from an empirical standard of safety to a quantitative estimate of toxicological risk, there is an imperative for increased communication between the risk manager and risk assessor at all stages of the process. The risk manager must define the problem and put it into context for the risk assessor. Boundaries on the assessment, such as the appropriate population at risk, require input from both the risk manager and the risk assessor. A close liaison is needed between the two to develop the appropriate assumptions for the risk assessment and to assure that its results will address the problem as identified by the risk manager. The risk assessor, in turn, must clearly communicate the hazards, risks, and associated uncertainties to the risk manager.

In addition, there is a need to consider genuine differences in national and regional perceptions of risk and of its acceptability. Some differences are based on consumption patterns: a compound that is extensively excreted in milk may be seen as posing little risk to a population that consumes little milk or dairy products, yet may not be acceptable in a region of heavy dairy consumption. Other differences may be based on social or economic considerations, all of which should be weighed by the risk manager. Without careful efforts at international harmonization, these differences are likely to have a significant impact on global trade.

References

Anderson, ME, and HA Barton. 1998. The use of biochemical and molecular parameters to estimate dose-response relationships at low levels of exposure. Environ. Health Perspect. 106(Suppl 1):349-355.

Barnes, DG. 1988. Reference dose (RfD): Description and use in health risk assessments. Reg. Toxicol. and Pharmacol. 8:471-486.

Cheeseman, MA, EJ Machuga, and AB Bailey. 1999. A tiered approach to threshold of regulation. Food and Chem. Toxicol. 37:387-412.

Commission Decision 2004/25/EC. Official Journal of the European Union. L 6/38 10.1.2004.

Crump, KS. 1984. A new method for determining allowable daily intakes. Fund. Appl. Toxicol. 4:854-871.

Edler, L, K Poirier, M Dourson, J Kleiner, B Mileson, H Nordmann, A Renwick, W Slob, K Walton, and G Würtzen. 2002. Food and Chem. Toxicol. 40:283-326.

WHO. 1987. Environmental Health Criteria 70. WHO, Geneva.

FFDCA. 2003. US Federal Food Drug and Cosmetic Act. 21 USC 301, et seq

Food and Agriculture Organization of the United Nations and World Health Organization. 2004. Summary evaluations performed by the Joint FAO/WHO Expert Committee on Food Additives (JECFA 1956-2003). Internet Edition. ILSI Press, Washington, DC. http://jecfa.ilsi.org.

Gaylor, DW and LL Aylward. 2004. An evaluation of benchmark dose methodology for non-cancer continuous-data health effects in animals due to exposure to dioxin (TCDD). Reg. Toxicol. and Pharmacol. (in press).

Gaylor DW and RL Kodell. 2002. A procedure for developing risk-based reference doses. Reg. Toxicol. and Pharmacol. 35:137-141.

Gaylor, D, L Ryan, D Krewski, and Y Zhu. 1998. Procedures for calculating benchmark doses for health risk assessment. Reg. Toxicol. and Pharmacol. 28:150-164

Gaylor, DW, JA Axelrad, RP Brown, JA Cavagnaro, WH Cyr, KL Hulebak, RJ Lorentzen, MA Miller, LT Mulligan, and BA Schwetz. 1997. Health risk assessment practices in the U.S. Food and Drug Administration. Reg. Toxicol. and Pharmacol. 26:307-321.

Gephart, LA, WF Salminen, MJ Nicolich, and M Pelekis. 2001. Evaluation of subchronic toxicity data using the benchmark dose approach. Reg. Toxicol. and Pharmacol. 33:37-59.

ICCVAM Biennial Progress Report. 2003. Interagency Coordinating Committee on the Validation of Alternative Methods (ICCVAM) and the National Toxicology Program (NTP) Interagency Center for the Evaluation of Alternative Toxicological Methods (NICEATM). NIEHS.

IRIS. US Environmental Protection Agency Integrated Risk Information System. 2004. http://www.epa.gov/iris/

Kroes, R and G Kozianowski. 2002. Threshold of toxicological concern (TTC) in food safety assessment. Toxicology Letters 127:43-46.

NTP 2001. Current Directions and Evolving Strategies. National Toxicology Program. US Department of Health and Human Services.

Omenn, GS, Kessler, AC, Anderson, NT, Chieu, PY, Doull, J, Goldstein, B, Lederberg, J, McGuire, SM, Rall, D, Weldon, VV, and Charnley, G. 1997. Risk assessment and risk management in regulatory decision-making. The Presidential/Congressional Commission on Risk Assessment and Risk Management. Final Report. Volume 2.

Renwick, AG, 2004. Toxicological databases and the concept of thresholds of toxicological concern as used by the JECFA for the safety evaluation of flavouring agents. Toxicology Letters 149:223-224.

US Environmental Protection Agency (USEPA). 2003. Draft final guidelines for carcinogen risk assessment. EPA/630/P-03/001A.

WHO. 1988. Technical Report Series 763. WHO, Geneva.


[40] The opinions and information in this article are those of the authors, and do not represent the views and/or policies of the U.S. Food and Drug Administration.
[41] ADI (Acceptable Daily Intake): An estimate of the amount of a substance in food or drinking-water, expressed on a body-weight basis, that can be ingested daily over a lifetime without appreciable risk (standard human = 60 kg). The ADI is listed in units of mg per kg of body weight (FAO/WHO, 2004).
[42] MRL (Maximum Residue Limit): The maximum concentration of residue resulting from the use of a veterinary drug (expressed in mg/kg or µg/kg on a fresh weight basis) that is acceptable in or on a food. It is based on the type and amount of residue considered to be without toxicological hazard for human health as expressed by the Acceptable Daily Intake (ADI), or on the basis of a temporary ADI that utilizes an additional safety factor. It also takes into account other relevant public health risks as well as food technological aspects and estimated food intakes (FAO/WHO, 2004).
[43] A reference dose (RfD) is defined as the NOAEL divided by an uncertainty factor (typically 100) and a modifying factor. The RfD is similar to an ADI in assuming that doses below it are not likely to pose a hazard, but it explicitly makes no assumptions regarding the absolute safety of concentrations above or below the RfD, and is considered more appropriate than the ADI for evaluating safety to a population (Barnes, 1988).
