

5. RISK ASSESSMENT OF CHEMICAL AGENTS IN FOOD

5.1 Introduction

For the purpose of this chapter, only intentionally introduced chemical agents, inadvertent contaminants and naturally occurring toxicants have been considered. This includes food additives, residues of pesticides and other agricultural chemicals, residues from veterinary drugs, chemical contaminants from any source, and natural toxins, such as mycotoxins and ciguatoxin. Microbial toxins, such as Clostridium botulinum toxin, are not included.

Risk assessment is seen primarily as a method of systematically organizing scientific and technical information, and its associated uncertainties, to answer specific questions about health risks. It requires evaluation of relevant information, and selection of the models to be used in drawing inferences from that information. Further, it requires explicit recognition of uncertainties and, when appropriate, acknowledgement that alternative interpretations of the available data may be scientifically plausible.

The steps involved in risk assessment of chemical hazards have been discussed at greater length elsewhere (NRC, 1983, 1994). Risk assessment is subject to uncertainties related to data and to the selection of the appropriate model. Uncertainties are discussed in further detail later in this report. However, it should be pointed out at this juncture that data uncertainties arise both from limitations in the amount of data available and from the evaluation and interpretation of the data actually obtained from epidemiological and toxicological studies. Model uncertainties arise whenever attempts are made to use data concerning the occurrence of certain phenomena obtained under one set of conditions to make estimations or predictions about phenomena likely to occur under other sets of conditions for which data are not available.

The process of risk assessment requires adequate toxicological information, preferably based on standardized testing protocols that have been accepted by the international community. In addition, a credible risk assessment requires at least a minimum data set of the kind already defined by bodies such as JECFA, JMPR, EPA, FDA and OECD.

Depending upon the chemical, empirically-based answers to toxicological questions may be available for the purpose of risk assessment. However, in no case will the scientific information be comprehensive enough to provide a high degree of certainty. When several sets of animal toxicology data are available, there are usually insufficient data to identify the set (i.e. species, strain, toxicity end-point) that best predicts human response. As a result, it has become traditional to rely on toxic responses which occur at the lowest dose in a study of acceptable quality.

Minimum data requirements for risk assessment are difficult to specify in advance. Hazard, dose-response, and exposure data bases for substances that may become subjects for risk assessment vary enormously in size, scope, and quality. In some instances, the data may be very limited and practically impossible to obtain. The latter is especially the case for contaminants and naturally occurring substances. When a risk assessment is necessary, risk assessors are required to make the best use of whatever information is available, and to deal explicitly with data uncertainties. In cases where this is not possible, risk assessors should provide the reasons for such judgements. Perhaps the appropriate option is to leave the question of minimum data requirements open to such case-by-case judgements.

Other issues related to the process of risk assessment include the use of default assumptions to fill knowledge and data gaps. This provides the advantage of ensuring consistency in approach and minimizing or eliminating case-by-case manipulations of the conduct of risk assessment to meet predetermined risk management objectives. One major disadvantage, however, is the potential for displacement of scientific judgement by rigid guidelines. One intermediate approach is to allow risk assessors to replace defaults in specific cases of chemicals for which relevant scientific data are available to support alternatives. Specific and explicit justification for any such departures should be provided.

5.2 Hazard identification

The goal of hazard identification is to identify potential adverse health effects in humans associated with exposure to a chemical, the likelihood of such effects occurring and the certainty or uncertainty associated with such effects. In this context, the hazard identification does not imply the quantitative extrapolation of risk for exposed human populations as in the dose-response and risk characterization step, but rather an evaluation of the qualitative likelihood of the effect occurring in exposed human populations.

Because data are often insufficient, hazard identification is best conducted using the weight-of-evidence approach. The approach requires an adequate and documented review of relevant scientific information obtained from appropriate databases, peer-reviewed literature and, if available, unpublished studies from other sources, such as industry. This approach places emphasis on studies in the following order: epidemiological studies, animal toxicological studies, in vitro assays and, lastly, quantitative structure-activity relationships.

5.2.1 Epidemiological studies

Where data from positive epidemiological studies are available, their use in the risk assessment process is encouraged. Data derived from human clinical studies, where they are available, should also be utilized in the hazard identification step, as well as perhaps other steps. However, clinical and epidemiological data are unlikely to be available for most chemicals. In addition, negative epidemiological data may be difficult to interpret for risk assessment purposes because the statistical power of most epidemiological studies is inadequate to detect effects at relatively low levels in human populations. Finally, although the value of epidemiological data is recognized, positive data indicate that an adverse effect has already occurred; thus, risk management decisions should not be delayed pending the development of epidemiological studies. Epidemiological studies from which data for risk assessment are derived should be based on recognized standardized protocols.

During the design of epidemiological studies, or where positive epidemiological data are available, consideration must be given to variability in human susceptibility: genetic predisposition, age-related and gender-related susceptibility, and the impact of factors such as socio-economic status, nutritional status, and other possible confounders.

Because of the cost of epidemiological studies and the paucity of data such studies provide, hazard identification will ordinarily need to rely on data derived from animal and in vitro studies.

5.2.2 Animal studies

Most toxicological data for risk assessment are derived from animal studies and it is, therefore, essential that these studies be conducted following widely accepted, standardized testing protocols. While many such protocols exist (e.g. OECD, EPA), guidance is not available concerning the selection and use of specific protocols for food safety risk assessment. Regardless of which protocols are used, all studies should follow Good Laboratory Practices (GLP) and standardized quality assurance/quality control (QA/QC) procedures.

Adequate minimum data sets for food safety risk assessment have generally been defined and should be used. These include specification of the number of species/strains/stocks, use of more than one sex, appropriate selection of doses (see below), route of exposure, and adequate sample size. In general, the source of data (published studies, unpublished studies, corporate data, etc.) is not a point of great concern as long as the studies are transparent and can be demonstrated to conform to GLP and QA/QC procedures.

Animal data from long-term (chronic) studies are critical, and should address significant toxicological effects/end-points, including cancer, reproductive/developmental effects, neurotoxic effects, immunotoxic effects, and others. Animal data from short-term (acute) toxicity studies will also be useful and should be generated. Animal studies should facilitate identification of the range of toxicological effects/end-points (including those listed). Data on the relationship between toxicity and essentiality should be gathered for those substances which are required to meet nutritional requirements, e.g. copper, zinc, and iron. Animal toxicological studies should be designed to identify a no-observed-effect level (NOEL), a no-observed-adverse-effect level (NOAEL) or a benchmark dose; that is, doses should be selected to identify these end-points. Doses should also be selected at levels high enough to reduce the likelihood of false negatives as much as possible, while considering issues such as metabolic saturation, cytotoxic- and mitogen-induced cell proliferation, etc. Presently, the selection of the highest dose for chronic rodent bioassays is being debated. Discussion is focused on the selection, use, and interpretation of data from studies which employ the Maximum Tolerated Dose (MTD). Mid-range doses should be selected to provide relevant information on the shape of the dose-response curve.

Animal studies should, where possible, identify not only potential adverse effects for human health but also provide information on the relevance of these effects for human risk. Information on relevance may be provided by studies that characterize the mechanism of action, the relationship between administered and delivered dose, and by pharmacokinetic and pharmacodynamic studies.

Mechanistic data may be supplemented by data from in vitro studies, such as information on genotoxicity derived from reversion assays or other similar assays. These studies should be conducted following GLP, and other widely accepted protocols. However, data from in vitro studies should not be used as the sole source of information to predict human risk.

The results of in vivo and in vitro studies can enhance the understanding of mechanisms and pharmacokinetics/dynamics. However, such information may not be available in many cases and the risk assessment process should not be delayed pending development of mechanistic and pharmacokinetic/dynamic data.

Information on administered versus delivered dose will be useful as part of the evaluation of mechanism and pharmacokinetic data. The assessment should also consider information on chemical speciation (administered dose) and metabolite toxicity (delivered dose). As part of this consideration, the issue of chemical bioavailability should be addressed (bioavailability of parent compound, metabolites, etc.) with specific consideration given to absorption across the appropriate membrane (i.e., the gut), transport to systemic circulation, and, ultimately, to the target organ.

Finally, structure-activity relationships may be useful to increase the weight-of-evidence for human health hazards identification. Where classes of compounds are of interest (e.g. polycyclic aromatic hydrocarbons, polychlorinated biphenyls and dioxins), and where adequate toxicological data are available on one or more members of the class, a toxic equivalence approach may be useful to predict the human health hazard associated with exposure to other members of the class.

5.3 Hazard characterization

The chemicals in food being considered include food additives, pesticides, veterinary drugs and contaminants. They are often present in food at low levels - typically at a part per million or less. However, to obtain adequate sensitivity, animal toxicological studies must be conducted at high levels which may exceed, depending on the intrinsic toxicity of the chemical, several thousand parts per million. The significance that the adverse effects detected in high-dose animal studies have for low-dose human exposures is the major question posed in the hazard characterization of chemicals.

5.3.1 Dose-response extrapolation

In order to be compared to human exposure levels, animal data need to be extrapolated to doses much lower than those studied. This extrapolation procedure is uncertain both qualitatively and quantitatively. The nature of the hazard may change with dose or may disappear entirely. The selected dose-response model may be incorrect even if the nature of the response in animals and humans is qualitatively the same. Not only is the estimation of equivalent doses in animals and humans a problem in comparative pharmacokinetics, but so is the change in metabolism with dose. The metabolism of chemicals at high and low doses may differ. For example, high doses often overwhelm normal detoxification/metabolism pathways and produce adverse effects that would not occur at lower levels. High doses can induce higher rates of enzyme production, physiological changes and dose-related pathological changes. The toxicologist must consider the potential impact of these and other possible dose-related changes on the extrapolation of the adverse effect to lower doses.

5.3.2 Dose-scaling

The determination of toxicologically equivalent doses in animals and humans is a debatable issue. JECFA and JMPR have typically used mg per kg of body weight for interspecies scaling. Recently, regulatory authorities in the USA have proposed scaling doses in proportion to body weight raised to the 3/4 power (mg per kg^3/4 of body weight), which is based on more recent pharmacokinetic information. The ideal scaling factor would be obtained by measuring tissue concentrations and clearance rates in the target organ of the animal and human; blood levels would approximate this ideal. Generic interspecies scaling factors should be recognized as default values that are used in the absence of better information, which is seldom available.
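
To illustrate the practical difference between the two conventions, the following minimal sketch converts a hypothetical rat NOAEL into a human-equivalent dose under simple body-weight scaling and under body-weight^3/4 scaling. The dose and body weights are illustrative assumptions, not values from any particular assessment.

```python
# Illustrative comparison of interspecies dose-scaling conventions.
# All values are hypothetical and for demonstration only.

def human_equivalent_dose(animal_dose_mg_per_kg, animal_bw_kg, human_bw_kg=60.0, exponent=1.0):
    """Scale an animal dose (mg/kg bw/day) to a human-equivalent dose.

    exponent = 1.0  -> simple body-weight (mg/kg) scaling, as used by JECFA and JMPR
    exponent = 0.75 -> body-weight^3/4 scaling, as proposed by US regulatory authorities
    """
    # Doses are taken as equivalent when total dose / (body weight)^exponent is equal,
    # which reduces to multiplying the per-kg dose by (animal bw / human bw)^(1 - exponent).
    return animal_dose_mg_per_kg * (animal_bw_kg / human_bw_kg) ** (1.0 - exponent)

rat_noael = 10.0   # mg/kg bw/day, hypothetical rat NOAEL
rat_bw = 0.35      # kg, assumed adult rat body weight

print(human_equivalent_dose(rat_noael, rat_bw, exponent=1.0))    # 10.0 mg/kg bw/day
print(human_equivalent_dose(rat_noael, rat_bw, exponent=0.75))   # about 2.8 mg/kg bw/day
```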

5.3.3 Genotoxic and non-genotoxic carcinogens

Traditionally, toxicologists have accepted the existence of thresholds for adverse effects with the exception of carcinogenicity. The tradition extends from the early 1940s when it became evident that the initiating event in carcinogenesis could be a somatic mutation. In theory, a few molecules, even a single molecule, could cause a mutation that could persist in the animal or human and ultimately be expressed as a tumour. Theoretically, there may be no safe dose for a carcinogen that acts through this mechanism.

In recent years it has been possible to discriminate between carcinogens and to identify a category of non-genotoxic carcinogens that are themselves not capable of producing mutations but act at later stages of the cancer process on cells already "initiated" by other carcinogens or other processes, e.g. radiation. In contrast, other carcinogens induce genetic alterations in somatic cells with activation of oncogenes and/or inactivation of cancer suppressor genes. Thus, genotoxic carcinogens are defined as chemicals which can cause genetic alterations in target cells, either directly or indirectly. While the major target of genotoxic carcinogens is genetic material, non-genotoxic carcinogens act at extra-genetic sites, presumably leading to enhanced cell proliferation and/or sustained hyperfunction/dysfunction at the target sites. Regarding species differences in carcinogenic effects, a large body of data has been reported indicating that quantitative differences exist for both genotoxic and non-genotoxic carcinogens. In addition, certain non-genotoxic carcinogens, called rodent-specific carcinogens, can be cited as examples of substances for which there are qualitative differences in the ultimate carcinogenic effects. In contrast, no such clear-cut examples have been reported for genotoxic carcinogens.

Toxicologists and geneticists have devised tests to detect chemicals capable of causing mutations in DNA; the Ames test is a well known example. Several such tests, both in vitro and in vivo, are used, typically in the form of a battery, to determine the mutagenic potential of chemicals. While the exact tests to include in such a battery may be debatable, in general these tests have been useful in distinguishing between genotoxic and non-genotoxic carcinogens.

Food safety authorities in many countries now make a distinction between genotoxic and non-genotoxic carcinogens. While this distinction cannot be applied in all instances due to insufficient information or knowledge on carcinogenesis, the concept can still contribute to the establishment of evaluation strategies for cancer risks posed by exposure to chemicals. In principle, non-genotoxic carcinogens may be regulated using a threshold approach, such as the "NOEL-safety factor" approach. In addition to the demonstration that the substance is not likely to be a genotoxic agent, scientific information is often required on the mechanism of carcinogenicity.

5.3.4 Threshold approaches

A safe level or Acceptable Daily Intake (ADI) is derived from an experimental NOEL or NOAEL by applying appropriate safety factors. The conceptual basis for their use is that thresholds will exist at reasonably comparable doses in both humans and experimental animals. Humans, however, may be more sensitive, are genetically more heterogeneous (outbred) and have more variable dietary habits. As a consequence, a safety factor is applied by JECFA and JMPR to take these uncertainties into account. A safety factor of 100 is typically applied when data from long-term animal studies are available, but other safety factors are used by different health agencies. JECFA also uses a larger safety factor when the data are minimal or when the ADI is assigned on a temporary basis. Other health agencies adjust the ADI for the severity or irreversibility of the effect. These differences in ADI values constitute an important risk management issue which deserves some attention by appropriate international bodies.
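
As a purely illustrative sketch of this calculation, the following example derives an ADI from a hypothetical NOAEL using the conventional 100-fold factor (10 for interspecies extrapolation multiplied by 10 for human variability). The numbers are assumptions chosen for demonstration and do not refer to any actual substance or evaluation.

```python
# Illustrative ADI derivation from a hypothetical NOAEL using the safety-factor approach.

noael = 5.0        # mg/kg bw/day, NOAEL from a long-term animal study (hypothetical)
interspecies = 10  # default factor for extrapolation from animals to humans
intraspecies = 10  # default factor for variability within the human population
additional = 1     # extra factor that might be applied for minimal data or a temporary ADI

adi = noael / (interspecies * intraspecies * additional)
print(f"ADI = {adi:.3f} mg/kg bw/day")  # 0.050 mg/kg bw/day with the usual 100-fold factor
```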

The message communicated with an ADI is that there is no significant risk if the chemical is ingested at or below the ADI. The safety factor, as indicated, is selected to subsume anticipated variations in human responses. It is, of course, theoretically possible that some individuals are even more sensitive than provided for by the safety factor. The safety factor approach, like the quantitative risk approach discussed below, cannot guarantee absolute safety for everyone.

Another approach to ADI development has been to move away from reliance on the NOEL/NOAEL and toward the use of a lower effective dose, such as the ED10 or ED05. This approach, called the benchmark dose approach, draws more heavily on data near the observed dose-response range, but is still subject to the application of safety factors. Thus, while it may allow a more accurate prediction of low-dose risk, the benchmark dose-based ADI may not differ significantly from a NOEL/NOAEL-based ADI. Special population groups, like children, are protected by an appropriate choice of the intraspecies conversion factor and by special consideration of their exposures, if necessary (see 5.4 Exposure assessment).
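
The benchmark dose itself is obtained by fitting a dose-response model to the observed data and solving for the dose corresponding to a specified response level. The following minimal sketch, using hypothetical quantal tumour data and a simple one-stage model, illustrates the idea; in practice dedicated software (e.g. US EPA BMDS or PROAST) would be used, and the lower confidence limit on the benchmark dose (BMDL) is usually preferred to the central estimate shown here.

```python
# Minimal benchmark-dose sketch: fit a one-stage model P(d) = 1 - exp(-(q0 + q1*d))
# to hypothetical quantal tumour data and solve for the dose giving 10% extra risk.
import numpy as np
from scipy.optimize import minimize

doses   = np.array([0.0, 10.0, 30.0, 100.0])  # mg/kg bw/day (hypothetical dose groups)
tumours = np.array([2, 5, 14, 34])            # animals responding in each group
n       = np.array([50, 50, 50, 50])          # animals per group

def neg_log_likelihood(log_params):
    q0, q1 = np.exp(log_params)               # log-parameterization keeps q0, q1 positive
    p = 1.0 - np.exp(-(q0 + q1 * doses))
    p = np.clip(p, 1e-12, 1.0 - 1e-12)
    return -np.sum(tumours * np.log(p) + (n - tumours) * np.log(1.0 - p))

fit = minimize(neg_log_likelihood, x0=np.log([0.05, 0.005]), method="Nelder-Mead")
q0, q1 = np.exp(fit.x)

# Extra risk over background is 1 - exp(-q1*d); set it to 0.10 and solve for d.
bmd10 = -np.log(1.0 - 0.10) / q1
print(f"BMD10 (central estimate) ~ {bmd10:.1f} mg/kg bw/day")
```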

5.3.5 Non-threshold approaches

For genotoxic carcinogens, the "NOEL-safety factor" approach is generally not considered a suitable method for setting acceptable intake levels. The consensus is predicated on the anticipated presence of risk at all doses, even the lowest. At this point, two management approaches are available: (1) to ban the chemical from commercial use, or (2) to establish a level of risk that is sufficiently small to be deemed negligible, insignificant or societally acceptable. The implementation of this latter approach has given rise to quantitative risk assessment for carcinogens.

Various extrapolation models have been utilized for this purpose. Current models use experimental measurements of tumour incidence and dose and virtually no other biological information. None of these models has been validated beyond the experimental range. No correction for high-dose toxicity, enhanced cellular proliferation, or DNA repair is made. For these reasons, the current linear models are considered to give conservative estimates of risk. This is usually expressed by characterizing the risks generated by such models as "plausible upper bounds" or "worst-case estimates". It is acknowledged by many regulatory agencies that actual or probable human risks are not being predicted. Some countries attempt to reduce the conservatism inherent in linear extrapolation by using non-linear models. An essential component of this approach is the determination of an acceptable risk level. In the USA, FDA and EPA have chosen a risk level of one in a million (10⁻⁶). This level was chosen because it was considered to represent an insignificant risk, but the choice of a risk level is ultimately a risk management decision for each country to make.
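
To make the arithmetic concrete, the sketch below computes the dose associated with a chosen acceptable risk level under a linear low-dose model. The slope factor is a hypothetical value, not one taken from any regulatory assessment.

```python
# Illustrative linear low-dose extrapolation to a "virtually safe dose".

slope_factor = 0.02      # upper-bound lifetime risk per (mg/kg bw/day), hypothetical
acceptable_risk = 1e-6   # e.g. one in a million, a risk management choice

# At low doses the linear model gives risk ~ slope_factor * dose, so:
virtually_safe_dose = acceptable_risk / slope_factor
print(f"Dose at {acceptable_risk:.0e} lifetime risk ~ {virtually_safe_dose:.1e} mg/kg bw/day")
# -> 5.0e-05 mg/kg bw/day
```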

For food additives and residues of pesticides and veterinary drugs, a fixed level of risk is practical as the substances can be disallowed if the estimated risk exceeds the regulatory acceptable level. But for contaminants, including discontinued pesticides which have become environmental contaminants, an established acceptable level can easily be exceeded. For example, in the USA, dioxins are estimated to present a worst-case risk of around 10⁻⁴. For ubiquitous carcinogenic contaminants like polycyclic aromatic hydrocarbons and nitrosamines, the 10⁻⁶ risk level is also exceeded.

5.4 Exposure assessment

Estimates of dietary intakes of food additives, residues of pesticides and veterinary drugs, and contaminants require information on the consumption of relevant foods and the concentrations of the chemical of interest in those foods. In general, three approaches are available in exposure assessment: (1) total diet studies; (2) selective studies of individual foods; and (3) duplicate portion studies. Guidelines for the study of dietary intakes of chemical contaminants are available from WHO (GEMS/Food, 1985). In recent years, direct monitoring of human tissues and body fluids has been increasingly used to assess exposure. For example, the determination of levels of organochlorine compounds in breast milk, which are mainly derived from the diet, has provided an integrated assessment of human exposure to these substances (GEMS/Food, in press).
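
Whichever approach is used, the underlying calculation combines the amount of each relevant food consumed with the concentration of the chemical in that food. The following minimal sketch illustrates this for a handful of hypothetical foods; the consumption figures, residue concentrations and 60 kg body weight are illustrative assumptions.

```python
# Minimal dietary intake sketch: sum of (food consumption x residue concentration).

foods = {
    # food: (consumption, kg/person/day; mean concentration, mg/kg)
    "wheat":    (0.25, 0.02),
    "potatoes": (0.15, 0.01),
    "milk":     (0.30, 0.005),
    "fruit":    (0.20, 0.03),
}

intake_mg_per_day = sum(cons * conc for cons, conc in foods.values())
body_weight_kg = 60.0
intake_mg_per_kg_bw = intake_mg_per_day / body_weight_kg

print(f"Estimated intake: {intake_mg_per_day:.4f} mg/day "
      f"({intake_mg_per_kg_bw:.2e} mg/kg bw/day)")
```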

Dietary intake determinations can be relatively straightforward for additives, pesticides and veterinary drugs as the relevant foods and their use levels are specified by their approved conditions of use. However, the actual levels of additives and residues of pesticides and veterinary drugs present in foods are often well below the maximum levels permitted. In regard to residues of pesticides and veterinary drugs, residues on or in food are often totally absent because only a portion of the crop/animal population is usually treated. Data on the levels of food additives in foodstuffs can be obtained from the manufacturers. The dietary intake of contaminants requires information on their distribution in foods that can only be obtained by analyzing representative samples of foods with sufficiently sensitive and reliable analytical methods. Guidelines for establishing or strengthening national food contamination monitoring programmes have been elaborated (GEMS/Food, 1979).

Maximum Residue Limits (MRLs) for pesticides and veterinary drugs and Maximum Levels for additives can be established from their conditions of use. In the simplest case, a food additive used at a specific level would be stable in the food until consumption. The Maximum Level would then equal the intake level. However, in many cases, the amount of the chemical of interest may change prior to consumption. For example, food additives may degrade during storage or react with the food. Pesticide residues in raw agricultural products may degrade/accumulate during further processing. The fate of veterinary drug residues in food products is influenced by metabolism, kinetics, distribution and withdrawal periods required for treated animals.

The establishment of MRLs must take into account any changes in the nature or level of the residue that may occur prior to a commodity entering commerce or that may occur under any anticipated conditions of subsequent use. Contaminants have no intended technological effect in the food, and guideline levels are usually set as low as reasonably achievable.

The theoretical total dietary intake of additives, pesticides and veterinary drugs must be below their corresponding ADIs. Frequently, the actual intake is well below the ADI. Setting guideline levels for contaminants presents special problems. There is usually a paucity of data with which to establish a provisional tolerable intake. On occasion, the levels of the contaminants are higher than an established provisional tolerable intake would permit. In these cases, the guideline levels are set on the basis of economic and/or technical considerations.

Reliable food intake data are essential for exposure assessments based on measuring levels of chemical agents in food. Detailed food consumption data for the average and median consumer as well as for different population groups are important for assessing exposure, particularly by sensitive groups. In addition, comparable food consumption data, particularly with respect to staple foods from different regions of the world are essential for developing an international risk assessment approach to food safety.

GEMS/Food currently maintains a database of five regional diets as well as a composite "global" diet. Daily dietary intakes of nearly 250 individual primary and semi-processed food commodities are available. The African, Asian, East Mediterranean, European and Latin American regional diets are based on selected national data from FAO Food Balance Sheets. Consumption data derived using this approach provide no information on extreme consumers. No information is available in GEMS/Food on the intake of food additives, although intakes in developed countries are anticipated to be greater than in developing countries because of the higher proportion of processed foods in the diet.

5.5 Risk characterization

The outcome of the risk characterization is an estimate of the likelihood of adverse health effects in human populations as a consequence of the exposure. The risk characterization is performed by taking into consideration the results of the hazard identification, hazard characterization, and exposure assessment. For threshold-acting agents, population risk is characterized by comparison of the ADI (or other measures) with exposure. In this case, the likelihood of adverse health effects is notionally zero when exposure is less than the ADI. For non-threshold-acting agents, population risk is the product of exposure and potency.
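
The two comparisons can be illustrated with a minimal sketch; the exposure estimate, ADI and slope factor below are hypothetical values chosen only to show the form of the calculation.

```python
# Minimal sketch of the two comparisons made at the risk characterization step.
# All numbers are hypothetical.

exposure = 0.0002    # mg/kg bw/day, estimated dietary intake
adi = 0.05           # mg/kg bw/day, ADI for a threshold-acting agent
slope_factor = 0.02  # lifetime risk per (mg/kg bw/day), for a non-threshold agent

# Threshold-acting agent: exposure at or below the ADI is taken to pose no appreciable risk.
print("Within ADI" if exposure <= adi else "Exceeds ADI")

# Non-threshold-acting agent: population risk approximated as exposure x potency.
print(f"Estimated upper-bound lifetime risk ~ {exposure * slope_factor:.1e}")  # 4.0e-06
```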

At the risk characterization step, the uncertainties involved in each step of the risk assessment process should be described. Uncertainty in risk characterization will reflect the uncertainties in the preceding steps. The extrapolation of results of animal studies to the human situation may produce two types of uncertainties: (i) uncertainties with respect to the relevance of the experimental findings to humans. For example, forestomach tumours in rats fed butylated hydroxyanisole (BHA) and neurotoxic effects in mice produced by aspartame may not have human parallels; and (ii) uncertainties with respect to specific human sensitivity to effects of a chemical that cannot be studied in experimental animals. In this case, hypersensitivity to glutamate is an example. In practice, these uncertainties are dealt with by expert judgement and by additional studies, preferably in humans. These studies may be performed during the pre-marketing phase as well as during the post-marketing phase.

