

7. From Vision to Reality


The vision for TCO is to contribute to the integrated understanding and human management of the global carbon cycle, through systematic, long-term monitoring of the terrestrial exchanges of greenhouse gases, especially CO2, and the associated changes in carbon stocks. To achieve this vision, a monitoring system is required which synthesises information from several types of measurements: monitoring of atmospheric CO2 and other gases, observations of surface fluxes, ecological in situ measurements, and remote sensing. For brevity, such a system is referred to as ‘TCOs’ below, without pre-judging the form TCO implementation may take. The combined monitoring system will provide estimates of terrestrial CO2 sources and sinks at spatial and temporal scales from global to those relevant to land use policy and land management. These estimates should be provided with greatly reduced uncertainty relative to current practice, by systematic cross-checking of independent approaches and by designed expansions of current measurement networks.

Many of the elements of an integrated terrestrial carbon observing strategy are in place now or under development. The challenges are to ensure that important existing observations continue and key new observations are initiated; to build in appropriate overlaps and leverage among the disparate data sets, thus filling important data gaps; to identify activities and agencies willing to contribute to establishing TCOs; to design and implement linkages among components, activities and contributions; and to carry out the necessary research and modelling work that links observations of various types at different scales.

In this section, implementation issues and the way forward are discussed. First, the progression to a functioning TCOs is briefly sketched, together with a preliminary list of important tasks. Specific issues are then discussed in two areas: a) further development of the dual constraint concept and b) data and information handling needs and capabilities for TCOs.

7.1 Implementation Tasks

TCOs makes measurements and generates products in order to determine and monitor the terrestrial sources and sinks of carbon dioxide and their relationship to land use and land cover. It involves the types of measurement identified above: monitoring of atmospheric CO2 and other gases, observations of surface fluxes, ecological in situ measurements, and remote sensing.

TCOs also depends upon products from the existing observing and analysis systems for weather prediction and hydrology; on national records of land use, forests and agricultural productivity; and on other types and sources of data. In particular, there is a need to ensure availability of forest and vegetation inventory data and other in situ observations of carbon stocks and fluxes that have been obtained for various purposes by national inventory agencies, regional networks, or research programmes. Securing continuing access to these data types, and the appropriate institutional mechanisms for doing so, will represent a significant challenge for TCOs.

In common with other observing systems designed according to the Integrated Global Observing Strategy, TCOs should be:

TCOs should be implemented within the concept of a rolling review. This concept was originally developed for the observing system used for weather prediction, but is more generally applicable to other long-term environmental measurements, as it entails:

a) A periodic cycle of:

b) Supported by ongoing programmes to:

Such a rolling review will facilitate making the best use of current assets (observing facilities, new research findings, technological developments, and plans and programmes under development) to meet near- and mid-term needs.

The participating networks will initially be composed of volunteer organizations with dedicated communications and a coordination/data assembly centre. To assure continuity of the data records over many decades, these need to transition from volunteer status to ongoing commitment as their track records and national funding permit. Wherever possible, network sites should be used in research campaigns to improve understanding of each site's environmental characteristics and its regional-to-global significance.

A potential scenario for TCOs evolution is as follows:

Planning Phase (2000-2001): Towards Implementation

(a) Identify and consult with existing potential components

(b) Consult with potential users of anticipated products to define more precisely their needs and benefits. Review existing statements of requirements in the light of this information.

(c) Review existing globally relevant products

(d) Identify most serious gaps/deficiencies

(e) Design the initial observing system as an integrated whole

(f) Identify and secure the participation of components of the Initial TCOs

(g) Identify developments needed for the next phase

(h) Identify strategic actions needed for sustainability

Pilot Phase (2002-2006): Initial Operations

7.2 Dual constraint methodology research and development

While top-down methods for the estimation of area-averaged fluxes from atmospheric data will always be limited by the spatial density of sampling, it is possible to consider different observation strategies to achieve denser sampling. Particularly useful is a distinction between (i) campaign-style measurements that are part of integrated field programmes intended to develop and evaluate bottom-up scaling methods, and (ii) augmentation of the global atmospheric observing network intended to improve global carbon flux inversions. Early implementation should focus on the development of methods for estimating spatially-integrated fluxes at moderate scales and should include model development and observing system simulations, targeted measurement campaigns, and enhancement of existing observational networks. These experiments can be used to guide the further development and later deployment of observing system components. Such evolution can be considered over three periods: between now and 2005, between 2005 and 2010, and after 2010.

Near-term priorities (2000-2005)

Spatial scaling from point measurements to area-average fluxes is already the focus of a number of initiatives being carried out in several areas of the world (e.g. CarboEurope, the Australian carbon cycle programme, LBA-Ecology; see also Appendix III); these and similar studies should be encouraged and augmented. Intensive field studies addressing spatial heterogeneity and scaling of flux estimates are underway as part of EOS and other satellite programmes. Tower flux measurements are accompanied by ancillary data on carbon pools and fluxes in the surrounding region, using spatial statistics to obtain estimates of representative conditions (section 5.2). Fluxes measured from multiple heights on very tall transmission towers (Bakwin et al., 1998) can also be used to directly evaluate scaling algorithms across heterogeneous landscapes. Such measurements need to be accompanied by careful analyses of the flux footprint at each height under various meteorological conditions (see also Appendix III). The above studies are essential to the development and evaluation of the satellite- and model-based “gridded” global carbon flux estimates.
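
To illustrate the kind of footprint reasoning involved, the sketch below uses a deliberately simple analytical approximation of the cumulative footprint for a neutral surface layer; the tower height and wind parameters are illustrative assumptions, and a real analysis would use a full footprint model under varying stability.

```python
import numpy as np

# Minimal sketch (not the report's method): exponential cumulative
# footprint approximation for a neutral surface layer. All parameter
# values are illustrative assumptions.
K_VON_KARMAN = 0.4

def cumulative_footprint(x, z_m, u_mean, u_star):
    """Fraction of the measured flux originating within upwind distance x (m)."""
    return np.exp(-u_mean * z_m / (K_VON_KARMAN * u_star * x))

# Example: how far upwind must we look to capture ~80% of the flux
# measured at 30 m under moderate wind?
x = np.linspace(10.0, 5000.0, 500)
cnf = cumulative_footprint(x, z_m=30.0, u_mean=4.0, u_star=0.4)
x80 = x[np.argmax(cnf >= 0.8)]
print(f"~80% of the flux originates within {x80:.0f} m upwind")
```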

Flux towers have now been deployed across a large range of climatic and ecosystem conditions, though there are still some conspicuously undersampled parts of climate space (section 6.1). These data are crucial for the development and evaluation of models of the “fast” ecosystem fluxes, but carbon sources and sinks on time scales of ≥1 year are likely driven more by slow processes such as changes in nutrient loading, disturbance/recovery dynamics, and land-use history. Since the number of eddy covariance studies is likely to continue to grow over the next five years, a rational design for the flux network should also include sampling across gradients in these “slow” processes controlling the carbon balance of terrestrial ecosystems (section 6.1).

Scaling studies using bottom-up methods can be directly evaluated, by several approaches, using atmospheric trace gas measurements collected in “campaign” mode (as opposed to ongoing sampling). Such campaigns (e.g. Gerbig et al., Appendix III) are expensive and so can only be mounted selectively, but can be very valuable if paired with other ground-based data collected by integrated field programmes:

Regional aircraft campaigns may not be cost-effective by themselves in terms of added information about regional fluxes and processes. However, they can add powerful constraints to existing experiments that include tower flux measurements, characterization of landscape-scale variability in carbon fluxes and pools in vegetation and soils, imagery collected at multiple spatial scales, and models. Such nested experiments can become incubators for credible methodologies to be applied at high resolution at larger scales. As methods are developed for quantitative estimation of area-averaged fluxes from atmospheric data, they can be tested against archived data from major field experiments such as FIFE, BOREAS, HAPEX, and EuroSiberia. The retrospective analyses can be accompanied by “pseudodata” experiments in which simulation models are used to construct realistic tracer fields consistent with known surface flux patterns. Sampling strategies can then be quantitatively evaluated and errors can be analysed.
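
A minimal sketch of such a pseudodata experiment follows. The linear operator H standing in for atmospheric transport, the network size, and the error levels are all invented for illustration; the point is only the workflow of generating synthetic observations from known fluxes, inverting them, and scoring the recovery.

```python
import numpy as np

# Pseudodata experiment sketch: H maps regional fluxes to concentrations
# at candidate sampling sites (assumed sensitivities, ppm per flux unit).
rng = np.random.default_rng(0)
n_regions, n_sites = 8, 20
H = rng.uniform(0.0, 1.0, size=(n_sites, n_regions))
true_flux = rng.normal(0.0, 1.0, size=n_regions)   # "known" surface fluxes

sigma_obs = 0.2                                    # ppm, assumed measurement error
y = H @ true_flux + rng.normal(0.0, sigma_obs, n_sites)

# Least-squares inversion for the fluxes, then score the sampling strategy
est_flux, *_ = np.linalg.lstsq(H, y, rcond=None)
rmse = np.sqrt(np.mean((est_flux - true_flux) ** 2))
print(f"flux recovery RMSE: {rmse:.3f} (flux units)")
```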

Global-to-regional downscaling

The current atmospheric observation (flask sampling) network used in global inversion studies is insufficient to resolve regional fluxes at scales smaller than a continent or an ocean basin. Sampling sites are nearly all located in the remote marine boundary layer to obtain representative “background” data with a minimum of local influences. Sampling over continental locations would provide a more powerful constraint on model inversions, but the interpretation of continental data is fraught with problems involving strong spatial and temporal heterogeneity. Samples collected from airborne platforms during convective conditions in the mixed layer or in the free troposphere would alleviate many of these problems. The goal should be to sample air under conditions that can reasonably be represented in a global transport model used for the inversions. This precludes the terrestrial surface layer in most areas, necessitating sampling from very tall towers, balloons, or aircraft.

Before an airborne sampling network to support better inverse modelling can be deployed, thorough network optimization analyses will need to be carried out. Such observing system simulation studies have been carried out already with respect to point sampling at the surface (Rayner et al., 1996) and with limited extension to free tropospheric sampling (Gloor et al., 2000). Further analyses will include existing atmospheric data such as those collected by regional aircraft sampling studies in Europe, Australia, Siberia, and Brazil and from tall transmission towers in the USA and Europe (see also Gerbig et al., Appendix III). Network optimization studies must include careful analysis of the uncertainty of the measurements themselves. They should quantify the trade-offs between uncertainty in the estimated fluxes and network density, measurement error, and cost. Multiple tracers should be measured (e.g. CO2, CO, CH4, δ13C, δ18O, and O2/N2) to improve the accuracy of the inversion process. Observing system simulations will need to evaluate the errors in the estimated fluxes given errors in each of the tracers. The optimization analyses should help answer questions such as: what to measure, to what precision, where to measure, how high to fly, how often to fly, and how the uncertainty depends on errors in the model transport. These simulations will be a major goal of the integrated observing strategy in the next five years.
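
The sketch below illustrates the uncertainty bookkeeping underlying such optimization studies, assuming a linear Bayesian (synthesis) inversion: the posterior flux covariance depends on the network and the error statistics but not on the observed values themselves, so candidate networks can be ranked before any new measurements are made. All dimensions and numbers are illustrative.

```python
import numpy as np

# Rank candidate networks by mean posterior flux uncertainty,
# P = (H' R^-1 H + B^-1)^-1, for prior B and observation error R.
rng = np.random.default_rng(1)
n_regions = 8
B = np.eye(n_regions) * 1.0**2                 # prior flux variance (illustrative)

def mean_posterior_sd(n_sites, sigma_obs):
    H = rng.uniform(0.0, 1.0, size=(n_sites, n_regions))   # assumed sensitivities
    R_inv = np.eye(n_sites) / sigma_obs**2
    P = np.linalg.inv(H.T @ R_inv @ H + np.linalg.inv(B))  # posterior covariance
    return np.sqrt(np.diag(P)).mean()

for n_sites in (10, 20, 40):                   # trade-off: density vs uncertainty
    print(n_sites, "sites -> mean posterior sd:",
          round(mean_posterior_sd(n_sites, 0.2), 3))
```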

A modest enhancement of the existing tropospheric observing network over the next several years would pay off substantially in terms of more robust inversions of regional fluxes at higher spatial resolution than is possible given the current data. Current transport models agree on tracer distributions in the remote marine boundary layer, but diverge strongly over the continents and aloft (Law et al., 1996; Denning et al., 1999). A handful of vertical profiles over the continents could remedy this situation at moderate cost. Inexpensive rental aircraft (at perhaps US$100/hour) could sample as many as 20 shallow vertical profiles distributed around the continents weekly for about US$100K per year in direct aircraft costs (equipment, analytical, and personnel costs would also be incurred). The atmospheric constraint on tropical fluxes is particularly weak because of rapid convective and meridional mixing with respect to the current stations. Such an augmentation of the existing network, especially combined with an intensive campaign-style sampling in some areas as mentioned above, would certainly lead to significantly reduced uncertainty and improved spatial resolution in flux estimates from global inverse modelling.
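
As a quick check, the quoted figures are mutually consistent if each shallow profile takes roughly one flight hour:

```python
# Arithmetic behind the cost estimate above (one flight hour per
# profile is an assumption implied by the quoted figures).
profiles_per_week = 20
weeks_per_year = 52
hours_per_profile = 1.0    # assumption
cost_per_hour = 100.0      # US$
annual_cost = profiles_per_week * weeks_per_year * hours_per_profile * cost_per_hour
print(f"US${annual_cost:,.0f} per year in direct aircraft costs")  # ~US$104,000
```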

Direct measurement at eddy flux towers is another potential source of significant new information about atmospheric CO2 concentrations over the continents that could be obtained at modest cost in the near future. These sites already measure CO2 (typically every 0.1 seconds) for flux determinations. Since only the variations of the concentration through time are needed, the current measurements are not calibrated with standards traceable to the WMO (and thereby to the rest of the flask network), nor is the typical instrumentation installed at the towers adequate for highly precise CO2 determinations. At an upgrade cost of less than US$50K per tower and about US$10K per year per tower in operating expenses, existing flux towers could be instrumented for CO2 determination with accuracy comparable to the global flask measurements. For use in global inverse calculations, regionally representative values would have to be extrapolated from these data, treating ordinary flux towers as “virtual tall towers.” Surface layer similarity theory can be used to extrapolate mid-CBL concentrations to high precision from accurate surface values, given simultaneous measurements of heat and momentum flux as are commonly made at flux towers. Alternatively, background concentrations can be estimated from surface time series after correction for local effects using other tracers and momentum flux (Potosnak et al., 1999). These measurements would likely have a major impact on global CO2 inversions, but are beyond the current scope of work proposed or funded at most flux towers. Access to these observations will therefore require institutional support beyond that already committed to funding the flux measurements themselves.
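
A highly simplified version of this extrapolation, for a neutral surface layer and omitting the stability corrections and storage effects a real analysis must include, is sketched below; all numerical values are illustrative assumptions.

```python
import numpy as np

# "Virtual tall tower" sketch: the flux-gradient relation of surface-
# layer similarity theory links the CO2 difference between two heights
# to the surface flux and friction velocity (neutral conditions only).
K_VON_KARMAN = 0.4

def co2_at_height(c_z1_ppm, z1, z2, fc, u_star, rho_air=41.6):
    """Extrapolate CO2 (ppm) from height z1 to height z2 (m).

    fc: surface CO2 flux, umol m-2 s-1 (negative = uptake)
    rho_air: molar density of air, mol m-3 (~41.6 near the surface)
    """
    # dC = -fc / (k u* rho) * ln(z2/z1), converted mol/mol -> ppm
    d_ppm = -(fc * 1e-6) / (K_VON_KARMAN * u_star * rho_air) * np.log(z2 / z1) * 1e6
    return c_z1_ppm + d_ppm

# Daytime uptake of 15 umol m-2 s-1 measured at 30 m, extrapolated to 400 m
print(round(co2_at_height(365.0, 30.0, 400.0, fc=-15.0, u_star=0.4), 2))
```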

Two orbital sensors (NASA AIRS and ESA IASI) will be flying by 2002 and will allow direct retrieval of atmospheric CO2 from space. The accuracy of these estimates is unknown at this time, but even rough measurements of CO2 at thousands of locations every day are potentially very powerful for estimating fluxes by atmospheric inversion. The most important question involves the weighting function of these measurements. The majority of the information about surface fluxes is contained in the spatial variability of CO2 in the lowest couple of kilometres of the atmosphere, where boundary-layer turbulence mixes the signal. Above 500 mb, the spatial gradients are so weak as to be beyond detection with an instrument that can only resolve concentration to 2 ppm. Thus, both accuracy and the ability to penetrate to the CBL will determine the utility of spaceborne CO2 sensors. Observing system simulations will be essential to make the most of these new data.
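
The toy calculation below illustrates the dilution problem: a boundary-layer CO2 anomaly of a few ppm nearly vanishes when averaged with a weighting function spread over the whole column, and disappears entirely for weighting confined to the upper troposphere. The profile and kernels are invented for illustration.

```python
import numpy as np

n_levels = 40
z = np.linspace(0.0, 12.0, n_levels)        # height, km (illustrative)
signal = np.where(z < 2.0, 4.0, 0.0)        # assumed 4 ppm anomaly below 2 km

w_column = np.ones(n_levels) / n_levels     # flat column-average kernel
w_upper = np.where(z > 5.0, 1.0, 0.0)
w_upper /= w_upper.sum()                    # kernel confined to upper levels

for name, w in (("column average", w_column), ("upper troposphere", w_upper)):
    print(f"{name}: retrieved anomaly {w @ signal:.2f} ppm (vs ~2 ppm noise)")
```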

A final priority in the near term will be to invest in the development of new technologies for inexpensive and accurate determination of CO2 concentrations and fluxes over continental regions. Promising technologies for which moderate investment might yield major improvements in the next five years include:

Mid-term priorities (2006-2010)

Following a thorough analysis of observing system simulations and network optimization, a sampling network for tropospheric CO2 above the continents should be deployed operationally. Tans et al. (1996) envisioned such a network over North America, but global inversions would benefit from global observations. Of course, higher data density in regions heavily sampled by upscaling campaigns will enhance the performance of the network. Sampling should include multiple tracers for additional constraint on the locations and mechanisms responsible for the terrestrial sources and sinks. Platforms might include tethered balloons, light aircraft, commercial aircraft, and virtual tall towers. Further development of new technologies such as solid-state and lidar remote sensing of CO2 and other tracers is also expected to lead to a major expansion in the sampling of the atmosphere in the mid-term.

With increasing data density, the limiting factor for the accuracy and resolution of fluxes obtained by inversion of concentrations will become the transport models used. Model development will be required to correctly account for realistic transport of trace gases globally and at high spatial resolution. Many of the current generation of models used for inversions have unrealistic wind fields, unresolved subgrid-scale transport by convection, or both (http://dendrus.atmos.colostate.edu/transcom/). High resolution inverse models will require consistent data on trace gas concentrations, winds, and the convective mass flux. This can be accomplished by identifying the inversion of CO2 fluxes as an important objective of the global weather assimilation system. Operational centres have the most accurate description of the four-dimensional variations of transport available, yet much of the subgrid-scale mass flux is not archived. The future higher density of trace gas concentration data will require archival of the full four-dimensional transport by operational assimilation and forecasting centres, direct assimilation of trace gas concentrations in the operational models, or both. It is unlikely that the full range of data could be assimilated in real time, because of the delay required to analyse some species in the laboratory, so there will probably always be a role for archival analyses in CO2 inversions. Because the data volume becomes much larger once the subgrid-scale parameterised transports are included, provisions will have to be made for these to be saved as well.

As data and models improve, a meaningful “dual constraint” will be formalized for consistent, global-scale top-down and bottom-up estimates of the spatial and temporal variability of terrestrial carbon sources and sinks. This constraint will include quantifying the uncertainties associated with the satellite- and model-based estimates, and will lead in this period to direct evaluation of EOS and other carbon products at regional scales in some parts of the world. Such an effort will require the involvement of the operational meteorological infrastructure (WMO and its members) and will be a step toward true Earth system data assimilation.
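
One elementary form such a formalized dual constraint could take is inverse-variance combination of the two estimates, with a consistency test against their joint uncertainty, as sketched below; the numbers are illustrative, not TCO results.

```python
# Combine a top-down (inversion) and a bottom-up (upscaled) estimate of
# a regional flux by inverse-variance weighting; flag disagreement
# larger than twice the joint uncertainty. Values are illustrative.

def dual_constraint(est_td, sd_td, est_bu, sd_bu):
    w_td, w_bu = 1.0 / sd_td**2, 1.0 / sd_bu**2
    est = (w_td * est_td + w_bu * est_bu) / (w_td + w_bu)
    sd = (w_td + w_bu) ** -0.5
    consistent = abs(est_td - est_bu) < 2.0 * (sd_td**2 + sd_bu**2) ** 0.5
    return est, sd, consistent

# e.g. top-down -0.6 +/- 0.3 GtC/yr; bottom-up -0.3 +/- 0.2 GtC/yr
print(dual_constraint(-0.6, 0.3, -0.3, 0.2))
```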

Long-term priorities (after 2010)

By 2010, technological developments will likely lead to lightweight and inexpensive CO2 sensors that can be flown operationally from radiosondes, producing concentration profiles at hundreds of locations several times daily along with winds, temperature, and humidity. These data, along with in situ measurements, remote sensing of trace gases, and measurements from aircraft, will be assimilated using 4-dimensional variational data analysis or other operational methods to produce high-resolution gridded fields of trace gases consistent with other meteorological information. Such operational methods will integrate the top-down and bottom-up approaches in a coherent analysis and prediction scheme, constrained by the satellite and in situ observations.
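
The core of such variational analysis is the minimization of a cost function combining a background term and an observation misfit term. The toy sketch below uses invented dimensions and error statistics, and a generic optimizer in place of the adjoint machinery an operational system would employ.

```python
import numpy as np
from scipy.optimize import minimize

# Toy variational analysis:
#   J(x) = 0.5 (x-xb)' B^-1 (x-xb) + 0.5 (Hx-y)' R^-1 (Hx-y)
rng = np.random.default_rng(2)
n, m = 10, 6
xb = np.zeros(n)                       # background (prior) state
B_inv = np.eye(n)                      # background error precision (assumed)
H = rng.normal(size=(m, n))            # linearised observation operator (assumed)
R_inv = np.eye(m) / 0.2**2             # observation error precision (assumed)
y = H @ rng.normal(size=n) + rng.normal(0.0, 0.2, m)   # synthetic observations

def J(x):
    db, do = x - xb, H @ x - y
    return 0.5 * db @ B_inv @ db + 0.5 * do @ R_inv @ do

xa = minimize(J, xb).x                 # analysis state
print("analysis-observation misfit:", np.round(H @ xa - y, 2))
```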

By this time, it is possible to envision the development of a dedicated spaceborne instrument capable of measuring CO2 in the lower troposphere to an accuracy approaching 1 ppmv. This would be extremely valuable, as it would allow the use of high-frequency temporal variations as well as spatial patterns in inverse calculations. Preliminary studies using idealised scenarios suggest that such measurements would allow regional fluxes to be estimated to within 0.1 GtC/yr.

Given high-resolution atmospheric data, improved transport models, and the continued development of eddy covariance flux measurements and other in situ data, we will be able to produce near-real-time descriptions of the terrestrial components of the global carbon cycle that are simultaneously consistent with all of these constraints.

7.3 Data and Information System Considerations

Achieving the TCO goals described above will require a data and information system (DIS) that both builds on internationally distributed data resources and develops the capacity to generate new products. Given the limited resources initially available, such a DIS will depend on existing components. Its design and implementation will evolve along with TCO plans; however, some of the important components and potential approaches are described below:

TCO Web Page

As with most international projects, a web page will be the cornerstone for communicating information within TCO and to outside interests. TCOs can take advantage of existing systems to provide the capability to catalogue existing datasets and the links needed to retrieve the data. Two such systems relevant to TCO already exist: the Terrestrial Ecosystem Monitoring Sites (TEMS) metadatabase (http://www.fao.org/gtos/PAGES/TEMS.HTM) and Mercury (http://mercury.ornl.gov/).

TEMS is an international directory of metadata about monitoring stations and their activities; it is not a compilation of raw data. The objective of the database is to document existing long-term monitoring sites which may be suitable for inclusion in the GTOS network once this is established. In addition, the database will provide information on who is doing what and where in ecosystem monitoring.

Another example is Mercury, a web-based system that allows users to search distributed metadata files to identify data sets of interest and to retrieve those data sets directly. Mercury is designed to support the data and information needs of projects where the critical aspects are: 1) quick exchange of data between researchers; 2) complete control of data visibility in the system maintained by researchers; 3) rapid and economic deployment; and 4) high automation and easy scalability. Data providers need not run any database software locally, and their data can reside in any convenient format. At selected intervals, Mercury automatically builds a metadata index (used to provide the search capabilities) at the central data facility. Mercury is now operational at the Oak Ridge National Laboratory DAAC, where it harvests environmental data from over 1,000 data sources in twelve countries. It is also being used by IGBP-DIS.
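
The harvesting pattern Mercury exemplifies can be sketched as follows; the file layout and field names here are hypothetical, not Mercury's actual interfaces.

```python
import json
import pathlib

# Central indexing sketch: providers expose metadata files wherever is
# convenient; a central facility periodically collects them into one
# searchable index. Layout and fields are invented for illustration.

def build_index(provider_dirs):
    index = []
    for d in provider_dirs:
        for f in pathlib.Path(d).glob("*.json"):
            record = json.loads(f.read_text())
            record["source"] = str(f)      # remember where the data live
            index.append(record)
    return index

def search(index, keyword):
    kw = keyword.lower()
    return [r for r in index if kw in json.dumps(r).lower()]

# index = build_index(["provider_a/metadata", "provider_b/metadata"])
# print(search(index, "soil carbon"))
```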

Database

To perform the integration and synthesis of carbon data, there is a need for one or more facilities that compile selected data from a variety of sources in a multitude of formats. This task will deal with issues of heterogeneity of spatial and temporal scales, different units of measure, data documentation, and general data consistency. Such problems were addressed in the development of FLUXNET (http://www-eosdis.ornl.gov/FLUXNET/) and CLIMDB.

FLUXNET provides researchers access to consistent and integrated measurements of carbon dioxide, water vapour, and energy fluxes and associated site vegetation, edaphic, hydrologic, and meteorological characteristics. Fluxes and ancillary information are unified into consistent, quality-assured, documented, readily accessible datasets via the World Wide Web. FLUXNET is a “partnership of partnerships”, formed by linking existing sites and networks. Measurements and terminology from existing but disparate sites and networks are brought together into a common framework and harmonized, thereby substantially increasing the usage and value of the flux data and information for the global change research community.
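
The kind of harmonization involved can be illustrated with a small sketch that maps site files with differing variable names and units onto a common vocabulary; the names, units, and conversion shown are illustrative assumptions rather than FLUXNET's actual conventions.

```python
import pandas as pd

# Map disparate site variable names onto a common vocabulary and
# convert CO2 flux units to umol m-2 s-1 (names/units are invented).
RENAME = {"Fc": "co2_flux", "NEE": "co2_flux", "LE": "latent_heat_flux"}
TO_UMOL = {"mg m-2 s-1": 1e3 / 44.01,    # mg CO2 -> umol CO2
           "umol m-2 s-1": 1.0}

def harmonize(df, co2_unit):
    out = df.rename(columns=RENAME)
    out["co2_flux"] = out["co2_flux"] * TO_UMOL[co2_unit]
    return out

site = pd.DataFrame({"Fc": [-0.55, -0.31]})   # mg CO2 m-2 s-1
print(harmonize(site, "mg m-2 s-1"))          # -> ~-12.5, -7.0 umol m-2 s-1
```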

CLIMDB links climate data from individual LTER Network site information systems into a centralised system. In the LTER Network, individual sites routinely collect daily climate data and maintain them in local computer systems using a variety of formats. Each site provides access to standardised daily climate files via an Internet address that points to the location of static files or dynamic scripts. A central site automatically harvests these daily climate data into a centralised database, and application programmes produce monthly distribution reports in two formats from the daily climate database.
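
A minimal sketch of this harvest-and-summarise pattern is given below; the URL and column names are invented, not CLIMDB's actual interface.

```python
import csv
import io
import urllib.request
from collections import defaultdict

# Each site exposes a standardized daily-climate file at a known URL;
# a central system harvests them and derives monthly summaries.
SITE_URLS = {"site_a": "http://example.org/site_a/daily.csv"}  # hypothetical

def harvest_monthly_means(url):
    sums, counts = defaultdict(float), defaultdict(int)
    with urllib.request.urlopen(url) as resp:
        for row in csv.DictReader(io.TextIOWrapper(resp, "utf-8")):
            month = row["date"][:7]             # expects YYYY-MM-DD dates
            sums[month] += float(row["tmean"])  # daily mean temperature
            counts[month] += 1
    return {m: sums[m] / counts[m] for m in sums}

# for site, url in SITE_URLS.items():
#     print(site, harvest_monthly_means(url))
```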

Analysis and Modelling Centres

Many of the analysis and modelling tasks associated with TCOs will be performed by various groups and brought together as needed to provide synthesis products. One of the challenges will be to coordinate such a network of activities. An information system model for multi-site projects was developed by LTER (Olson et al., 1999). The core component of the DIS was a set of cross-site working groups (e.g. for NPP, soils, remote sensing, atmospheric sciences, etc.). Each working group would coordinate the development of data and models associated with its particular scientific theme. The overall data and information system consists of nodes for each scientific working group plus a central node. The system is envisioned as Web-based and accessible through one or more of the popular browsers using an html-type interface to the data and information. The group leader for each scientific-domain working group provides scientific and technical leadership and would play a critical role in the development of datasets and data products for the working group. In addition, one or more technically-oriented partners for each leader will be required. The project-level data activities would be performed by a project data staff composed of the technical partners for the groups and a leader for the project data and information system. The DIS then provides access to the complete, combined, consistent data at each node (some nodes may be located physically together). There may be links to other data archives that would provide access to related projects or data. Access may be limited to data originators during the active phases of data compilation and analysis; however, as datasets mature, they become publicly accessible through the project’s DIS. Finally, they will be moved to a long-term archive and distribution centre.

