December 1996 FSL Forum
The demonstration of LAPS within the Olympic Weather Support System was important. For the first time, a mesoscale forecast model initialized with comparably high-resolution analyses was implemented in an operational environment using technology representative of that planned for NWS forecast offices in the next several years. Here we discuss the major areas of LAPS operations at the Olympics: computer hardware selection; operational model design, including model setup, implementation of FSL's Scalable Modeling System-Nearest Neighbor Tool (SMS-NNT) on the forecast model (the Regional Atmospheric Modeling System, RAMS), parallel decomposition, and performance; visualization; local office implementation; model verification; and the creation and use of interpolated observations. We conclude with LAPS benefits to the local forecast office and a summary of the Olympics experience.
Test results using 81 nodes on an Intel Paragon distributed-memory, massively parallel processor (MPP) demonstrated that vast improvements in compute time could be attained at a reasonable cost. Earlier this year, IBM, as an official sponsor of the Olympic Games, agreed to provide a 30-node RS6000 Scalable Power-parallel (SP2) system as the operational compute engine for the local-domain mesoscale forecast model. The IBM SP2 was installed in the NWS forecast office at Peachtree City, Georgia, last April.
RAMS model initialization, provided by LAPS, generated surface analyses every 15 minutes and three-dimensional analyses every 30 minutes. Forecast lateral boundary conditions were provided by the National Centers for Environmental Prediction (NCEP) 29-km, national-domain Eta model predictions. It is important to recognize here that the RAMS model physics were selected to complement the grid resolution. Therefore, a nonhydrostatic version of the model was employed with a full implementation of liquid- and ice-phase microphysics that provided an explicit prediction of precipitation, and no cumulus parameterization scheme was implemented.
Parallel Decomposition - A version of RAMS has been developed that is instrumented with the SMS-NNT library of utilities developed by the FSL High Performance Computing Group. The NNT software library was designed to minimize the code changes that must be made when parallelizing an existing geophysical model. NNT has been ported to MPP hardware, conventional vector supercomputers, shared-memory multiprocessors, and workstations. The model domain is decomposed in the horizontal with an area of redundant calculation (boundary region) of 4 points in each decomposed direction. The model is nonhydrostatic with a split timestep scheme for the application of the pressure tendency term to the momentum equations. For a single grid there are 26 three-dimensional arrays communicated in each timestep. The actual number of bytes communicated depends on the number of nodes in each decomposed direction, and for the Olympic configuration, is given by
where dx and dy are the number of nodes of decomposition in the corresponding horizontal direction. [A previous article on NNT was published in the May 1995 issue of the FSL Forum.]
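The communication volume implied by such a decomposition can be sketched as follows. This is not the article's formula (which is not reproduced above); the grid dimensions, 4-byte reals, and function names are illustrative assumptions.

```python
# Hypothetical sketch of per-timestep halo-exchange volume for a 2-D
# horizontal decomposition with a redundant boundary region; the 4-byte
# real size and all names are assumptions, not the original expression.
def halo_bytes(nx, ny, nz, dx, dy, halo=4, nfields=26, real_size=4):
    """Approximate bytes exchanged per timestep across all subdomains.

    nx, ny, nz -- full-domain grid dimensions
    dx, dy     -- number of nodes in each decomposed horizontal direction
    halo       -- width of the redundant boundary region (points)
    """
    # Interior cuts: (dx - 1) in x (each of length ny) and (dy - 1) in y
    # (each of length nx); each cut is exchanged in both directions as a
    # strip 'halo' points deep through the full vertical depth nz.
    points = 2 * halo * nz * ((dx - 1) * ny + (dy - 1) * nx)
    return points * nfields * real_size
```

With no decomposition (dx = dy = 1) no interior boundaries exist and the exchanged volume is zero; the volume grows with the number of interior cuts, which is why the byte count depends on dx and dy.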
Parallel Model Performance - Performance of the parallel model is measured using an actual forecast initialized at 0600 UTC (0200 EDT) 4 August 1996 and computed out to 9 hours. Most performance tests of previous parallel atmospheric models have used coarser-resolution models and much different physical parameterizations. In particular, the authors are aware of no other parallel models in which performance tests have been conducted using an actual forecast and explicit microphysics. This is significant, since both the single-node performance and the parallel speedup will be affected by the localized nature of convection and the iterative method used to model it.
To illustrate the importance of this process on model performance, we also tested without the microphysics, which reduces the number of three-dimensional exchanged fields from 24 to 14 and simply treats the water vapor field as an advective tracer. Figure 1 shows the speedup for up to 28 nodes relative to a single node, both with and without microphysics, averaged over the entire 9-hour forecast.
Figure 1. Speedup on the IBM SP2. "Diamond" = microphysics off, + = microphysics on.
Finally, the effect of microphysics on speedup is illustrated in Figure 3, which shows the speedup from one node to 28 nodes averaged every ten timesteps over the run.
Figure 2. Single-node performance on the IBM SP2. "Diamond" = microphysics off, + = microphysics on.
Figure 3. Speedup to 28 nodes as a function of forecast time. "Diamond" = microphysics off, + = microphysics on.
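The speedup and efficiency statistics plotted in Figures 1-3 reduce to simple ratios of wall-clock times. A minimal sketch (illustrative function names; no measured Olympic timings are used):

```python
# Speedup and parallel efficiency from wall-clock timings.
# These are the standard definitions, not code from the Olympic system.
def speedup(t_single, t_parallel):
    """Speedup of a parallel run relative to the single-node run."""
    return t_single / t_parallel

def efficiency(t_single, t_parallel, nodes):
    """Parallel efficiency: speedup divided by the node count (1.0 = ideal)."""
    return speedup(t_single, t_parallel) / nodes
```

Averaging these ratios over groups of timesteps, as done every ten timesteps for Figure 3, exposes how the localized, bursty cost of the microphysics degrades load balance and hence speedup during active convection.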
Three-dimensional visualization of the RAMS predictions has been successfully demonstrated at FSL as another method to rapidly analyze the model output. RAMS forecasts were stored every 10 minutes in a format readable by the IBM three-dimensional Data Explorer visualization system. Three-dimensional time animations of model output were available to the forecasters and on the Olympic World Wide Web homepage (an example is shown in Figure 4).
Local Office Implementation - Since the complete LAPS system is intended to function wholly in the local forecast office, an additional design requirement is that the system be as automated as possible. If the system is to be deployed at numerous forecast offices, the human resources needed to run the system must be kept to a minimum. Representatives from FSL and IBM were present during the operational phase of the Olympic weather support to troubleshoot any problems with the LAPS-RAMS system. This human presence proved beneficial as the last few problems were resolved during the first several days of the Games. After this time, the LAPS-RAMS system required very little human interaction outside of the designed local control. As further testament to the minimal amount of required human attention, the LAPS-RAMS system continued to operate for three weeks following the Olympic Games in support of the Paralympic Games, during which no FSL or IBM representatives were present and the system had no software failures.
Model Verification - Model validation was conducted through two approaches: quantitative and qualitative. Quantitative model validation was performed automatically on a variety of surface variables including temperature, dew point, and wind. Surface observations were available from the standard NWS observation network and from a special automated network assembled specifically for Olympic Games support. Approximately 70 surface observations were typically available for comparison with model output that was interpolated to each surface observation location. Because differences exist between the lowest RAMS model level height (48 m AGL) and the surface observation heights, several adjustments were made to the model output. Louis similarity theory (references available) was used to adjust model temperatures and wind speeds to the surface temperature observation level of 1.5 m and the surface wind observation level of 10 m. An additional adjustment was made to the model temperature using a standard lapse rate of -6.5 K km-1 to account for any difference between the model terrain height and the surface observation elevation at the observation location. No adjustments were made to the model moisture variable. Bias and RMS statistics were computed using every location where surface data were available. Spatial and temporal quality control of the observations was completed by the LAPS operational system.
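The elevation correction and the bias/RMS statistics described above can be sketched as follows. Function and variable names are assumptions, and the similarity-theory adjustment is omitted for brevity; only the standard-lapse-rate correction and the difference statistics are shown.

```python
import math

LAPSE_RATE = -6.5e-3  # standard lapse rate, K per meter (-6.5 K/km)

def adjust_temperature(t_model, z_model, z_station):
    """Correct a model temperature for the difference between model
    terrain height and station elevation using the standard lapse rate."""
    return t_model + LAPSE_RATE * (z_station - z_model)

def bias_and_rmse(forecasts, observations):
    """Mean (forecast - observation) bias and RMS difference over all
    stations where surface data were available."""
    diffs = [f - o for f, o in zip(forecasts, observations)]
    n = len(diffs)
    bias = sum(diffs) / n
    rmse = math.sqrt(sum(d * d for d in diffs) / n)
    return bias, rmse
```

For example, a station 1000 m above the model terrain receives a 6.5 K cooling before comparison, so that elevation mismatch does not masquerade as model error.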
Figure 4. Three-dimensional view of RAMS 15-hour forecast of surface winds (arrows), total precipitation, clouds (gray shading), and reflectivity (blue shading) valid at 2100 LT 4 August 1996. Figure prepared using IBM Data Explorer software.
Courtesy Lloyd Treinish, IBM.
Figure 5. Hourly bias and RMS differences of a) temperature (K) and b) dew point (K), and hourly RMS differences of c) wind vector difference (m s-1) averaged for a period 2 July-24 August 1996. Differences are computed by subtracting surface observations from model forecasts (RAMS-solid line; 29-km Eta-long dashed, 10-km Eta-short dashed). Model initialization time was 0600 UTC for RAMS and 0300 UTC for Eta. Plotted Eta results are displaced by three hours so that the comparisons are displayed at common forecast valid times (UTC).
Quantitative RAMS model validation results are presented in Figure 5 with a comparison to statistics computed from the 10- and 29-km Eta model forecasts provided by NCEP. Similar adjustments to account for differences in model and actual observation heights were completed by NCEP prior to the arrival of the forecasts at Peachtree City. The results are an average of all Eta forecasts initialized at 0300 UTC and all RAMS forecasts initialized at 0600 UTC from 2 July through 24 August 1996. The plotted Eta results are displaced by three hours so that the comparisons are displayed at common forecast valid times.
The bias and RMS results indicate a significant improvement with RAMS compared to both 10- and 29-km Eta through 1200 UTC for temperature, dew point, and wind. Significant improvements continue to be noted through 1500 UTC for dew point and RMS improvements of 0.4-1.0 m s-1 are evident through the entire forecast period for wind. Several experimental forecasts were conducted after the Games in an attempt to explain the cool temperature bias after 1200 UTC (0800 LT). These simulations suggested that the radiation scheme used by RAMS did not properly mix out the boundary layer in the late morning. The overall improvements are likely the result of two features: the improved initialization of RAMS by LAPS that incorporates local data sources which led to improved very short-range (0-6 hour) predictions, and more sophisticated model physics such as the microphysics and soil model.
The qualitative model evaluation applies the meteorologists' analysis skills and experience with mesoscale weather phenomena to subjectively compare model predictions with physical observations and other visual accounts of the weather. Although not as rigorous as the quantitative approach, the qualitative evaluation is useful for subjective comparisons with alternative data sources and other model output. For this article, the qualitative evaluation is the best method for investigating the model's mesoscale precipitation predictions. This important forecast is difficult to evaluate quantitatively due to a lack of observations of sufficiently high temporal and spatial resolution, but the operational forecasters can provide subjective insight into the model's performance through comparisons with radar, satellite, and human observations.
Personal communication with the operational forecasters indicated that, in general, the location and timing of the RAMS precipitation forecasts were quite good. However, there was a very large overprediction of precipitation amount associated with convection. This is likely the result of the 8-km grid spacing being insufficient to fully resolve the air mass thunderstorms typical during the Georgia summer. The capability of RAMS to even represent mesoscale convection was a notable improvement over the other available forecast models. Added value was also recognized from the ability to restart the model every three hours. Two benefits were evident from this strategy. First, the early morning initialized predictions were often incapable of predicting the subtle mesoscale surface forcings that are important to the prediction of afternoon convection in the tropical environment. But, once these features started to be detected in the later morning LAPS analyses, the RAMS forecasts were able to "latch onto" these features and generate reliable convective guidance. Second, as the forecasters observed common features in repeated predictions, they became more confident using these particular forecast features.
The 0900 UTC, 2-km RAMS predictions were designed to enhance the detailed sea breeze forecasts required by the yachting venue. Local buoy observations of sea surface temperature were used in the RAMS model initialization. Special point wind forecasts were generated at half-hour increments for the two yachting event locations. The operational yachting forecasters noted that the timing, penetration, and direction of the sea breeze were well forecast by RAMS. A common theme expressed by all the forecasters was that the RAMS forecasts by themselves were generally good, but the predictions were most useful when viewed in combination with all other available guidance.
FSL researchers and staff at the Olympic Weather Support Office conceived the idea to "manufacture" observations to take advantage of the advanced technology. To accomplish this, LAPS analyses were interpolated to the appropriate locations (hence "interpolated observations," or "interobs"). LAPS variables (such as temperature, relative humidity, and clouds) were combined with other data (such as radar, lightning, and nearby surface observations) to produce the final interob. The set of interobs was transmitted on the Olympics telecommunications network, Info 96, via a dedicated phone line. As well as providing weather information at each venue, the interobs were also used for real-time quality control of the mesonetwork data, since their parent LAPS analyses combine data from the NWS Automated Surface Observing System (ASOS) and other high-quality instruments.
LAPS provided a good starting point for creating the interobs, with access to almost 60 Aviation Routine Weather Reports (METARs), 50 Olympic mesonet stations, a profiler, and WSR-88D Doppler radar data. The data ingest component of LAPS, run on an HP 755 workstation, took data directly from the Olympic Weather Support System surface observation database. For each LAPS cycle, the data were quality checked, then analyses were created of the standard meteorological variables (such as wind and temperature) and derived quantities (such as parcel buoyancy and heat index). Once the LAPS analyses were available, selected variables were interpolated to the desired venue locations using a bi-cubic spline technique. The basic variables were temperature, dew point, relative humidity, wind, and cloud amount converted to the standard cloud categories (clear, scattered, broken, and overcast). These data were then transferred to the OWSS observation database. Interobs were available from LAPS every 15 minutes.
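The venue interpolation step can be illustrated with a simple bicubic scheme. This sketch uses Catmull-Rom cubics, a common bicubic variant; the operational spline implementation may have differed, and all names are illustrative.

```python
def _cubic(p0, p1, p2, p3, t):
    # Catmull-Rom cubic interpolation between p1 and p2 (0 <= t <= 1).
    return p1 + 0.5 * t * (p2 - p0 + t * (2 * p0 - 5 * p1 + 4 * p2 - p3
                                          + t * (3 * (p1 - p2) + p3 - p0)))

def bicubic(grid, x, y):
    """Interpolate a 2-D analysis grid at fractional grid point (x, y).

    grid is indexed grid[j][i]; valid for 1 <= x < nx-2, 1 <= y < ny-2
    so that the 4x4 stencil stays inside the domain.
    """
    i, j = int(x), int(y)
    tx, ty = x - i, y - j
    # Interpolate along x in each of the four stencil rows, then along y.
    rows = [_cubic(grid[j + k][i - 1], grid[j + k][i],
                   grid[j + k][i + 1], grid[j + k][i + 2], tx)
            for k in (-1, 0, 1, 2)]
    return _cubic(rows[0], rows[1], rows[2], rows[3], ty)
```

A venue's latitude and longitude would first be mapped to fractional grid coordinates, then each selected LAPS variable interpolated this way to build the interob.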
Other data were included to provide values for variables that LAPS did not analyze, or for the few times that LAPS analyses were not available; these sources were METAR reports, the SCIT algorithm, and NLDN lightning data. METAR observations provided cloud, current weather, and precipitation information in addition to temperature, moisture, pressure, and winds. The Olympic mesonet stations all reported temperature, winds, and moisture, and most sites also had a rain gauge. Quality control on the surface stations was generally left to the LAPS software. WSR-88D radar data were incorporated into the interobs using output from the National Severe Storms Laboratory's Storm Cell Identification and Tracking (SCIT) algorithm, which defined storm cells as having at least two consecutive elevation scans with a 10-km2 area of reflectivity of 30 dBZ or greater. For each identified storm cell, the 40-dBZ echo was approximated by a circle. If a venue location was within this circle, rain was reported at the venue, providing clouds were also being reported by satellite or a neighboring METAR site. National Lightning Detection Network (NLDN) data were incorporated into the surface observation database by counting the number of strikes within 10 statute miles of each venue. The NLDN records cloud-to-ground strikes with a location accuracy of 5 to 10 km in the Southeast U.S., and a detection efficiency of 60-70%. If more than one strike occurred within 10 miles of a venue, "thunder in the vicinity" was included in the interob. If a simultaneous precipitation measurement was made, then a thunderstorm was reported.
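The radar and lightning rules above amount to a small decision procedure. A sketch, with illustrative data structures (all distances assumed to be in the same units, statute miles here):

```python
import math

THUNDER_RADIUS = 10.0  # statute miles, per the NLDN rule above

def venue_weather(venue_xy, cells, strikes, clouds_reported, precip_measured):
    """Derive current-weather entries for one venue.

    cells   -- list of (center_xy, radius) circles approximating each
               SCIT cell's 40-dBZ echo
    strikes -- list of (x, y) NLDN cloud-to-ground strike positions
    """
    wx = []
    # Rain: venue inside a 40-dBZ circle, with clouds confirmed by
    # satellite or a neighboring METAR site.
    raining = any(math.dist(venue_xy, c) <= r for c, r in cells)
    if raining and clouds_reported:
        wx.append("rain")
    # Thunder: more than one strike within 10 statute miles; upgrade to
    # a thunderstorm if precipitation was simultaneously measured.
    n_strikes = sum(1 for s in strikes
                    if math.dist(venue_xy, s) <= THUNDER_RADIUS)
    if n_strikes > 1:
        wx.append("thunderstorm" if precip_measured
                  else "thunder in the vicinity")
    return wx
```

The same structure makes the later limitation obvious: with only the 40-dBZ circles available, light rain never trips the first test.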
All available information was combined to create the interobs. The data needed to be carefully integrated to make sure that the products were internally consistent and meteorologically correct. First, any information not provided by LAPS was collected from a preselected list of neighboring METAR and mesonet observations, or lacking those observations, previous interobs. Once all the necessary variables were filled, consistency checks were performed. For example, dew point and relative humidity were tied to the temperature, and wind gusts were tied to wind speed to prevent gusts lower than the speed.
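The consistency checks mentioned above can be sketched simply; the field names are illustrative, not the operational OWSS schema.

```python
# Sketch of interob consistency checks: clamp fields so the final
# product cannot be physically self-contradictory.
def enforce_consistency(ob):
    """Make the fields of one interob mutually consistent, in place."""
    # Dew point can never exceed the air temperature.
    if ob["dewpoint"] > ob["temperature"]:
        ob["dewpoint"] = ob["temperature"]
    # A gust lower than the sustained wind speed is meaningless; drop it.
    if ob.get("gust") is not None and ob["gust"] < ob["wind_speed"]:
        ob["gust"] = None
    return ob
```

Checks like these run after the gap-filling step, so values pulled from neighboring sites or previous interobs are reconciled with the LAPS-derived fields before transmission.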
The most difficult field to determine was the current weather, especially rain and thunderstorms. Even though most of the mesonet sites had a tipping-bucket rain gauge, there were instances (usually in light rain situations) where the accuracy of precipitation data was questionable. If it was known to be raining at the site, we needed to report rain whether or not it was reported by the rain gauge, so the radar data and neighboring observations were used. If SCIT determined rain at a venue site, that would override the tipping bucket report. If a nearby METAR also reported rain, then a guess at the precipitation intensity at the venue would be made. This seemed to work well at identifying rain before the mesonet rain gauges could report precipitation. However, light rain events were not well handled due to the high 40 dBZ radar threshold. Another current weather condition of concern was fog. The interobs only reported fog when the relative humidity was greater than 99%, and a nearby METAR reported visibilities less than one mile. Obviously, this limited fog reports in the interobs to only the most serious cases.
The interobs worked well and proved to be a useful way to provide weather information to the Atlanta Committee for the Olympic Games. Although probably of limited usefulness in data-rich areas, this method can be valuable in areas without a nearby observation. Care should be taken in trying to use interobs to represent meso-gamma-scale phenomena; for example, to determine localized minimum temperatures during radiational cooling events. Since interobs originate with an analysis, they will always have the smallest-scale signals dampened out. Nevertheless, interobs were shown to be a practical concept during the Olympics. With additional efforts to overcome the weaknesses described here, representative interpolated observations may prove useful for more standard applications. Consider, for example, a town within a tight frontal gradient located between two sensors. Use of an interob would result in a better, more representative temperature for that town as opposed to assigning it a temperature from one or the other official sensors. Such a practice might offend data purists, but the real-world customers would likely be more satisfied.
Locally produced weather analyses and predictions greatly reduce the amount of required communications, from both a data collection standpoint and a model output dissemination standpoint. Local data sources, which may not be available to the NCEP central facility in a timely fashion, can be incorporated into local analyses and prediction. The volume of model output continues to grow exponentially in combination with expanding computer hardware capabilities. Communication of model output over long distances to another computer platform is not necessary when the model runs locally on the same network of computers that controls the operational meteorological workstation. This also eliminates the problem of degrading the frequency and resolution of the model output that frequently occurs when disseminating model data from the NCEP central facility to a local office. Hence, the whole flow of data (from collection into the local analyses, to model visualization) occurs in one location in a timely fashion.
Finally, we want to emphasize that the locally produced local-domain numerical weather prediction effort is not intended to replace any guidance that is available from the NCEP central modeling facility. The local-domain forecasting support is designed to provide an additional mesoscale forecast tool to the suite of products already available on the meteorological workstation. The experiences at Peachtree City and Savannah have successfully demonstrated this synergy.
Editor's Note: A complete list of references is available in a paper by John S. Snook, Zaphiris Christidis, James Edwards, and John A. McGinley titled "Forecast Results from the Local-Domain Mesoscale Model Supporting the 1996 Summer Olympic Games," to be presented by John Snook at the AMS annual meeting next February in Long Beach.
(John Snook, Peter A. Stamus, and James Edwards are scientists in the Local Analysis and Prediction Branch (Forecast Research Division), headed by John A. McGinley. More information on this topic is available on the Internet at http://www.fsl.noaa.gov. )