Fiscal Year 1995

FSL in Review

Model Verification and Production Assistance Branch



Patricia A. Miller, Chief

Objectives

The Model Verification and Production Assistance Branch assures the reliability, accuracy, and future production of the Aviation Gridded Forecast System (AGFS) through software configuration management, model verification, and high performance computing. These activities are organized and accomplished within three work groups: the Software Configuration Management Section, the Model Verification Section, and the High Performance Computing Section.


Accomplishments

Software Configuration Management Section (SCMS)
Model Verification Section (MVS)
High Performance Computing Section (HPCS)

Software Configuration Management Section (SCMS)

The Software Configuration Management Section (SCMS) provides software configuration management and production integration testing for models and assimilation systems producing high-resolution analysis and forecast grids at FSL and at the NWS National Centers for Environmental Prediction (NCEP). The models and assimilation systems run on a variety of platforms, including UNIX workstations, parallel vector processors, and massively parallel processors. The staff are responsible for documentation, implementation, tracking, and reliability of production codes running at FSL and NCEP.

Software management procedures are designed to satisfy two goals:

Accomplishments

Software management procedures were implemented for the Mesoscale Analysis and Prediction System (MAPS) at FSL and the Rapid Update Cycle (RUC) at NCEP. (MAPS, the development system for the RUC, runs quasi-operationally at FSL, and the RUC runs operationally at NCEP.) The procedures were designed last year by staff from the SCMS and the Forecast Research and Facility divisions, in collaboration with staff from the Environmental Modeling Center and Central Operations Facility at NCEP. (Figure 26 shows a diagram of software configuration management procedures for the MAPS/RUC systems.) A written plan detailing the procedures was signed by FSL and NCEP managers, and implementation began during this fiscal year. Major accomplishments included:

Using the implemented code and procedures, SCMS staff also worked with staff in the Forecast Research Division to release three new versions of the MAPS/RUC system. These releases yielded significant improvements: better precipitation forecasts, correction of a known dry bias in the moisture analyses, improved treatment of sea-surface and lake-surface temperatures, and correction of biases in previous MAPS/RUC versions that had caused the overdevelopment of strengthening surface cyclones and low-level jets. Before each release, verification statistics confirmed that the new version was more accurate than its predecessor, and production integration testing ensured its reliability. All releases included documentation updates as well as software upgrades.



Figure 26. Diagram of software configuration management procedures for the MAPS/RUC systems.

Projections

During the next year, the SCMS plans to:


Model Verification Section (MVS)

Objectives

The MVS conducts statistical evaluations of the accuracy of high-resolution analyses and forecasts of state-of-the-atmosphere variables (SAVs), such as temperature and winds, and aviation-impact variables (AIVs), such as icing, turbulence, and visibility. SAVs and AIVs produced by the Local Analysis and Prediction System (LAPS), the NCEP Eta model, and the MAPS/RUC upper-air and surface systems are evaluated in retrospective and real-time studies.
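As a simple illustration of the statistics behind such evaluations, the following sketch computes the bias (mean forecast-minus-observed error) and root-mean-square error of a set of forecast-observation pairs. The sketch is written in Python with illustrative names and invented example values; it is not the MVS's actual verification code, which is not shown in this report.

    import math

    def bias_and_rmse(forecasts, observations):
        # Bias is the mean forecast-minus-observed error; RMSE is the
        # root-mean-square of the same errors.
        errors = [f - o for f, o in zip(forecasts, observations)]
        bias = sum(errors) / len(errors)
        rmse = math.sqrt(sum(e * e for e in errors) / len(errors))
        return bias, rmse

    # Hypothetical example: 500-mb temperature forecasts versus
    # rawinsonde observations (degrees C).
    fcst = [-12.1, -10.4, -11.8, -13.0]
    obsv = [-11.5, -10.9, -12.2, -12.6]
    print(bias_and_rmse(fcst, obsv))  # approximately (-0.025, 0.482)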

Accomplishments

An analysis of the results from the Aviation Division's second evaluation (E2) of SAVs and AIVs was completed. E2 was a retrospective exercise conducted using an independent verification dataset obtained from the Storm-scale Operational and Research Meteorology Fronts Experiment Systems Test (STORM-FEST), 1 February through 15 March 1992. For each of the gridded systems above, 15 SAVs and 12 AIVs were evaluated (Table 2). In general, the SAVs were the most accurate grids evaluated, particularly analyses and forecasts of heights, temperatures, and winds. Analyses and forecasts of moisture were less accurate, which often adversely affected the derivation of moisture-based AIVs such as icing and clouds. In many cases, however, significant improvement was shown over verification results obtained during the Aviation Division's first evaluation of SAVs and AIVs. Diagnosis of cloud amounts between scattered and broken, for example, improved from the first evaluation, as did precipitation amounts. Figure 27 shows the distribution of observed clouds and LAPS analyses of clouds from the E2 exercise.



Figure 27. Distribution of observed total clouds versus LAPS analysis of the STORM-FEST domain total clouds.

The MVS also began the development of a Real-Time Verification System (RTVS) designed for the long-term statistical verification of SAV and AIV grids. RTVS capabilities include real-time data ingest, quality control, and grid-to-observation interpolation. As data become available in real time, the RTVS accesses observations and model grids, interpolates the model data to the observation locations, and saves the information necessary for statistical computations. Saved data can then be accessed through the RTVS Interactive Verification and Visualization System (IVVS).
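The grid-to-observation interpolation step can be illustrated with a short sketch. The version below bilinearly interpolates a two-dimensional model field, on a regular latitude-longitude grid, to a set of observation locations. It is written in Python with invented names and omits the edge and quality-control handling a production system requires; it is not the actual RTVS code.

    import numpy as np

    def interp_grid_to_obs(grid, lon0, lat0, dlon, dlat, obs_lons, obs_lats):
        # grid is indexed [j, i] = [lat, lon]; (lon0, lat0) locates grid
        # point [0, 0]; dlon and dlat are the grid spacings in degrees.
        values = []
        for lon, lat in zip(obs_lons, obs_lats):
            # Fractional grid coordinates of the observation.
            x = (lon - lon0) / dlon
            y = (lat - lat0) / dlat
            i, j = int(x), int(y)
            fx, fy = x - i, y - j
            # Weighted average of the four surrounding grid points.
            values.append(grid[j, i] * (1 - fx) * (1 - fy)
                          + grid[j, i + 1] * fx * (1 - fy)
                          + grid[j + 1, i] * (1 - fx) * fy
                          + grid[j + 1, i + 1] * fx * fy)
        return np.array(values)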

Real-time verification of model-based, gridded forecasts of aircraft icing potential was implemented as part of the RTVS. This accomplishment is significant because the accuracy of numerical algorithms designed to diagnose and predict in-flight icing potential can now be studied over the long term. Long-term studies yield larger samples, and therefore more stable statistics, as well as more versatile verification analyses: the RTVS, for example, allows users to select the verification periods and statistical analyses desired, including daily, weekly, and monthly accumulations of statistical parameters. Figure 28 shows the IVVS and a time-series plot of daily Probability of Detection (POD) statistics for analyses of icing potential generated by the MAPS/RUC system.
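POD itself is computed from a two-by-two contingency table of paired yes/no forecasts and observations: POD = hits / (hits + misses). A minimal sketch, again in Python with illustrative names rather than actual RTVS code:

    def probability_of_detection(forecast_yes, observed_yes):
        # hits: event both forecast and observed; misses: event observed
        # but not forecast. POD = hits / (hits + misses).
        hits = sum(1 for f, o in zip(forecast_yes, observed_yes) if f and o)
        misses = sum(1 for f, o in zip(forecast_yes, observed_yes) if not f and o)
        return hits / (hits + misses) if hits + misses else float('nan')

    # Hypothetical example: three icing reports, two detected by the grids.
    fcst = [True, False, True, True]
    obsv = [True, True, True, False]
    print(probability_of_detection(fcst, obsv))  # 0.666...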

Specific accomplishments included:

Projections

The MVS plans to:



Figure 28. The RTVS Interactive Verification and Visualization System and a time-series plot of daily Probability of Detection (POD) statistics for analyses of icing potential generated by the MAPS/RUC system from 9 October to 20 November 1995.


High Performance Computing Section (HPCS)

Objectives

The HPCS ensures continued progress toward higher-resolution analyses and forecasts by porting FSL and NCEP models to massively parallel processors (MPPs), the supercomputers of the future. MPPs offer a less costly alternative to traditional vector supercomputers for the fast, efficient production of numerical weather prediction grids.

To achieve these ports, HPCS scientists have developed a high-level library, the Scalable Modeling System (SMS), which significantly enhances the ability to develop parallel finite-difference weather models and provides source code portability across a large subset of existing MPPs (Intel Paragon, IBM SP2), UNIX workstations (IBM RS/6000, SGI Challenge, Sun multiprocessors), and parallel vector processors (Cray Y-MP/C90).

SMS has two components: the Nearest Neighbor Tool (NNT) and the Scalable Runtime System (SRS). NNT is used to code regular grid-based weather prediction models for portable execution on parallel, sequential, and vector computers with minimal impact on the original code; SRS is a support system that provides scalable I/O and other system services.
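The communication pattern that NNT automates can be sketched briefly. In a finite-difference model decomposed across processors, each processor holds its own subdomain plus "halo" (ghost) rows copied from its nearest neighbors, and the halos must be refreshed before each stencil computation. The sketch below shows that exchange using Python and mpi4py; the names and data layout are invented for illustration and are not the SMS/NNT interface, which is bound to Fortran 77.

    import numpy as np
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()

    # Each processor owns a horizontal strip of the grid: 10 interior rows
    # plus ghost rows 0 and -1 holding copies of its neighbors' edge rows.
    local = np.zeros((12, 64))
    up = rank - 1 if rank > 0 else MPI.PROC_NULL
    down = rank + 1 if rank < size - 1 else MPI.PROC_NULL

    def exchange_halos(a):
        # Send the top interior row up while receiving the lower neighbor's
        # top interior row into the bottom ghost row, then the reverse.
        # PROC_NULL makes the exchange a no-op at the domain boundaries.
        comm.Sendrecv(a[1].copy(), dest=up, recvbuf=a[-1], source=down)
        comm.Sendrecv(a[-2].copy(), dest=down, recvbuf=a[0], source=up)

    exchange_halos(local)
    # A 5-point finite-difference stencil can now be applied to the
    # interior rows without further communication.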

Users of SMS gain three fundamental benefits: ease of programming, portability, and high performance. SMS provides a set of routines with Fortran 77 bindings to decompose data arrays, execute I/O and communication operations, and simplify the parallelization of sequential loops and the optimization of their execution. Source codes written using SMS are fully portable, as are the data files. Because SMS is implemented as a layered set of routines, machine-dependent optimizations are possible yet invisible to the user. More information on SMS is shown below (Figure 29).



Figure 29. A Web page describing the components and ports of the Scalable Modeling System.

Accomplishments

HPCS scientists continued to develop and enhance the functionality and portability of SMS. In addition, they served on the Message Passing Interface Forum, a group of academic and industry leaders in MPP technology developing a message-passing standard to provide the low-level communication software required by operating systems and by tools like SMS. Finally, the HPCS continued parallelizing FSL and NCEP model codes. Specific accomplishments included:

Performance results for 60-km, 40-km, and 20-km versions of the MAPS/RUC model running on the Intel Paragon and on an 8-processor SGI Challenge are shown in Tables 3-7. (The 40-km and 20-km runs used the 60-km model sized for the higher resolutions.)

Projections

The HPCS plans to:


Tables 3-7. Performance results for 60-km, 40-km, and 20-km SMS versions of the MAPS/RUC forecast model on a 208-processor Intel Paragon and an 8-processor SGI Challenge. The "60-40" model is the 60-km model sized at 40 km; the "60-20" model is the 60-km model sized at 20 km. NPROCS is the number of processors used in the test; Total is the total run time (in seconds). Speedup is calculated as Ts/Tn, where Tn is the total time for N processors and Ts is the total time for the smallest number of processors used. Efficiency is calculated as (Speedup x S)/N, where N is the number of processors and S is the smallest number of processors used. Times shown include I/O, program load, and program exit times.
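The Speedup and Efficiency definitions in the caption can be applied directly, as in the following sketch (Python; the timings below are invented for illustration and are not the values in Tables 3-7):

    def print_scaling(nprocs, total_times):
        # nprocs and total_times are parallel lists sorted by processor
        # count; the smallest run is the baseline (S processors, Ts seconds).
        s, ts = nprocs[0], total_times[0]
        for n, tn in zip(nprocs, total_times):
            speedup = ts / tn                # Speedup = Ts / Tn
            efficiency = speedup * s / n     # Efficiency = (Speedup x S) / N
            print(f"N={n:3d}  Total={tn:6.1f}s  "
                  f"Speedup={speedup:5.2f}  Efficiency={efficiency:4.2f}")

    # Invented timings showing near-linear scaling from 8 to 64 processors.
    print_scaling([8, 16, 32, 64], [400.0, 210.0, 115.0, 70.0])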


