Fiscal Year 1995

FSL in Review

Facility Division


Peter A. Mandics, Chief

Objectives

The Facility Division (FD) manages the computers, communications networks, and associated peripherals that FSL staff use to accomplish their research and systems-development mission. The FSL Central Facility comprises 51 computers ranging from micros and minis to a supercomputer-class Intel Paragon Massively Parallel Processor (MPP), mass-storage devices, data-ingest interfaces, local-area networks, external communications links, and display devices. An additional 556 Internet Protocol (IP)-capable hosts serve the other five FSL divisions and the International Program. This hardware and its associated software facilitate the development, testing, and evaluation of advanced weather information systems and new forecasting techniques.

The division designs, develops, upgrades, administers, operates, and maintains the FSL Central Computer Facility. For the past 16 years, the facility has undergone continual enhancements and upgrades in response to changing and expanding FSL project requirements and new advances in computer and communications technology. In addition, FD lends technical support to other federal agencies and research laboratories in weather radar development, telecommunications, and meteorological data handling.

The facility acquires and stores a large variety of conventional (operational) and advanced (experimental) meteorological observations in real time. The ingested data encompass almost all available meteorological observations in the Front Range of Colorado and much of the available data in the entire United States. Some data are also received from Canada, Mexico, and the Pacific Ocean. These data range from hourly surface observations to advanced automated aircraft, wind profiler, satellite, and Doppler radar measurements. The Central Facility computer systems are used to analyze and process these data into meteorological products in real time, store the results, and make the data and products available to researchers, systems developers, and forecasters. The resultant meteorological products cover a broad range, from simple plots of surface observations to meteorological analyses and model prognoses generated by sophisticated mesoscale computer models.


Accomplishments

Computer Facility
FSL Network
Networked Information Management client-Based User Service
Data Acquisition
Project and Research Support

Projections


Computer Facility

The Central Facility Open Systems transition was completed early in the fiscal year when the proprietary Digital Equipment Corporation (DEC) VAXcluster was decommissioned. Eighteen DEC MicroVAX processors remained to support legacy data-acquisition and display system software. Preparations were made to reimplement some of these systems on Open Systems platforms and to decommission others, such as the FSL Mesonet and its VAX-based interface, DEC Pathworks PC connectivity software, and the Denver AWIPS Risk Reduction and Requirements Evaluation (DARE) workstation. By the end of the year, most of the 280 FSL computers were Open Systems-compliant Reduced Instruction Set Computer (RISC)-based platforms.

The FSL Mass Storage System (MSS) acceptance testing and hardware integration were completed. The MSS comprises an 864-gigabyte (GB) capacity Metrum RSS-48b VHS tape robot, an 84-GB rapid-access disk farm, a 1.4-terabyte (TB) UniTree data-management software license, and an SGI Challenge L controller processor. Figure 15 shows the MSS hardware installed in the Central Facility main computer room. The MSS user requirements document was completed. FD staff also completed the first version of the Data Storage and Retrieval System (DSRS) functional design document, which describes the means for accessing the MSS, and presented it to FSL staff. Two main MSS use categories were identified for storing large volumes of data: data and products generated by FSL researchers and developers, and data created by the Central Facility Networked Information Management client-Based User Service (NIMBUS) and the Intel Paragon MPP. FD staff initiated discussions with FSL management concerning the establishment and management of MSS usage quotas. Before DSRS became operational, an interim data-saving facility was developed to store NIMBUS data on 8-mm tapes. Both ASCII and University Corporation for Atmospheric Research (UCAR) Unidata Program Center-developed network Common Data Form (netCDF) file formats were used for storage. A system to retrieve selected files from these tapes was also developed and placed in operation.
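
The interim tape-save scheme can be illustrated with a short Python sketch: write each batch of files to the next position on a no-rewind tape device with tar, and record every file in a plain-text index so it can later be located by tape label and archive number. The device path, index location, and record layout are assumptions for illustration, not the actual DSRS design.

    # Minimal sketch of an interim tape-save/retrieve scheme
    # (hypothetical paths and device names; not the actual DSRS code).
    import os
    import subprocess

    TAPE_DEV = "/dev/nrst0"                    # assumed no-rewind tape device
    INDEX = "/usr/local/dsrs/tape_index.txt"   # hypothetical index file

    def next_archive_no(tape_label):
        """Next sequential tar-archive number on the given tape."""
        last = -1
        if os.path.exists(INDEX):
            with open(INDEX) as f:
                for line in f:
                    label, no, _ = line.split(None, 2)
                    if label == tape_label:
                        last = max(last, int(no))
        return last + 1

    def save_to_tape(tape_label, files):
        """Write one tar archive at the current tape position and record
        every stored file against the tape label and archive number."""
        archive_no = next_archive_no(tape_label)
        subprocess.run(["tar", "cf", TAPE_DEV] + list(files), check=True)
        with open(INDEX, "a") as f:
            for path in files:
                f.write("%s %d %s\n" % (tape_label, archive_no, path))

    def locate(path):
        """Return (tape_label, archive_no) for a stored file, or None."""
        with open(INDEX) as f:
            for line in f:
                label, no, stored = line.split(None, 2)
                if stored.strip() == path:
                    return label, int(no)
        return None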

The Facility Information and Control System (FICS) Phase 1 design was completed. FICS makes extensive use of World Wide Web (WWW) technology to enable FD computer operators, systems administrators, network engineers, and software development staff to monitor, control, troubleshoot, and correct Central Facility systems, application processes, and associated networks. A prototype FICS was completed in time to demonstrate the new technology and to monitor key data sources for the WFO-Advanced workstation test exercises.
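
The monitoring idea behind FICS can be sketched in a few lines: periodically probe key services and render the results as a Web page. The Python fragment below is a minimal illustration under assumed host names and ports, not the actual FICS implementation.

    # Minimal sketch of a Web-based status monitor in the spirit of FICS
    # (hypothetical host names and ports; not the actual FICS code).
    import socket
    import time

    # Services to poll: (description, host, TCP port) -- assumptions only.
    CHECKS = [
        ("NIMBUS core (Ariel)", "ariel.fsl.noaa.gov", 22),
        ("MSS controller",      "mss.fsl.noaa.gov",   22),
        ("LDM distribution",    "ldm.fsl.noaa.gov",   388),
    ]

    def probe(host, port, timeout=5.0):
        """Return True if a TCP connection to host:port succeeds."""
        try:
            with socket.create_connection((host, port), timeout):
                return True
        except OSError:
            return False

    def status_page():
        """Render a simple HTML table of up/down results."""
        rows = "".join(
            "<tr><td>%s</td><td>%s</td></tr>\n"
            % (name, "UP" if probe(host, port) else "DOWN")
            for name, host, port in CHECKS)
        return ("<html><body><h1>Facility Status (%s)</h1>"
                "<table border=1>%s</table></body></html>"
                % (time.ctime(), rows))

    if __name__ == "__main__":
        print(status_page())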

FD staff set up a WWW home page for FSL and assisted other divisions in setting up their home pages. The Web proved to be a very effective mechanism for familiarizing the Internet community with FSL's work and accomplishments, and for showing sample FSL meteorological data and products. Staff recommended establishing the FSL World Wide Web Working Group and participated in its activities to develop home page guidelines and standards for the laboratory.

FSL systems administrators supported over a dozen varieties of the UNIX operating system, including DEC ULTRIX, HP Domain/OS and HP-UX, IBM AIX, Intel OSF/1, SCO UNIX, SGI IRIX, and Sun Solaris and SunOS. DEC VMS, Apple Mac System 7, and Microsoft DOS, Windows, and Windows NT were also supported. These operating systems and commercial applications software were periodically upgraded as new versions became available from vendors. A number of utility, productivity, and tool-type software packages were installed on FSL servers and made available for laboratory-wide use.

Computer system and network security continued to be a major concern for FSL. Several attempted break-ins were logged, but no known damage occurred. At a seminar, FD systems administration and network management staff provided guidance to FSL users on safe computing practices and presented helpful techniques for protecting system resources. Other topics included FSL's strategy for preventing intrusion from the outside and individual user responsibilities. Also, printed materials covering security and case studies were distributed at the seminar. The Security Plan for the Forecast Systems Laboratory was updated, revised, and submitted to management.



Figure 15. FSL Mass Storage System.

An Uninterruptible Power Supply (UPS) with a motor-generator set was installed at Research Laboratory No. 3 (the primary location of FSL) to improve electrical power reliability for the Central Facility. For short-duration power outages not exceeding 10 minutes, the UPS provides continuous power to computers and communications equipment located in the main FSL computer room.

FD staff chaired the FSL Technical Steering Committee (FTSC), which reviews the technical merit of laboratory equipment requests, generates procurement recommendations, and submits them to FSL management for approval. Equipment within the purview of FTSC includes workstations, servers, storage devices, and communications equipment. FD staff members represented FSL on several NOAA committees: the ERL Technical Committee for Computing Resources, the OAR High-Performance Computing and Communications (HPCC) Computing Panel, and the DOC Boulder Laboratories Network Working Group. Division staff also participated on committees that developed the FSL Software Policy and the FSL Data Policy, which provide guidance for distributing FSL software and data outside the laboratory.

A paper on effective UNIX computer systems management was presented in January 1995 at the 75th American Meteorological Society (AMS) Annual Meeting in Dallas, Texas. Three papers on Doppler weather radar calibration, clutter processing, and the development of a data analyzer were prepared for presentation at the AMS 27th Conference on Radar Meteorology in Vail, Colorado, in October 1995.


FSL Network

In response to steadily increasing FSL network user requirements, the FD network management staff continued to upgrade and expand the FSL network and its connection to the outside world via the Internet. In addition to the 262 UNIX and 18 VAX/VMS computers, there were 120 PCs and 27 Macintoshes on the FSL network.

The FSL main 100 megabits per second (Mbps) Fiber Distributed Data Interface (FDDI) ring was upgraded with the installation of an Optical Data System (ODS) Model 1095s concentrator to achieve greater reliability and better maintainability. Main-ring routing was upgraded with the implementation of the Open Shortest Path First (OSPF) routing protocol for faster, more automated routing convergence and better failover response time.

The Aviation, Systems Development, Forecast Research, and Facility Divisions were all upgraded with Alantec 7000 routers, which provide better connectivity, throughput, and multiprotocol network access. Transparent bridging groups were enabled to provide multiprotocol communications among hosts regardless of their physical or network location within the laboratory. A major milestone in the FSL multiprotocol backbone implementation occurred when the FSL Director's office was connected through the main FSL Cisco routers. This marked a significant network upgrade in that most network traffic is now handled by specialized packet-switching computers called routers; only two FSL division subnets remain served by UNIX host gateways.

Because the National Science Foundation (NSF)-sponsored NSFnet was discontinued during the year, FSL had to obtain access to the Internet through a commercial Internet Service Provider (ISP). A 1.5-Mbps T1 link was implemented to the SprintLink ISP. However, this link proved insufficiently reliable, so preparations were made to switch the FSL Internet connection to MCI.

The Central Facility dial-in service was upgraded with the installation and integration of a state-of-the-art Xyplex 9000 switching hub containing terminal servers and an FDDI router connected to the main FSL ring. The hub provides continuing support for interactive logins, and also handles Serial Line Internet Protocol (SLIP) and Point-to-Point Protocol (PPP) communications through high-speed V.34 28.8-kbps modems that enable off-site users to access the FSL network as fully routed Internet Protocol (IP) nodes. To better serve FSL dial-in users, several dial-in software packages were selected, integrated, and prepared for automated installation. Additional software was created for efficient dial-in account and configuration administration.

FSL network security was upgraded by configuring filters at external access points. The Kerberos security application was implemented to automate individual account logins for the FSL dial-in service. Because this security mechanism is scalable, it will be useful in resolving future FSL security issues. The results of some of the FSL network upgrades are represented in the diagram below (Figure 16), which shows the FSL network's external connections. The commercial ISPs are shown, along with several Environmental Research Laboratories and other NOAA agencies, other DOC agencies, the University of Colorado, and NCAR.



Legend
Commercial Internet Service Providers (dark gray outlined ovals, top),
Environmental Research Laboratories and other NOAA agencies (gray ovals),
other DOC agencies (black ovals),
University of Colorado (CU) and NCAR (light gray outlined ovals).
Figure 16. FSL participation in the Boulder Area Multiagency Network.

Additional network-related upgrades were also performed.



Figure 17. The FSL Network Monitoring Center.

The US West Communications-sponsored Boulder Area Asynchronous Transfer Mode (ATM) Phase 1 network technology trial was completed. Participation in this exercise provided a testing ground for future FSL internal routing upgrades and valuable experience for the future implementation of ATM communications technology in the laboratory.


Networked Information Management client-Based User Service

The Open Systems-based NIMBUS has successfully taken over the great majority of meteorological data acquisition, processing, management, storage, and distribution functions from the decommissioned FSL DEC VAXcluster. With the expansion of the processing workload, the number of production NIMBUS computers increased from 6 to 11. To offload the core NIMBUS processors (Ariel and Caliban), an HP-755 computer was configured to perform data distribution outside the laboratory using the UCAR Unidata Program Center-developed Local Data Manager (LDM) software. Additional memory, disk drives, and new processors were added to Ariel and Caliban to improve performance. Nine new hosts were installed to receive and process data from the new Geostationary Operational Environmental Satellite (GOES-8), to run managed processes such as the Local Analysis and Prediction System (LAPS), to provide a measure of redundancy, and to serve as NIMBUS development platforms and a software repository. Figure 18 shows a block diagram of the major NIMBUS components. The total volume of data and products handled by NIMBUS has increased to 7 GB per day. Most of these data were available to FSL users in the self-describing, machine-independent, portable netCDF file format on the NIMBUS /public directory. The NIMBUS distribution service was enhanced and made more reliable by changing the base interprocess communication from User Datagram Protocol/Internet Protocol (UDP/IP) to Transmission Control Protocol/Internet Protocol (TCP/IP).
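
The reliability gain from the protocol change comes from TCP being connection-oriented and acknowledged, whereas a UDP datagram is fire-and-forget. The following Python sketch contrasts the two styles for a single product-availability notice; the host name, port, and message format are hypothetical, not the NIMBUS protocol.

    # Sketch of the reliability difference behind the UDP-to-TCP change:
    # a TCP delivery is connection-oriented and acknowledged, so the
    # sender learns about failures instead of silently losing datagrams.
    import socket

    def notify_tcp(host, port, message):
        """Deliver one product-availability notice over TCP; raises on
        failure, so the caller can retry or log the outage."""
        with socket.create_connection((host, port), timeout=10) as s:
            s.sendall(message.encode("ascii") + b"\n")

    def notify_udp(host, port, message):
        """The old-style UDP send: no connection, no delivery guarantee;
        a lost datagram is simply never seen by the receiver."""
        s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        try:
            s.sendto(message.encode("ascii") + b"\n", (host, port))
        finally:
            s.close()

    # Hypothetical use:
    # notify_tcp("nimbus.fsl.noaa.gov", 5000,
    #            "/public/netcdf/sao/9509221800.nc")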



Figure 18. Networked Information Management client-Based User Service (NIMBUS).

The first version of the Point Data Access and Management (PDAM) library for data translators and netCDF makers was completed. Point data included Surface Aviation Observations (SAOs), Radiosonde Observations (RAOBs), Wind Profiler Demonstration Network (WPDN) data, buoy-based measurements, World Meteorological Organization (WMO) Aviation Routine Weather Reports (METAR), Pilot Reports (PIREPs), automated aircraft reports relayed through the Aeronautical Radio Incorporated (ARINC) Communications Addressing and Reporting System (ACARS), Airman's Meteorological Advisories (AIRMETs), and FSL Mesonet data. WMO Binary Universal Form (BUFR) decoders, encoders, translators, and netCDF file-storage software were generated for several of these point data types, and existing translators were upgraded and improved. The BUFR translator was modified to run under the SunOS and HP-UX UNIX operating systems. An improved netCDF station table was created by merging the FSL ASCII station table with the Air Weather Service Master Station Catalog (AWSMSC) received from the NWS National Climatic Data Center (NCDC). Staff began designing the tools for updating the station table.
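
The station-table merge can be pictured with a short sketch: read both whitespace-delimited tables into dictionaries keyed on the station identifier, let one table's entries take precedence, and write the union back out. The file names, column layout, and precedence rule (FSL entries overriding AWSMSC entries) are illustrative assumptions, not the actual merge procedure.

    # Sketch of merging two whitespace-delimited station tables into one,
    # keyed on the station identifier in the first column (assumptions
    # only; not the actual FSL station-table software).
    def load_table(path):
        """Read 'ID LAT LON ELEV NAME...' lines into a dict keyed by ID."""
        table = {}
        with open(path) as f:
            for line in f:
                fields = line.split()
                if fields and not line.startswith("#"):
                    table[fields[0]] = line.rstrip("\n")
        return table

    def merge_tables(primary_path, secondary_path, out_path):
        merged = load_table(secondary_path)       # AWSMSC baseline
        merged.update(load_table(primary_path))   # FSL entries override
        with open(out_path, "w") as out:
            for station_id in sorted(merged):
                out.write(merged[station_id] + "\n")

    # merge_tables("fsl_stations.txt", "awsmsc.txt", "merged_stations.txt")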

NIMBUS significantly improves the ability to handle large volumes of gridded data. NetCDF makers were written for the Rapid Update Cycle (RUC) 60-km surface analysis, RUC Conterminous United States (CONUS) 60-km hybrid-b and CONUS 211 upper-air grids, and Eta model grid outputs. To facilitate efficient distribution of FSL-generated grid products, WMO GRIdded Binary (GRIB) encoders were created for icing potential derived from the RUC model and turbulence data derived from the Eta model. Division staff maintained and upgraded the GRIB decoders and encoders and made them widely available to other projects.
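
In essence, a netCDF maker wraps a decoded field in the self-describing netCDF format. The sketch below, using the modern netCDF4 Python binding purely for illustration, writes a single 2-D grid with minimal coordinate metadata; the variable name, units, and grid parameters are placeholders rather than the actual RUC or Eta product definitions.

    # Sketch of what a "netCDF maker" does: store a gridded field in the
    # self-describing netCDF format (illustrative names and grid values).
    import numpy as np
    from netCDF4 import Dataset

    def write_grid(path, field, lat0, lon0, dxy_km):
        """Store a 2-D field with minimal coordinate metadata."""
        ny, nx = field.shape
        with Dataset(path, "w") as nc:
            nc.createDimension("y", ny)
            nc.createDimension("x", nx)
            var = nc.createVariable("temperature", "f4", ("y", "x"))
            var.units = "K"
            # Global attributes describing the (hypothetical) projection.
            nc.grid_origin_lat = lat0
            nc.grid_origin_lon = lon0
            nc.grid_spacing_km = dxy_km
            var[:] = field

    # Hypothetical 60-km CONUS-like grid filled with a constant field:
    # write_grid("ruc_sfc.nc", np.full((69, 81), 288.0, dtype="f4"),
    #            22.8, -120.5, 60.0)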

The Process Manager (PM) was upgraded to better manage the automated scheduling, execution, and monitoring of meteorological applications software in the NIMBUS environment. The PM notification mechanism was enhanced to provide product notification to downstream processes and to send text messages via LDM alerting users of netCDF file availability on /public. PM processing was optimized by filtering out unnecessary notifications at the NIMBUS Information Transport Cloud level. The following 16 new managed processes (created by staff in FD and other divisions) were integrated into the PM: RUC, WSI Corporation NOWrad, FSL Mesonet, and Eta netCDF makers; icing and turbulence GRIB encoders; Nested Grid Model (NGM) ingest; icing Aviation Impact Variable (AIV) and contours generation; icing contour BUFR encoder; and turbulence AIV, CONUS NOWrad, Mesoscale Analysis and Prediction System (MAPS) surface and upper-air, and LAPS product generation.
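
The PM's dependency-driven behavior can be sketched as a small table of managed processes, each declaring the products it needs and the product it makes; when a product arrives, any process whose inputs are complete is run and its output is announced downstream. The process and product names below are hypothetical, and a real manager would also handle scheduling, retries, and monitoring.

    # Minimal sketch of dependency-driven process management in the
    # spirit of the NIMBUS PM (hypothetical process and product names).
    PROCESSES = {
        "sao_netcdf_maker": {"needs": {"sao_raw"},        "makes": "sao_nc"},
        "laps_analysis":    {"needs": {"sao_nc", "goes"}, "makes": "laps"},
    }

    available = set()

    def product_arrived(product):
        """Record a new product, then run every process whose input set
        is now satisfied, cascading each output to downstream processes."""
        available.add(product)
        for name, spec in PROCESSES.items():
            if spec["needs"] <= available and spec["makes"] not in available:
                print("running", name)
                product_arrived(spec["makes"])   # notify downstream

    product_arrived("sao_raw")   # runs sao_netcdf_maker
    product_arrived("goes")      # then laps_analysis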


Data Acquisition

To ensure that data-acquisition and related NIMBUS development activities were carried out in accordance with FSL user requirements, a task-prioritization mechanism was established. Every quarter, the FSL Director, Division Chiefs, and other interested parties met with lead developers in FD to review and discuss the status of all data-acquisition tasks. The meetings produced a prioritized task list that FD used in scheduling development activities to optimally meet FSL user and project data requirements.

In support of the WFO-Advanced workstation exercises, FD staff implemented software to transport narrowband Front Range Weather Surveillance Radar (WSR-88D) Doppler radar products from the Denver WSFO to WFO-Advanced through NIMBUS. Also, decoder software for the WSR-88D messages was provided to WFO-Advanced developers. Plans were developed for ingesting Front Range WSR-88D wideband Doppler radar data into the FSL Central Facility. A capability was designed and implemented to receive national radar data from the new WSI High-Capacity Satellite Network (HCSN) Data-Acquisition System (DAS). The HCSN DAS replaced the WSI Express NOWrad Plus service and provided an expanded suite of national WSR-88D-based radar products.

Staff began testing the next-generation GOES-8 direct-readout groundstation developed by FD. The new GOES-8 ingest and processing software performed well on the Open Systems platforms. As predicted by a refined satellite downlink analysis, the existing GOES-7 satellite antenna provided adequate signal margin for receiving GOES-8 data. Testing began on the operational software that FD developed to convert GOES Variable (GVAR)-formatted data into netCDF.

Software was generated and upgraded to store National Environmental Satellite, Data, and Information Service (NESDIS) Information Stream Project for AWIPS and NOAAport (ISPAN) satellite images in netCDF files on NIMBUS /public. In support of WFO-Advanced development and exercises, software was created to convert the ISPAN netCDF files into NESDIS Remapped GOES format and transmit them to the WFO-Advanced system.

FSL's local lightning detection system was decommissioned once reliable data became available through the GDS National Lightning Detection Network. FD staff continued to operate other legacy data-acquisition subsystems that remained on MicroVAX processors, including interfaces for the FSL mesonet, Automation of Field Operations and Services (AFOS), GOES-7, and the Television and Infrared Observation Satellite (TIROS). To improve performance, staff relocated the TIROS receiver antenna and rebuilt the interface software.


Project and Research Support

FD continued to distribute real-time and retrospective data and products to all internal FSL projects and numerous outside users. In spite of the VAXcluster decommissioning, FD successfully continued to support legacy DARE and PC workstations. For example, the FSL-developed AWIPS preprototype DARE workstation at the Denver WSFO continued to receive advanced meteorological products and satellite data backup. FSL data and products were sent to the Department of Transportation (DOT) Volpe Transportation Systems Center (TSC) Advanced Traffic Management System (ATMS) in Cambridge, Massachusetts. Similarly, meteorological data and products continued to be provided to the Aviation Gridded Forecast System (AGFS) functional prototype PC workstation deployed at the Longmont, Colorado, Federal Aviation Administration (FAA) Air Route Traffic Control Center (ARTCC) Center Weather Service Unit (CWSU). FSL mesonet data were sent to the Department of Energy (DOE) Rocky Flats Operations.

Other outside users of FSL data and products included the UCAR Cooperative Program for Operational Meteorology, Education, and Training (COMET) and the Unidata Program Center. FSL data and products continued to be sent to researchers at the National Center for Atmospheric Research (NCAR) Research Applications Program (RAP) and Mesoscale and Microscale Meteorology (MMM) Division, Colorado State University (CSU), MIT Lincoln Laboratory, and Environmental Research Laboratories, including the Environmental Technology Laboratory (ETL) and the National Severe Storms Laboratory (NSSL). Data transmitted to these entities included FSL mesonet data, Doppler radar data, upper-air soundings, SAOs, profiler data, satellite imagery and soundings, and MAPS and LAPS grids. Fulfilling their role as liaison, FD operations staff provided outside users with information on system status, modifications, and upgrades.

FD staff led a joint WSR-88D Radar Data Quality Optimization effort with the NCAR Atmospheric Technology Division (ATD). This project is aimed at supporting the NWS Operational Support Facility (OSF) in Norman, Oklahoma, in improving the quality of WSR-88D data. The following principal engineering support tasks were accomplished:

Developed a WSR-88D radar network calibration strategy to guide and expedite consistent radar reflectivity calibration.
Performed tests of sun-source radar calibration and consulted with OSF software engineers on the implementation of test software.
Completed the final report on Suncheck software and procedures in anticipation of the OSF Build 10 WSR-88D system upgrade.
Designed, implemented, and tested a prototype Archive 1 Data Analyzer (A1DA) consisting of a real-time measurement server and a client analysis workstation.
Delivered and installed the first A1DA at the OSF testbed radar and trained OSF engineers in March 1995.
Developed a three-stage anomalous propagation clutter mitigation strategy for the WSR-88D network and delivered the final report and recommendations to the OSF in May 1995.

Staff made major contributions to the design, development, and operation of the Global Learning and Observations to Benefit the Environment (GLOBE) Program server employing World Wide Web technology. The server supported over 2000 kindergarten through grade 12 schools in the United States, and additional schools in 28 foreign countries.

FD staff supported the Hungarian Meteorological Service (HMS) in adapting and tailoring NIMBUS technology to meet their operational data processing requirements. Assistance was also provided to HMS staff in modifying FSL RAOB translators to decode upper-air observations (TEMPs) transmitted to HMS via the Global Telecommunication System (GTS). This collaborative effort enabled HMS to receive, process, and display many thousands of European Centre for Medium-Range Weather Forecasts (ECMWF) model grids and TEMPs, significantly improving their operational weather forecasting capabilities. Work also continued with the Taiwan Central Weather Bureau in assessing requirements and future development needs for their weather service modernization effort.

In addition to performing a large variety of network cabling, equipment setup, PC support, and mesonet maintenance tasks, FD electronics technicians provided support for FSL technical reviews, conferences, workshops, and presentations. Also, FD technicians built electronic circuitry for the Shear-Directed Balloon Experiment.


Projections

Computer Facility

Upgrading of the Central Facility with Open Systems computers will continue. In addition to procuring new systems, most remaining VAX processors will be eliminated as a result of the decommissioning of the DARE workstations, termination of the FSL Mesonet, and replacement of the ISPAN data feeds with NOAAport.

Two DECstation 5000 servers currently provide critical mail hub and hostname/address look-up functions for the fsl.noaa.gov domain by using the Domain Name System (DNS). To accommodate the rapidly increasing volume of FSL mail and other network traffic, the DECstations will be replaced with more powerful Sun UltraServer Model 170 systems. The new systems will not only increase performance by about an order of magnitude, but will also be able to run the latest versions of DNS server and sendmail software that, in turn, will increase facility security.

Following the successful implementation of a computer time-synchronization process using the Network Time Protocol (NTP) on FD hosts during this fiscal year, FSL division systems administrators will expand this synchronization technique throughout the entire laboratory. The NTP infrastructure incorporates numerous points of redundancy, eliminating single points of failure and increasing reliability.
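
At its core, an NTP exchange is simple enough to sketch: a client sends a 48-byte request (version 3, client mode) to UDP port 123 and reads the server's transmit timestamp, which counts seconds from 1 January 1900. The Python fragment below is a bare SNTP query for illustration; the server name is a placeholder, and a real NTP daemon additionally polls several servers, filters the samples, and disciplines the local clock.

    # Sketch of the time query underlying NTP synchronization: a bare
    # SNTP client request and reply (placeholder server name).
    import socket
    import struct
    import time

    NTP_EPOCH_OFFSET = 2208988800  # seconds between 1900 and the UNIX epoch

    def sntp_time(server, timeout=5.0):
        """Return the server's clock as a UNIX timestamp."""
        packet = b"\x1b" + 47 * b"\0"      # LI=0, VN=3, Mode=3 (client)
        s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        s.settimeout(timeout)
        try:
            s.sendto(packet, (server, 123))
            reply, _ = s.recvfrom(48)
        finally:
            s.close()
        # Transmit-timestamp seconds field sits at byte offset 40.
        seconds = struct.unpack("!I", reply[40:44])[0]
        return seconds - NTP_EPOCH_OFFSET

    # Local clock offset against a (hypothetical) server:
    # offset = sntp_time("ntp.example.gov") - time.time()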

The FSL Mass Storage System will be upgraded with an Odetics Digital Linear Tape (DLT) tape robot. The current 1.4 TB capacity UniTree data-management software license will be increased to 10 TB. Phase I of the MSS Data Storage and Retrieval System will be implemented to store and provide access to FSL user- and MPP-generated data. Later in the year, Phase II DSRS development will address the storage and retrieval of NIMBUS data and will provide a Graphical User Interface (GUI) method of accessing the system. FD staff will work with the FSL data-user committee to determine the types of NIMBUS data to be stored on the MSS.

Phase I of the Facility Information and Control System, employing a Netscape WWW server and an Oracle database, will be completed. Initial emphasis will be placed on providing FD operators with the information needed to monitor all real-time Central Facility systems and to efficiently resolve problems that may occur. Hypertext links will be provided in FICS to allow convenient access to system documentation. Eventually, the scope of FICS will be extended to incorporate monitoring of additional Central Facility systems, including the MSS DSRS.

FSL Network

Building on the structured wiring system and network routing equipment implemented during the last two years, emphasis will be placed on developing automated, integrated network monitoring and operational capabilities during the coming year. The planned network activities are highlighted below.

Network Monitoring

Network Upgrades

Networked Information Management client-Based User Service

The rapidly increasing data-access load on the NIMBUS /public directory tree will necessitate the procurement of a dedicated Network File System (NFS) server. The NFS server will be integrated into the Central Facility to offload the core NIMBUS processors and to provide rapid access to large volumes of real-time NIMBUS data for all FSL researchers and developers. Additional computers will be integrated into NIMBUS to handle the increasing computational load due to new managed processes. The processing load will be redistributed throughout NIMBUS to optimize overall system performance.

The Process Manager will be upgraded by adding multiple notification support, dependency reset options, and better handling of runaway processes. The following new managed processes will be integrated into NIMBUS: CONUS satellite product generation, parallel versions of MAPS, 40-km MAPS analysis, CONUS MAPS surface, and MAPS verification. An interface will be developed between the Intel Paragon MPP and NIMBUS to efficiently transmit MPP model outputs, such as 40-km MAPS grids, to NIMBUS and make them available to FSL and outside users. The interface will include NIMBUS Information Transport (IT) client support and GRIB-encoding software on the MPP.

Implementation of the netCDF station table software will be completed, including table-maintenance tools and integration into existing NIMBUS translators and netCDF storage software. The BUFR, SAO, RAOB, and Mesonet translators will be upgraded to use the new Point Data Access and Management (PDAM) infrastructure. In support of the US conversion from the SAO to the METAR surface-observation standard, appropriate METAR translator and netCDF storage software will be developed; both SAO and METAR formats will be supported concurrently until the conversion is completed. The AIRMET translator and netCDF storage software will be enhanced to decode and store SIGMET data.
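
The flavor of the decoding a METAR translator performs can be shown with a short sketch covering three representative groups (wind, temperature/dewpoint, and altimeter). A production translator decodes many more groups and remark sections; the report string and output field names below are invented for illustration.

    # Sketch of METAR group decoding for three representative groups
    # (illustrative only; not the planned FSL translator).
    import re

    WIND = re.compile(r"\b(\d{3})(\d{2,3})(?:G(\d{2,3}))?KT\b")
    TEMP = re.compile(r"\b(M?\d{2})/(M?\d{2})\b")
    ALTIM = re.compile(r"\bA(\d{4})\b")

    def _celsius(token):
        """'M02' -> -2 degrees C; '15' -> 15 degrees C."""
        return -int(token[1:]) if token.startswith("M") else int(token)

    def decode_metar(report):
        out = {}
        m = WIND.search(report)
        if m:
            out["wind_dir_deg"] = int(m.group(1))
            out["wind_speed_kt"] = int(m.group(2))
            if m.group(3):
                out["gust_kt"] = int(m.group(3))
        m = TEMP.search(report)
        if m:
            out["temp_c"] = _celsius(m.group(1))
            out["dewpoint_c"] = _celsius(m.group(2))
        m = ALTIM.search(report)
        if m:
            out["altimeter_inhg"] = int(m.group(1)) / 100.0
        return out

    print(decode_metar("KDEN 121753Z 33012G20KT 10SM FEW060 M02/M10 A3012"))
    # -> wind 330 at 12 kt gusting 20, -2/-10 C, altimeter 30.12 inHg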

Data Acquisition

A capability will be developed to ingest, store, process, and distribute WSR-88D wideband Doppler radar data within NIMBUS. Also, software will be created to transfer WSR-88D narrowband products from the WFO-Advanced workstation to NIMBUS. The NOWrad netCDF software will be upgraded to create netCDF files from the WSI HCSN DAS-provided WSR-88D National Mosaic radar products. These will include Composite Reflectivity, Layer Composite Reflectivity, Echo Tops, and Vertically Integrated Liquid. Software will be developed to create netCDF files from WSR-88D Velocity-Azimuth Display (VAD) wind profile data received from the National Centers for Environmental Prediction (NCEP, formerly NMC).

Development of the GOES-8 satellite data-acquisition subsystem will be completed, and the subsystem will be integrated into NIMBUS and placed in operation. Software will be implemented to store GOES data in netCDF files, monitor system performance, and track data quality. Options will be analyzed for standard image-distribution formats for locally created radar and satellite image data; after a recommended FSL standard has been submitted to and approved by management, it will be implemented. A system will be developed to distribute NESDIS satellite and other data received through the AWIPS Satellite Broadcast Network (SBN) NOAAport receive system.

The following additional model output data will be acquired from the NCEP Information Center (NIC) server and made available in netCDF file format on the NIMBUS /public directory: RUC upper-air (CONUS 211/25 mb), RUC surface and remapped surface (CONUS 211/25 mb), and remapped Eta (CONUS 212/25 mb). The ACARS automated aircraft report receiving system will be moved from a VAX-based system to NIMBUS. This effort will include the creation of new translators for ascent, descent, and level-flight data and netCDF storage software.

Project Support

FD will continue to provide Aviation Impact Variable data to the FSL AIV editor deployed at the NWS Aviation Weather Center (AWC) in Kansas City, Missouri. Similarly, FD will continue to transmit data and advanced meteorological products in support of the WFO-Advanced workstation development, test exercises, and demonstrations; the Denver WSFO DARE workstation; DOT TSC ATMS in Cambridge, Massachusetts; the FAA ARTCC CWSU in Longmont, Colorado; and DOE Rocky Flats Operations. Staff will also support researchers through the UCAR Unidata-developed Internet Data Distribution (IDD) system, and provide real-time and retrospective data to several ERLs and NCAR.

FD staff will help design a GLOBE Hub for participating countries to collect information from their schools. FD will continue to operate and provide network support for the WWW-based GLOBE data server located at FSL. This data server collects, processes, and manages environmental data from several thousand schools, and provides students and research scientists access to these data.

In continuation of the joint effort with NCAR ATD to support the NWS OSF, FD staff will lead a team to advise the OSF in implementing sun-source calibration aids for Build 10, help the OSF to incorporate anomalous propagation clutter processing upgrades targeted for Build 11, and design the first phase of an instrumentation subsystem for the OSF testbed radar. The effort will be directed at improving allocation of the WSR-88D clutter processing to minimize reflectivity losses while maintaining coverage of all precipitation and clear-air echoes.

FD staff will participate in reviewing contractor proposals for the second NOAA Scientific Workstation (SCIWOK) Contract.
