Objectives

The Facility Division (FD) manages the computers, communications and data networks, and associated peripherals that FSL staff use to accomplish their research and systems-development mission. The FSL Central Facility comprises 50 Sun Microsystems, Inc., Silicon Graphics, Inc. (SGI), and Hewlett-Packard (HP) computers ranging from workstations and servers to the supercomputer-class Intel Paragon XPS/15 and SGI Origin 2000 Massively Parallel Processors (MPPs). The facility also contains a variety of meteorological data-ingest interfaces, storage devices, local- and wide-area networks, communications links to external networks, and display devices. Over 500 Internet Protocol (IP)-capable hosts and network devices serve the other six FSL divisions, including approximately 200 UNIX hosts, 85 PCs and Macintoshes, 30 X-terminals, and 120 network routers, hubs, and switches. This hardware and its associated software enable FSL staff to design, develop, test, evaluate, and transfer to operations advanced weather information systems and new forecasting techniques.
The division designs, develops, upgrades, administers, operates, and maintains the FSL Central Computer Facility. For the past 19 years, the facility has undergone continual enhancements and upgrades in response to changing and expanding FSL project requirements and new advances in computer and communications technology. In addition, FD lends technical support and expertise to other federal agencies and research laboratories in meteorological data acquisition, processing, storage, and telecommunications.
Accomplishments

The Central Facility acquires and stores a large variety of conventional
(operational) and advanced (experimental) meteorological observations in real
time. The ingested data encompass almost all available meteorological
observations in the Front Range of Colorado and much of the available data in
the entire United States. Data are also received from Canada, Mexico, and the
Pacific Ocean. The richness of this meteorological database is illustrated by
such diverse datasets as advanced automated aircraft reports, wind profiler
data, satellite imagery, Global Positioning System (GPS) moisture retrievals,
Doppler radar measurements, and hourly surface observations. The Central Facility computer
systems are used to analyze and process these data into meteorological
products in real time, store the results, and make the data and products
available to researchers, systems developers, and forecasters. The resultant
meteorological products cover a broad range of complexity, from simple plots
of surface observations to meteorological analyses and model prognoses
generated by sophisticated mesoscale computer models.
The capabilities of the FSL Mass Store System (MSS) used to store meteorological data, products, and user files were significantly enhanced during the year. The original MSS comprised an SGI Challenge L server with a 58-gigabyte (GB) disk cache for interim data storage and a 2.6-terabyte (TB) Quantum ATL tape robot with three Digital Linear Tape (DLT) 2000 tape drives. The MSS software includes the FSL-developed Data Storage and Retrieval System (DSRS) user interface and the UniTree, Inc. Hierarchical Storage Management (HSM) system software. To upgrade the MSS storage capacity, the UniTree software license was increased from 10 TB to 25 TB. By the end of the year, over 10 TB of user and ingested data were stored on the MSS. To provide faster service for FSL users and accommodate the larger data volumes, the SGI Challenge MSS server was upgraded to eight MIPS R4400 processors. A new StorageTek Model 9740 Timberwolf tape library (containing six DLT 7000 tape drives and 494 tape slots, with an ultimate storage capacity of 20 TB) and a stand-alone DLT 7000 tape drive were added to the MSS. The DLT 7000 tapes have a 35-GB native storage capacity, compared with the 10-GB capacity of the DLT 2000 tapes used in the original ATL tape robot. The new tape library (Figure 24) provided faster data access for FSL users needing to store and retrieve large volumes of data. The Systems Administration staff continued to work on numerous UniTree software problems and shortcomings, and in coordination with the vendor developed solutions or work-arounds for a number of them. During the second half of the year, the MSS stored 25 GB of Networked Information Management client-Based User Service (NIMBUS)-processed and NOAAPORT-provided data every day.
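The hierarchical storage scheme behind the MSS can be pictured as a disk cache in front of a tape library, with files migrating to tape as the cache fills. The following is a minimal Python sketch of such a migration policy, not FSL's DSRS or UniTree code; the paths, water marks, and single-directory layout are illustrative assumptions.

```python
import os
import shutil
import time

CACHE_DIR = "/mss/cache"   # hypothetical disk-cache path
TAPE_DIR = "/mss/tape"     # stand-in for the tape library
HIGH_WATER = 0.90          # begin migrating above 90% cache use
LOW_WATER = 0.70           # stop migrating below 70%

def cache_usage():
    """Fraction of the cache file system currently in use."""
    st = os.statvfs(CACHE_DIR)
    return 1.0 - st.f_bavail / st.f_blocks

def migrate_oldest():
    """Move least-recently-accessed cache files to tape until the
    low-water mark is reached, mimicking an HSM migration policy."""
    paths = sorted((os.path.join(CACHE_DIR, f) for f in os.listdir(CACHE_DIR)),
                   key=lambda p: os.stat(p).st_atime)   # oldest access first
    for path in paths:
        if cache_usage() <= LOW_WATER:
            break
        shutil.move(path, os.path.join(TAPE_DIR, os.path.basename(path)))

if __name__ == "__main__":
    while True:            # a real HSM daemon runs continuously
        if cache_usage() >= HIGH_WATER:
            migrate_oldest()
        time.sleep(60)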
Two notable enhancements to the Real-Time NIMBUS Data Saving (RTNDS) system became feasible as a result of the MSS hardware upgrades. All RTNDS-stored data on the MSS were made accessible via the MSS DSRS utilities, and a capability was added for saving RUC-2 hybrid B and surface GRIdded Binary (GRIB) files.
Figure 24. The new high-capacity StorageTek tape library (left), with the door open showing the robot mechanism and tape cartridges (right).
Good progress was made on the FSL Data Repository (FDR) project during the past year. The main objective of the FDR project is to develop a prototype system for long-term archival of all meteorological data acquired by FSL on the MSS. An important component of FDR involves the acquisition, organization, and storage of the associated metadata, such as station and sensor information. The initial FDR effort focused on storing the operational National Weather Service (NWS) data received through the NOAAPORT satellite broadcast system, along with selected local radar data. The preliminary FDR design addressed three major aspects of the system: data storage, retrieval, and retrospective product generation (Figure 25). Since the first version of the FDR data storage system was implemented in January 1998, it has been saving all available NOAAPORT and KFTG WSR-88D narrowband (Level 3) radar data on the MSS. In anticipation of the development work needed for data retrieval and retrospective product generation from FDR-stored data, staff from the Facility Division and Systems Development Division manually generated an AWIPS-workstation-displayable case study for the October 1997 Denver-Boulder blizzard event. To automate the case-generation capabilities of the FDR, staff initiated a collaborative development project with the UCAR Cooperative Program for Operational Meteorology, Education, and Training (COMET). Using FSL-supplied software and FDR-stored data, COMET staff started the development of an automated case-generation capability.
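At its core, case generation amounts to selecting every archived file whose timestamp falls within an event window. The sketch below illustrates the idea under simple assumptions (a hypothetical /fdr directory tree and file names beginning with a YYYYMMDD_HHMM timestamp); it is not the actual FDR or COMET software.

```python
from datetime import datetime
from pathlib import Path

ARCHIVE_ROOT = Path("/fdr")   # hypothetical tree: /fdr/<dataset>/YYYYMMDD_HHMM*

def case_files(datasets, start, end):
    """Collect every archived file whose name-encoded timestamp
    (YYYYMMDD_HHMM) falls inside the case-study window."""
    selected = []
    for ds in datasets:
        for f in (ARCHIVE_ROOT / ds).glob("*"):
            stamp = datetime.strptime(f.name[:13], "%Y%m%d_%H%M")
            if start <= stamp <= end:
                selected.append(f)
    return selected

# Illustrative window for a late-October 1997 event:
files = case_files(["noaaport", "kftg_level3"],
                   datetime(1997, 10, 24), datetime(1997, 10, 27))
```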
The centralized FSL computer backup system (consisting of a Sun Enterprise 3000 server, a Breece Hill DLT robot, and Veritas NetBackup software) was placed into operation. The system, designated the Media Center, allows system administrators to schedule automatic, unattended backups for client computers across the entire FSL network. In addition to scheduled backups, manual backup operations are also possible. During the past year, daily incremental backups from approximately 200 FSL computers amounted to about 150 GB on average. The weekly and monthly full backups totaled 4 TB and 2 TB, and were retained for 1 month and 3 months, respectively. Because the Media Center NetBackup software created a detailed inventory of the backup tapes, Operations staff were able to restore needed files quickly and efficiently. The Media Center was also used to retrieve and distribute data on a variety of media, including 8-mm Exabyte, 4-mm DAT, and DLT tape formats.
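The retention rules above translate directly into a tape-expiration check. The following Python sketch encodes them; the incremental retention period and the function names are assumptions for illustration, not the NetBackup configuration itself.

```python
from datetime import datetime, timedelta

# Retention from the report: weekly fulls kept 1 month, monthly fulls
# kept 3 months; the incremental period is an assumption.
RETENTION = {
    "weekly_full": timedelta(days=30),
    "monthly_full": timedelta(days=90),
    "incremental": timedelta(days=14),   # assumed, not stated in the report
}

def expired(kind, written, now=None):
    """True when a backup image is past its retention period and its
    tape may return to the scratch pool."""
    now = now or datetime.now()
    return now - written > RETENTION[kind]

# A weekly full written five weeks ago is eligible for reuse:
print(expired("weekly_full", datetime.now() - timedelta(weeks=5)))   # True
```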
Figure 25. Schematic of the prototype FSL Data Repository (FDR).
The Central Facility SGI Origin 2000 computer server was upgraded to 32 processors, 5 GB of memory, 140 GB of on-line disk space, and an ATM OC-3 (155 Mbps) network interface. Experimental and operational model runs were made on the SGI server using a number of meteorological forecast models, including the 40-km resolution Mesoscale Analysis and Prediction System (MAPS), the Scalable Forecast Model (SFM), and the Quasi-Nonhydrostatic (QNH) model.
Staff continued enhancements of the real-time Central Facility monitoring system, called the Facility Information and Control System (FICS), employing Web and Intranet technologies. To handle the increasing and more extensive systems monitoring demands on FICS, the outdated Sun 670 MP server was replaced with a Sun Enterprise 450 server. It is expected that the new hardware and replacement of the Netscape Web server with the more capable Apache server will enable FICS to provide better monitoring capabilities and be extensible to support additional monitoring functions.
Specific FICS upgrades included:
Figure 26. Central Facility environmental monitor page on the Web-based Facility Information and Control System.
The FSL system administrators continued to support a large variety of UNIX operating systems, including HP-UX, IBM AIX, Intel OSF/1, SCO UNIX, SGI IRIX, Linux, and Sun Solaris and SunOS. DEC VMS, Apple Mac System 7, and Microsoft DOS, Windows 3.X, Windows 95/98, and Windows NT were also supported. These operating systems and the commercial application software used by FSL were periodically upgraded as new versions became available from vendors. Additional utility, productivity, and tool-type software packages were installed on FSL servers and made available for laboratory-wide use.
In anticipation of the limited computer space available after FSL moves to the new David Skaggs Research Center, FD staff started mounting computer equipment in racks to save floor space. Figure 27 shows some of the rack-mounted Central Facility equipment.
The FSL Network Team upgraded the FSL network backbone to the high-speed Asynchronous Transfer Mode (ATM) technology (Figure 28). The new switched (not shared) ATM network backbone provides an order-of-magnitude improvement over the previous shared 100 megabits per second (Mbps) Fiber Distributed Data Interface (FDDI) ring. The FSL FORE Systems ASX-1000 ATM backbone switch is currently capable of up to 10 gigabits per second (Gbps) aggregate bandwidth and provides switched connectivity at OC-3 (155 Mbps) and OC-12 (622 Mbps) speeds. The four FSL divisions with FORE Systems PowerHubs (Aviation, Facility, Forecast Research, and Systems Development) now connect directly to the new ATM backbone. The other three divisions connect to Cisco routers that are attached to the high-speed ATM backbone.
Figure 27. Rack-mounted Central Facility computer equipment, including NIMBUS processors, network monitor, and mail-relay servers.
The two main FSL Cisco routers that provide network connectivity to several divisions, some laboratory-wide computing resources, and FSL's Internet service provider have been upgraded from model 7000 to model 7507. This upgrade, consisting of faster backplanes and substantially faster Central Processing Units (CPUs), provided a major increase in overall network throughput. The upgrade also corrected several problems caused by the old routers not being able to handle the increased network traffic from the new ATM backbone.
The FSL Network Team implemented two major network connectivity upgrades in the Systems Development Division (SDD). The FORE Systems PowerHub chassis was expanded from five slots to ten, and 12 new Ethernet network segments were added. A FORE Systems ASX-200BX Workgroup ATM switch was installed to provide high-speed switched network connections for division resources, including the SDD PowerHub. In turn, the SDD ATM switch was connected to the FSL backbone via two load-balanced OC-3 (155 Mbps) links.
The FSL Network Team worked with the NOAA-Boulder network staff on a proposal submitted to and funded by the NOAA Office of High Performance Computing and Communications (HPCC). The objective was to test a new ATM protocol, Multi-Protocol Over ATM (MPOA), on FSL's FORE Systems and Cisco Systems network equipment. The MPOA protocol enables the establishment of shortcut routes over an ATM network, resulting in faster throughput for persistent connections. By the end of 1998, MPOA was successfully tested on FORE Systems equipment; Cisco equipment will be tested in early 1999 as it becomes available. MPOA will be widely used in the data network deployed at the new building.
Figure 28. Schematic of the FSL ATM backbone network.
The Internet connection failover capability that was established in Fiscal Year 1997 between the NOAA-Boulder and FSL networks was modified. The reconfiguration allowed the failover capability to continue to work following the NOAA-Boulder network's move to a higher-speed (6 Mbps fractional T-3) link with the UUNET Technologies, Inc. Internet Service Provider. FSL was able to switch over automatically to the UUNET connection in the event its Cable and Wireless (formerly MCI) Internet connection failed. FSL also continued to provide the same failover capability for the NOAA-Boulder network. The failover completes in a reasonably short time, usually three to five minutes; the delay reflects the time required for the routing change to propagate across the Internet.
FSL's remote dial-in access capability was enhanced by upgrading the 16 AT&T Paradyne 28.8 kbps analog modems to 33.6 kbps.
The deployment of the Litton Planning Research Corporation (PRC) Satellite Broadcast Network (SBN) NOAAPORT Receive System (NRS) was completed in the Central Facility. The NRS allowed the Central Facility to distribute the NWS NOAAPORT operational data stream throughout the laboratory without burdening the resources of the WFO-Advanced project. Through the use of the Unidata Local Data Manager (LDM) distribution mechanism, staff provided data from all three NOAAPORT channels [NWS Telecommunications Gateway (NWSTG), Geostationary Operational Environmental Satellite (GOES) East, and GOES West] to FSL projects and users. The NIMBUS system also switched from the decommissioned ISPAN data source to NOAAPORT.
Enhancements were made to FSL's GOES data-acquisition systems, as follows:
• The GLGS ingest and processing system was configured to receive GOES-10 data in support of the GOES Science and Operations Test. To enable the concurrent acquisition of data from GOES-8, -9, and -10 during the test period, a microwave transmission link was established to ingest data from the still-operational GOES-9 via the Space Environment Center's antenna.
• The GLGS was switched to acquire data from GOES-10, following the failure of the GOES-9 satellite. Appropriate satellite products were created and provided in netCDF file format on /public.
• Transmission of NESDIS [National Environmental Satellite, Data, and Information Service] GOES-derived winds and processed sounding files was initiated using the file transfer protocol, and the files were made available on /public.
• A data monitor for checking data quality was developed, along with other GLGS enhancements.
Improvements were made to FSL's aviation-related data systems, as follows:
• The NIMBUS BUFR [Binary Universal Form for the Representation of meteorological data] translator was enhanced to properly handle the new Meteorological Data Collection and Reporting System (MDCRS) data, which are encoded by ARINC under an NWS contract. The enhancement was necessary because the MDCRS format uses some advanced features of BUFR. NetCDF maker software also had to be developed to provide the MDCRS data on /public (a sketch of a generic netCDF maker follows this list). FSL received and translated approximately 30,000 MDCRS observations every day.
• A convective SIGnificant METeorological (SIGMET) information data translator and netCDF maker were developed and implemented in NIMBUS to support the Aviation Division verification projects. These data were then made available in netCDF format on /public. Additional enhancements were made to the nonconvective SIGMET translator and netCDF maker software for improved decoding and data availability.
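To illustrate what a netCDF maker does, the sketch below writes a minimal observation file using the modern netCDF4 Python module (FSL's 1998 makers were built on the netCDF C library); the variable names and units are illustrative assumptions, not the actual /public file conventions.

```python
from netCDF4 import Dataset
import numpy as np

def write_obs(path, times, temps):
    """Write a minimal observation file in the spirit of a NIMBUS
    netCDF maker: an unlimited record dimension plus one variable
    per decoded parameter."""
    nc = Dataset(path, "w")
    nc.createDimension("recNum", None)     # unlimited: one record per report
    t = nc.createVariable("observationTime", "f8", ("recNum",))
    t.units = "seconds since 1970-01-01 00:00:00 UTC"
    temp = nc.createVariable("temperature", "f4", ("recNum",))
    temp.units = "kelvin"
    n = len(times)
    t[0:n] = np.asarray(times, dtype="f8")
    temp[0:n] = np.asarray(temps, dtype="f4")
    nc.close()

# One hypothetical aircraft report:
write_obs("mdcrs_sample.nc", [874324800.0], [265.4])
```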
The following tasks were performed to provide additional gridded data to FSL:
• The transmission of RUC-2 surface CONUS211 grids from NCEP was initiated in support of the Aviation Division Verification Branch. These data also were provided in netCDF format on /public.
• The Regional Observation Cooperative (ROC)-domain Local Analysis and Prediction System (LAPS) grids were copied from the Forecast Research Division to /public and encoded into GRIdded Binary (GRIB) format.
Additional data-acquisition efforts included:
• The Global Positioning System (GPS) Surface Observing System (GSOS) netCDF files were transmitted from the Demonstration Division and provided on /public.
• The Local Data Acquisition and Dissemination (LDAD) mesonet and hydrological netCDF files were acquired from WFO-Advanced through LDM and made available on /public.
Networked Information Management client-Based User Service
NIMBUS is the Central Facility real-time, event-driven meteorological data processing and management system. It is implemented on 16 Sun, SGI, and HP computers in a multiplatform, Open-Systems environment (Figure 29). Additional processors support NIMBUS development, software testing, and integration. NIMBUS performed all Central Facility meteorological data acquisition, processing, management, storage, and distribution functions. It processed over 25 GB of conventional and advanced meteorological data daily.
During Fiscal Year 1998, the NIMBUS Process Manager that manages data-processing applications was enhanced to include upgraded configuration management capabilities and the addition of remote copy (rcp) capabilities to the postprocessing software.
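The Process Manager's event-driven role can be summarized as: when a dataset arrives, look up and run the processing steps registered for it. The sketch below is a minimal Python rendering of that pattern; the HANDLERS table and command names are hypothetical, not NIMBUS's actual configuration.

```python
import queue
import subprocess
import threading

# Hypothetical table mapping an arriving dataset to the translator and
# netCDF maker commands the Process Manager would launch for it.
HANDLERS = {
    "MDCRS": ["bufr_translate", "netcdf_make"],
    "SIGMET": ["sigmet_translate", "netcdf_make"],
}

events = queue.Queue()

def dispatcher():
    """Run each registered processing step as a data event arrives,
    in the spirit of the event-driven Process Manager."""
    while True:
        dataset, path = events.get()
        for step in HANDLERS.get(dataset, []):
            subprocess.run([step, path], check=False)   # hypothetical commands
        events.task_done()

threading.Thread(target=dispatcher, daemon=True).start()
events.put(("MDCRS", "/data/incoming/mdcrs_001.bufr"))
events.join()   # wait for the event to be processed
```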
The Central Facility provides data distribution services both within and outside of FSL. Division staff assisted the laboratory in upgrading the LDM clients to Version 5. They also added the following datasets to outside data distributions: NOAA Profiler Network 6-minute wind profile netCDF data files to UCAR Unidata Program, GOES extended sector imagery to CDC, United Airlines ACARS turbulence reports to the National Center for Atmospheric Research (NCAR) Research Applications Program (RAP), and United Parcel Service ACARS ascent, enroute, and descent reports to UCAR.
Some of the outdated NIMBUS processors, including several Sun 670MP servers, were replaced with up-to-date hardware. The configuration of NIMBUS software was substantially redistributed to take full advantage of the new hardware.
Investigation of the Common Object Request Broker Architecture (CORBA) technology for use in NIMBUS continued. The evaluation of the IONA Technologies Orbix CORBA product and the free TAO (The ACE ORB) Object Request Broker (ORB) showed that CORBA technology will be very useful for meeting the anticipated upgrade needs of NIMBUS. However, the currently available products are not mature enough to proceed with a full-scale CORBA development.
Figure 29. Diagram of the upgraded Central Facility NIMBUS.
Project, Research, and Laboratory Support
The Facility Division continued to distribute real-time and retrospective data
and products to all internal FSL projects and numerous outside groups and
users. External recipients include:
• NWS Aviation Weather Center in Kansas City.
• UCAR COMET and Unidata Program Center.
• NCAR RAP and Mesoscale and Microscale Meteorology Division.
In addition to the data mentioned above, the Facility Division provided other data and product sets to outside groups, including Doppler radar, upper-air soundings, Meteorological Aviation Reports (METARs), profiler, satellite imagery and soundings, and MAPS and LAPS grids. Operations staff served as liaison for outside users, providing them with information on system status, modifications, and upgrades. Figure 30 illustrates the NIMBUS-processed data and their distributions.
At approximately quarterly intervals, the Data Systems Group conducted the Central Facility task-prioritization meetings to ensure that FD development efforts responded to FSL's requirements. The FSL director, division chiefs, project leaders, and other interested parties were invited to review and discuss with the lead FD developers the status of all Central Facility tasks, including data acquisition, processing, storage, NIMBUS, and related facility development efforts. The main result of these meetings was a prioritized task list that was made available on the Web; this ensured that FD development activities were carried out in accordance with FSL management, project, and user requirements.
Division staff provided technical advice to FSL management on the optimal use of laboratory computing and network resources, and participated on the following committees and teams:
• Served on the FSL Technical Review Committee.
• Represented FSL on the ERL Technical Committee for Computing Resources (TCCR), was named as vice-chair of the TCCR, and served as chair of the TCCR High-Performance Computing Working Group.
• Served on the DOC Boulder Laboratories Network Working Group.
• Served as Core Team and Advisory Team members for selecting the recently funded FSL High-Performance Computing System.
Figure 30. Diagram of the NIMBUS-processed datasets and their distributions.
FD systems administrators provided extensive, ongoing support to other FSL divisions while these divisions were searching for replacement systems administrators. This effort allowed the divisions to accomplish their project tasks and stay on schedule.
The computer Operations staff supported the real-time Central Facility 16 hours a day, seven days a week. They used FICS to monitor the data-acquisition systems, NIMBUS, and its associated hardware and software. They also monitored the three on-site WFO-Advanced workstations for proper operation of the data ingest. The operators took corrective action when problems occurred, rebooted machines and/or restarted software as necessary, and referred unresolved problems to the appropriate systems administrators, network staff, or systems developers. The Operations staff had a significant role in shutting down and restoring the Central Facility during three major electrical power failures that occurred on 24 January, 21 June, and 17 July 1998.
The operators supported the FSL user community by answering facility-related questions, performing backups, restoring lost files and file systems, and providing data from the Mass Store or the FSL tape library. They performed 138 file, file-system, and data restores and 29 special user-requested backups, serviced 14 outside retrospective data requests, created 12 WFO-Advanced/AWIPS software tapes, and wrote six CD-ROMs.
A large number of staff (especially from Systems Administration, Networking, and Operations) was involved in developing preliminary plans for the FSL move. The major focus of this effort related to the configuration and placement of FSL computer equipment in the new building, and the design and setup of the new computer network. In preparation for the move, the operators disposed of over 9,000 obsolete VAX/VMS magnetic tapes that were no longer readable with the current UNIX systems. (Management and user approvals were obtained before proceeding with this task.)
FD electronics technicians performed numerous tasks associated with equipment setups, fiber and copper network cabling, PC support, and video teleconferences. The technicians also provided audio-visual and other necessary technical support for FSL technical reviews, conferences (such as the American Meteorological Society Annual Meeting), workshops, and other laboratory events.
Projections

The FSL Mass Store System (MSS) and associated Data Storage and Retrieval System (DSRS) will be maintained and enhanced. The MSS SGI control server operating system software will be upgraded to IRIX Version 6.5, and the UniTree hierarchical storage management software will be upgraded to Version 2.0. Staff will continue collaborative efforts with the vendor to work toward a resolution of the outstanding UniTree problems. DSRS will be further enhanced to provide FSL users easier access to the MSS.
Development of the FSL Data Repository (FDR) will continue. A requirements analysis and design for a flexible Oracle DBMS-based metadata information management system will be completed. The goal is to provide a comprehensive metadata database that will facilitate accurate and complete retrospective data processing. Staff will continue gathering metadata information for all data sources to be stored in the FDR.
Division staff will participate in a cooperative effort with UCAR COMET to automate the AWIPS case-study generation aspects of FDR. Also, staff will collaborate with the NESDIS National Climatic Data Center (NCDC) to develop a prototype NOAAPORT data archive system and to transfer this technology to NCDC.
Enhancements will be made to the on-line storage capacity of the FSL Auspex Network File System (NFS) server. A second cabinet, storage processor, and disk drives will be added to increase the total storage capacity to 200 GB. The upgrade will ultimately allow expansion of the NFS server storage capacity to 1.5 TB with additional disk drives. The Redundant Array of Independent Disks (RAID) Level 5 technology will be introduced to facilitate more efficient and reliable use of the NFS server disks, and to provide improved access to on-line NIMBUS and NOAAPORT data on /public.
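RAID 5 spreads parity across all drives in a set, so the usable capacity of N drives is (N - 1) drives' worth, and the array survives any single-drive failure. A quick illustration; the drive count and size below are hypothetical, chosen only for the arithmetic:

```python
def raid5_usable(drives, drive_gb):
    """RAID 5 usable capacity: one drive's worth of space holds the
    distributed parity, the rest holds data."""
    return (drives - 1) * drive_gb

# E.g., eight hypothetical 25-GB drives give 175 GB of usable space
# while tolerating any single-drive failure.
print(raid5_usable(8, 25))   # 175
```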
Development will begin on the FSL Hardware Assets Management System (HAMS), which will be based on an Oracle DBMS. HAMS will allow for the storage, maintenance, and retrieval of detailed records of each piece of FSL equipment and software. The system will contain vendor, warranty, and support contact information for each asset. Since it can be used for multiple levels of input, viewing, and searching, HAMS will facilitate the tracking of equipment moves, upgrades, and reconfigurations. It will provide management, technical support staff, and developers with vital statistics and attributes about FSL hardware and software, and also will provide accurate information to the FSL Office of Administration and Research for equipment and software maintenance. Platform-independent Web browsers, serving as the primary HAMS interface, will provide extensive query capabilities to satisfy a wide variety of day-to-day requests for asset information and maintenance. The initial phase of HAMS will be in place in time to document FSL hardware during the move to the new building.
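As a sketch of the kind of schema HAMS implies, the following uses Python's built-in sqlite3 module as a stand-in for the planned Oracle DBMS; the table layout and column names are assumptions for illustration only.

```python
import sqlite3

# Illustrative-only schema; production HAMS will use Oracle and a
# richer data model.
db = sqlite3.connect("hams_prototype.db")
db.execute("""
    CREATE TABLE IF NOT EXISTS asset (
        asset_id        INTEGER PRIMARY KEY,
        description     TEXT,
        vendor          TEXT,
        warranty_end    TEXT,
        support_contact TEXT,
        location        TEXT     -- updated as equipment moves
    )""")
db.execute(
    "INSERT INTO asset (description, vendor, location) VALUES (?, ?, ?)",
    ("SGI Origin 2000 server", "Silicon Graphics, Inc.", "Central Facility"))
db.commit()

# A Web interface would issue queries such as:
for row in db.execute("SELECT description, location FROM asset"):
    print(row)
```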
The continued development of the Media Center is planned in order to provide reliable backups of the increasing number of FSL computers. A capability will be added for routine backups of small systems, such as PCs and Macintoshes. Division-specific backup classes will be set up to facilitate improved backup tracking and management. Another Media Center upgrade will include duplication of monthly backup tapes and their off-site storage.
Working with the WFO-Advanced development staff, FD will deploy a WFO-Advanced Data Server in the Central Facility. The Data Server will ingest NOAAPORT and local radar data, and will make these data available in netCDF format on /public in real time. The Central Facility Data Server will offload the Data Servers in the Systems Development and Modernization divisions, and will provide the data to several FSL development projects in the Aviation, Forecast Research, and International divisions.
Staff will continue exploring the Common Object Request Broker Architecture (CORBA) technology for upgrading NIMBUS. The objective is to enhance the Central Facility data-management and processing capabilities by replacing and enhancing the core NIMBUS communications software, the Information Transport. After selecting and testing a suitable Object Request Broker software package, extensive analysis, design, and development will be performed to implement the new technology in the production NIMBUS. Similar efforts will be made to replace the current Data Access Management (DAM) software with object-oriented technology. An analysis and design will be performed to replace DAM with an object-oriented DAM (ODAM) that will provide homogeneous access to NIMBUS data. Also, the NIMBUS software will be systematically reviewed for Year 2000 compliance and, where necessary, modified to allow for a successful Year 2000 transition.
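A typical problem such a Year 2000 review finds is a two-digit year field; the standard fix is a pivot window. A minimal sketch follows; the pivot value of 70 is an assumed convention, not documented NIMBUS behavior.

```python
PIVOT = 70   # assumed window: 70-99 -> 1900s, 00-69 -> 2000s

def expand_year(yy):
    """Expand a two-digit year with a pivot window, a typical Year
    2000 remediation for legacy date fields."""
    return 1900 + yy if yy >= PIVOT else 2000 + yy

assert expand_year(98) == 1998
assert expand_year(1) == 2001
```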
The Facility Information and Control System (FICS) will be further enhanced by adding new capabilities such as the monitoring of Central Facility disk space and the WFO-Advanced Data Server. Information pages will be added for the new monitored items, and existing pages will be updated by the software developers to ensure quick and appropriate response to problems by the Operations staff.
Work will be initiated on a new version of the FICS software. The current system uses frames-based Web pages to display the wide range of needed monitoring information. However, occasionally FICS has encountered problems due to bugs and design flaws in the client Web browsers. A new version of the FICS software will be written in the Java language to provide a wider range of features on the client side. The server side of FICS also will be rewritten to provide improved scaling.
During the first quarter of Fiscal Year 1999, planning for the move will occupy most of the FSL Network Team's time. Prior to the move, they will establish simultaneous FSL network availability at both the new and current location. A cross-town DS-3 (45 Mbps) microwave link will provide network connectivity between the two buildings. Consequently, as the sequenced moves of the FSL divisions and groups occur, network connectivity will be available to the occupants of the new building as well as those still remaining in the old building.
In the new building, the FSL Network will be upgraded from primarily shared 10 Mbps Ethernet desktop workstation connectivity to switched (not shared) 10 and 100 Mbps connectivity. No desktop machine within FSL will share network bandwidth with any other desktop. This, of course, will also increase the demand on the new ATM backbone network. FSL will place a second, identical FORE Systems ASX-1000 ATM switch into production in the new building prior to the move. The second switch will provide more redundancy within the FSL ATM infrastructure, as well as additional ATM connections for hosts requiring high-speed network access.
An important feature of the new building is that almost all offices (not just the computer areas) have raised floors (Figure 31). This will make it much easier and more efficient to lay or modify data and power cabling throughout the building. Since frequent changes are made to the project requirements at this research center, the raised floors will prove to be a significant time saver.
Figure 31. Raised floors in the new FSL location, the David Skaggs Research Center, will make it easier to modify data and power cabling throughout the laboratory.
The Network Team will work with the NOAA-Boulder Network staff in taking necessary action to become a very high performance Backbone Network Service (vBNS) Partner Institution (vPI). This new status will provide FSL with access to the vBNS to communicate with other research and educational institutions at very high speeds using the vBNS OC-3 (155 Mbps) and OC-12 (622 Mbps) wide-area network. Initially, the University of Colorado and NCAR will need to agree to sponsor FSL's vPI status, and later, FSL plans to obtain its own dedicated access to the vBNS.
New technologies will continue to be explored and implemented for the FSL network. For example, three toll-free (800) dial-in lines will be installed for the use of FSL staff on travel duty. New remote-access technologies will also be assessed for their utility to FSL, such as V.90 modems, which provide download speeds of up to 56 kbps to users in remote locations equipped with compatible modems. Another technology to be reviewed is the Digital Subscriber Line (DSL) service, now available from US West in Boulder and surrounding areas. The DSL service can provide speeds of up to 768 kbps to remote locations at fairly low cost.
To improve data reliability and data identification, several enhancements will be made to the Central Facility NOAAPORT data-acquisition system software. Some hardware upgrades will be implemented to enhance overall system reliability and to allow acquisition of the fourth NOAAPORT channel, which will provide GOES Data Collection Platform data and non-GOES satellite imagery. The planned hardware upgrades include a second Planning Research Corporation NOAAPORT Receive System (NRS) Communications Processor, a new acquisition/fan-out processor, and the components necessary for bringing the fourth channel signal to the Communications Processor.
The installation of AWIPS at NWS Weather Forecast Offices around the country is making NOAAPORT the operational NWS data feed, replacing the AFOS data circuits. When AFOS is decommissioned, FSL will acquire buoy measurements (the last dataset obtained from AFOS) from several alternate sources, including NOAAPORT and WSI. The buoy translator will be redeveloped with improved capabilities, and the netCDF maker software will be enhanced to add new parameters and make the file format on /public comparable to the files provided on AWIPS. This change will also allow the decommissioning of several outdated VAX/VMS systems that were used to acquire AFOS data.
A new system to acquire data transmitted through X.25 circuits will be developed using a Simpact Freeway 2000 Communications Server. The two X.25 data sources currently used at FSL include the NWSTG Direct Connect Service and the Aeronautical Radio, Inc. (ARINC) Communications Addressing and Reporting System (ACARS) link. The Simpact system will handle the X.25 circuits and interface directly to the Open Systems-based NIMBUS. The last few VAX/VMS systems that serviced the X.25 lines will also be decommissioned once the Simpact system is operational.
NOAAPORT will replace Global Atmospherics, Inc., the long-time provider of national lightning data to FSL. Division staff will develop software to decode lightning data received from NOAAPORT and provide the data in netCDF format on /public. A new Process Manager configuration also will be implemented to create hourly receipt-based netCDF files in addition to the previous 5-minute receipt-based files.
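Producing receipt-based hourly files reduces to truncating each record's receipt time to the hour and appending to the matching file. A minimal sketch of that mapping, with a hypothetical /public path layout:

```python
from datetime import datetime

def receipt_file(base, received):
    """Map a record's receipt time to the hourly file that holds it,
    by truncating the timestamp to the hour."""
    hour = received.replace(minute=0, second=0, microsecond=0)
    return f"{base}/{hour:%Y%m%d_%H}00.nc"

print(receipt_file("/public/lightning", datetime(1999, 3, 2, 14, 37)))
# -> /public/lightning/19990302_1400.nc
```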
To improve the availability and reliability of model grid data currently received from NCEP using ftp techniques, FD staff will participate in the Unidata Cooperative Opportunity for NCEP Data Using Internet Data Distribution (IDD) Technology (CONDUIT) project. The intent of FSL's participation in CONDUIT is to employ the vBNS network and LDM data distribution technology to acquire the NCEP grids and distribute them to other users in the Boulder area.
FD will perform the following additional data-acquisition tasks:
• Provide decoded Alaskan and Hawaiian AIRMETs in netCDF file format on /public.
• Add translation capabilities for new ACARS formats as necessary.
• Continue to perform general improvements in translation capabilities of all NIMBUS translators, as needed, including bug fixes and translation enhancements.
FD will perform the following data-support activities for specific FSL projects:
• Acquire icing algorithm output from the NWS Aviation Weather Center and NCAR RAP, in support of Aviation Division verification activities.
• Provide turbulence algorithm output on /public from 11 algorithms run at NCAR RAP, in support of the Aviation Division turbulence verification exercise. An additional three turbulence algorithms will be integrated into the NIMBUS Process Manager and run in the Central Facility.
• Take on the responsibility for routinely running the MAPS 40-km model on the SGI Origin 2000.
Project, Research, and Laboratory Support
As mentioned earlier, a major FD effort will involve completing the plan for moving the Central Facility to the new NOAA building and carrying out the actual move. Facility downtime, and the attendant data outages, will be minimized. The Network Team will ensure uninterrupted network availability in both locations.
The FSL Technical Support Focal Point will lead analyses and planning efforts for optimal use of Central Facility resources, including computers and mass-store systems. Data, product, and user storage requirements will be analyzed and coordinated to ensure that adequate Central Facility disk and tape storage space will be available to FSL users and projects.
To assist other FSL divisions and projects with the Year 2000 issues, FD will provide netCDF data files with appropriate file names and internal time values to test Year 2000 software compliance. FSL users will be encouraged to use these data to validate their software and ensure that the software will successfully handle the Year 2000 transition.
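One simple way to build such test data is to generate file times that straddle the century rollover, so users can verify that their software sorts and parses 1999 and 2000 names correctly. A brief sketch; the file-naming pattern is an assumption:

```python
from datetime import datetime, timedelta

# Test times straddling the century rollover.
start = datetime(1999, 12, 31, 22)
names = [(start + timedelta(hours=h)).strftime("%Y%m%d_%H00.nc")
         for h in range(5)]
print(names)
# ['19991231_2200.nc', '19991231_2300.nc', '20000101_0000.nc',
#  '20000101_0100.nc', '20000101_0200.nc']
```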
Staff will continue to provide information on FSL Central Facility capabilities to FSL and outside users, including the NWS, FAA, UCAR, NCAR, universities, and other Environmental Research Laboratories and NOAA offices. Their requests for support, advice, and data will be coordinated with appropriate staff. Following FSL management approval, FD will provide real-time and retrospective data to researchers at these organizations.
Collaboration with the Hungarian Meteorological Service (HMS) will continue. An HMS computer scientist will visit FSL during the summer of 1999, and a senior FSL NIMBUS developer will visit HMS in the fall to exchange experiences with NIMBUS and meteorological data management.
Organizational changes will include rebuilding the FD Systems Administration staff and better integrating that group with the Operations staff to enhance the productivity of both groups. The Operations staff will be pursuing further training both in-house and outside to help accomplish this end.