Methods for Comparing Simulated and Observed Satellite Infrared Brightness Temperatures and What Do They Tell Us?
In this study, the utility of dimensioned, neighborhood-based, and object-based forecast verification metrics for cloud verification is assessed using output from the experimental High Resolution Rapid Refresh (HRRRx) model over a 1-day period containing different modes of convection. This is accomplished by comparing observed and simulated Geostationary Operational Environmental Satellite (GOES) 10.7-μm brightness temperatures (BTs). Traditional dimensioned metrics such as mean absolute error (MAE) and mean bias error (MBE) were used to assess overall model accuracy. The MBE showed that the HRRRx BTs at forecast hours 0 and 1 were too warm compared with the observations, indicating a lack of cloud cover, but rapidly became too cold in subsequent hours because of the generation of excessive upper-level cloudiness. Neighborhood and object-based statistics were then used to investigate the source of the HRRRx cloud cover errors. The fractions skill score (FSS), a neighborhood statistic, showed that displacement errors between cloud objects identified in the HRRRx and GOES BTs increased with forecast time. Combined with the MBE, the FSS distinguished whether changes in MAE were due to differences in the HRRRx BT bias or to displacement of cloud features. The Method for Object-Based Diagnostic Evaluation (MODE) was used to analyze the similarity between HRRRx and GOES cloud features in both shape and location. This similarity was summarized using the newly defined MODE composite score (MCS), an area-weighted calculation based on the cloud-feature match value from MODE. Combined with the FSS, the MCS indicated whether HRRRx forecast error resulted from errors in cloud shape, since the MCS is moderately large when forecast and observation objects are similar in size.
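The dimensioned and neighborhood metrics named above can be illustrated with a short sketch. The following is not the study's code: it is a minimal NumPy implementation of MAE, MBE, and the FSS on gridded BT fields, under assumed illustrative parameters (a 235-K cloud-top threshold and a 9-pixel square neighborhood); the function and variable names are hypothetical. The FSS follows the standard fractions-Brier-score form, with neighborhood fractions computed via a summed-area table (zero-padded at the grid edges).

```python
import numpy as np

def mae(fcst, obs):
    """Mean absolute error between two gridded BT fields (K)."""
    return float(np.mean(np.abs(fcst - obs)))

def mbe(fcst, obs):
    """Mean bias error: positive when the forecast is too warm (K)."""
    return float(np.mean(fcst - obs))

def _box_sum(a, n):
    """Sum of a over an n x n window centered on each pixel (n odd),
    using a summed-area table; the grid is zero-padded at the edges."""
    pad = n // 2
    ap = np.pad(a, pad, mode="constant")
    c = np.pad(ap.cumsum(axis=0).cumsum(axis=1), ((1, 0), (1, 0)))
    h, w = a.shape
    return (c[n:n + h, n:n + w] - c[:h, n:n + w]
            - c[n:n + h, :w] + c[:h, :w])

def fss(fcst_bt, obs_bt, threshold=235.0, n=9):
    """Fractions skill score for 'cloudy' pixels (BT <= threshold).

    FSS = 1 - FBS / FBS_worst, where FBS is the MSE of the neighborhood
    event fractions and FBS_worst assumes zero overlap. Ranges 0..1;
    1 means the neighborhood fractions match exactly.
    """
    p_obs = _box_sum((obs_bt <= threshold).astype(float), n) / n**2
    p_fcst = _box_sum((fcst_bt <= threshold).astype(float), n) / n**2
    fbs = np.mean((p_fcst - p_obs) ** 2)
    fbs_worst = np.mean(p_fcst ** 2) + np.mean(p_obs ** 2)
    return float(1.0 - fbs / fbs_worst) if fbs_worst > 0 else float("nan")
```

A displaced but otherwise identical cold-cloud object yields an FSS strictly between 0 and 1 once the neighborhood is wide enough to overlap the two positions, which is how the FSS separates displacement error from bias error in the abstract's argument.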