Three-dimensional (3-D) radiative-transfer effects are a major source of retrieval errors in satellite-based optical remote sensing of clouds. The challenge is that 3-D effects manifest themselves across multiple satellite pixels, which traditional single-pixel approaches cannot capture. In this study, we present two multi-pixel retrieval approaches based on deep learning, a technique that is proving increasingly successful for complex problems in engineering and other areas. Specifically, we use deep neural networks (DNNs) to obtain multi-pixel estimates of cloud optical thickness and column-mean cloud droplet effective radius from multispectral, multi-pixel radiances. The first DNN method corrects traditional bispectral retrievals, which are based on the plane-parallel homogeneous cloud assumption, using the reflectances at the same two wavelengths. The second DNN method uses so-called convolutional layers and retrieves cloud properties directly from the reflectances at four wavelengths. Both DNN methods are trained and tested on cloud fields from large-eddy simulations, which serve as input to a 3-D radiative-transfer model that simulates upward radiances. The second DNN-based retrieval, which sidesteps the bispectral retrieval step through convolutional layers, is shown to be more accurate. It reduces 3-D radiative-transfer effects that would otherwise distort the radiance values and estimates cloud properties robustly even for optically thick clouds.
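The essence of the second, convolutional approach is that each pixel's retrieval draws on the surrounding pixels' reflectances, so horizontal photon transport can be accounted for. The following is a minimal NumPy sketch of that idea, not the paper's architecture: all layer sizes, kernel widths, and the scene dimensions are illustrative assumptions, and the weights are random rather than trained.

```python
import numpy as np

def conv2d(x, w):
    """'Same'-padded 2-D convolution: x has shape (H, W, C_in),
    w has shape (k, k, C_in, C_out). Each output pixel is computed
    from a k-by-k neighborhood, i.e., from multi-pixel context."""
    k = w.shape[0]
    pad = k // 2
    xp = np.pad(x, ((pad, pad), (pad, pad), (0, 0)))
    H, W, _ = x.shape
    out = np.empty((H, W, w.shape[3]))
    for i in range(H):
        for j in range(W):
            patch = xp[i:i + k, j:j + k, :]      # local multi-pixel neighborhood
            out[i, j] = np.tensordot(patch, w, axes=3)
    return out

rng = np.random.default_rng(0)
refl = rng.uniform(0.0, 1.0, size=(32, 32, 4))   # reflectances at 4 wavelengths
w1 = rng.normal(scale=0.1, size=(3, 3, 4, 16))   # hypothetical first conv layer
w2 = rng.normal(scale=0.1, size=(3, 3, 16, 2))   # head: 2 outputs per pixel
hidden = np.maximum(conv2d(refl, w1), 0.0)       # ReLU nonlinearity
cot_cer = conv2d(hidden, w2)                     # per-pixel optical thickness
                                                 # and effective radius
print(cot_cer.shape)                             # (32, 32, 2)
```

The key design point illustrated here is that the output at pixel (i, j) depends on a neighborhood of input pixels, which is what lets a convolutional retrieval capture 3-D effects that a single-pixel (per-column) lookup cannot.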