TY - JOUR
A1 - Behrens, Gunnar
A1 - Beucler, Tom
A1 - Gentine, Pierre
A1 - Iglesias-Suarez, Fernando
A1 - Pritchard, Michael
A1 - Eyring, Veronika
T1 - Non-Linear Dimensionality Reduction With a Variational Encoder Decoder to Understand Convective Processes in Climate Models
Y1 - 2022-08-13
VL - 14
IS - 8
JF - Journal of Advances in Modeling Earth Systems
DO - 10.1029/2022MS003130
N2 - Deep learning can accurately represent sub-grid-scale convective processes in climate models, learning from high-resolution simulations. However, deep learning methods usually lack interpretability due to their large internal dimensionality, which reduces trust in these methods. Here, we use a Variational Encoder Decoder (VED), a non-linear dimensionality reduction technique, to learn and understand convective processes in an aquaplanet superparameterized climate model simulation, in which deep convective processes are simulated explicitly. We show that, similar to previous deep learning studies based on feed-forward neural nets, the VED is capable of learning and accurately reproducing convective processes. In contrast to past work, we show this can be achieved by compressing the original information into only five latent nodes. As a result, the VED can be used to understand convective processes and delineate modes of convection through the exploration of its latent dimensions. A close investigation of the latent space enables the identification of different convective regimes: (a) stable conditions are clearly distinguished from deep convection with low outgoing longwave radiation and strong precipitation; (b) high optically thin cirrus-like clouds are separated from low optically thick cumulus clouds; and (c) shallow convective processes are associated with large-scale moisture content and surface diabatic heating. Our results demonstrate that VEDs can accurately represent convective processes in climate models while enabling interpretability and a better understanding of sub-grid-scale physical processes, paving the way to increasingly interpretable machine learning parameterizations with promising generative properties.
N2 - Plain Language Summary: Deep neural nets are hard to interpret without further postprocessing, owing to their hundreds of thousands or millions of trainable parameters. In this paper, we demonstrate the usefulness of a network type that is designed to drastically compress this high-dimensional information into a lower-dimensional space, enhancing the interpretability of predictions compared to regular deep neural nets. On the one hand, our approach is able to reproduce small-scale, cloud-related processes in the atmosphere, learned from a physical model that simulates these processes skillfully. On the other hand, our network allows us to identify key features of different cloud types in the lower-dimensional space. Additionally, the lower-order manifold separates tropical samples from polar ones with remarkable skill. Overall, our approach has the potential to boost our understanding of various complex processes in Earth System science.
N2 - Key Points: A Variational Encoder Decoder (VED) can predict sub-grid-scale thermodynamics from the coarse-scale climate state. The VED's latent space can distinguish convective regimes, including shallow/deep/no convection. The VED's latent space reveals the main sources of convective predictability at different latitudes.
UR - http://resolver.sub.uni-goettingen.de/purl?gldocs-11858/10329
ER -