Deep Learning Based Cloud Cover Parameterization for ICON
DOI: https://doi.org/10.1029/2021MS002959
Persistent URL: http://resolver.sub.uni-goettingen.de/purl?gldocs-11858/11260
Supplement: https://github.com/agrundner24/iconml_clc, https://doi.org/10.5281/zenodo.5788873, https://code.mpimet.mpg.de/projects/iconpublic
Grundner, Arthur; Beucler, Tom; Gentine, Pierre; Iglesias‐Suarez, Fernando; Giorgetta, Marco A.; Eyring, Veronika, 2022: Deep Learning Based Cloud Cover Parameterization for ICON. In: Journal of Advances in Modeling Earth Systems, Volume 14, Issue 12, DOI: 10.1029/2021MS002959.
Abstract:
A promising approach to improve cloud parameterizations within climate models, and thus climate projections, is to use deep learning in combination with training data from storm‐resolving model (SRM) simulations. The ICOsahedral Non‐hydrostatic (ICON) modeling framework permits simulations ranging from numerical weather prediction to climate projections, making it an ideal target to develop neural network (NN) based parameterizations for sub‐grid scale processes. Within the ICON framework, we train NN based cloud cover parameterizations with coarse‐grained data based on realistic regional and global ICON SRM simulations. We set up three different types of NNs that differ in the degree of vertical locality they assume for diagnosing cloud cover from coarse‐grained atmospheric state variables. The NNs accurately estimate sub‐grid scale cloud cover from coarse‐grained data that has similar geographical characteristics as their training data. Additionally, globally trained NNs can reproduce sub‐grid scale cloud cover of the regional SRM simulation. Using the game‐theory based interpretability library SHapley Additive exPlanations, we identify an overemphasis on specific humidity and cloud ice as the reason why our column‐based NN cannot perfectly generalize from the global to the regional coarse‐grained SRM data. The interpretability tool also helps visualize similarities and differences in feature importance between regionally and globally trained column‐based NNs, and reveals a local relationship between their cloud cover predictions and the thermodynamic environment. Our results show the potential of deep learning to derive accurate yet interpretable cloud cover parameterizations from global SRMs, and suggest that neighborhood‐based models may be a good compromise between accuracy and generalizability.
Plain Language Summary:
Climate models, such as the ICOsahedral Non‐hydrostatic climate model, operate on low‐resolution grids, making it computationally feasible to use them for climate projections. However, physical processes, especially those associated with clouds, that happen on a sub‐grid scale (inside a grid box) cannot be resolved, yet they are critical for the climate. In this study, we train neural networks that return the cloudy fraction of a grid box knowing only low‐resolution grid‐box averaged variables (such as temperature, pressure, etc.) as the climate model sees them. We find that the neural networks can reproduce the sub‐grid scale cloud fraction on data sets similar to the one they were trained on. The networks trained on global data also prove to be applicable to regional data coming from a model simulation with an entirely different setup. Since neural networks are often described as black boxes that are therefore difficult to trust, we peek inside the black box to reveal which input features the neural networks have learned to focus on and in what respects the networks differ. Overall, the neural networks prove to be accurate methods of reproducing sub‐grid scale cloudiness and could improve climate model projections when implemented in a climate model.
Key Points:
Neural networks can accurately learn sub‐grid scale cloud cover from realistic regional and global storm‐resolving simulations.
Three neural network types account for different degrees of vertical locality and differentiate between cloud volume and cloud area fraction.
Using a game theory based library we find that the neural networks tend to learn local mappings and are able to explain model errors.
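The key points above mention a game-theory based attribution method (SHAP). To illustrate the underlying idea without the actual trained networks, here is a minimal self-contained sketch that computes exact Shapley values for a toy, entirely hypothetical stand-in for a cloud cover model by enumerating feature coalitions; the SHAP library approximates this computation efficiently for real NNs. The toy model, its coefficients, and the input values are illustrative assumptions, not taken from the paper.

```python
from itertools import combinations
from math import factorial

def toy_cloud_cover(q, T, qi):
    # Hypothetical stand-in for a trained NN (NOT the authors' model):
    # cloud cover rises with specific humidity q [kg/kg, scaled] and
    # cloud ice qi, and falls with temperature T [K]. Output clipped to [0, 1].
    c = 2.0 * q + 1.5 * qi - 0.01 * (T - 273.15)
    return min(max(c, 0.0), 1.0)

def shapley_values(f, x, baseline):
    """Exact Shapley attributions by coalition enumeration.

    For each feature i, average the marginal contribution f(S + {i}) - f(S)
    over all coalitions S of the other features, with the standard
    Shapley weights; absent features are set to their baseline values.
    """
    n = len(x)
    phi = [0.0] * n
    features = list(range(n))
    for i in features:
        others = [j for j in features if j != i]
        for k in range(len(others) + 1):
            for S in combinations(others, k):
                w = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
                with_i = [x[j] if (j in S or j == i) else baseline[j] for j in features]
                without_i = [x[j] if j in S else baseline[j] for j in features]
                phi[i] += w * (f(*with_i) - f(*without_i))
    return phi

x = (0.12, 280.0, 0.05)    # (q, T, qi) for one grid cell (made-up values)
base = (0.0, 288.15, 0.0)  # dry, ice-free reference state (made-up values)
phi = shapley_values(toy_cloud_cover, x, base)
```

By the efficiency property of Shapley values, the attributions `phi` sum exactly to `toy_cloud_cover(*x) - toy_cloud_cover(*base)`, so the prediction is fully decomposed into per-feature contributions; this is the property that lets the paper trace generalization errors back to an overemphasis on specific features.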
This is an open access article under the terms of the Creative Commons Attribution‐NonCommercial License, which permits use, distribution and reproduction in any medium, provided the original work is properly cited and is not used for commercial purposes.