TR2025-009

Rotation-Equivariant Neural Networks for Cloud Removal from Satellite Images


    •  Lohit, S., Marks, T.K., "Rotation-Equivariant Neural Networks for Cloud Removal from Satellite Images", Asilomar Conference on Signals, Systems, and Computers (ACSSC), January 2025.
      @inproceedings{Lohit2025jan,
        author = {Lohit, Suhas and Marks, Tim K.},
        title = {Rotation-Equivariant Neural Networks for Cloud Removal from Satellite Images},
        booktitle = {Asilomar Conference on Signals, Systems, and Computers (ACSSC)},
        year = 2025,
        month = jan,
        url = {https://www.merl.com/publications/TR2025-009}
      }
Research Areas: Artificial Intelligence, Computer Vision, Machine Learning

Abstract:

In this paper, we aim to recover a cloud-free optical image from a cloudy optical image and aligned synthetic aperture radar (SAR) image using a deep neural network. In contrast to previous approaches, we make the observation that satellite image features generally have no preferred orientation. This insight can be incorporated into the design of the neural architecture by making the network layers obey the geometric constraint that changing the orientation of an input image should only change the orientation of the corresponding output image, without otherwise affecting the quality or details of the reconstruction. We build a multimodal rotation-equivariant neural network, called EquiCR (Equivariant Cloud Removal), that encodes this geometric prior exactly. When trained on the public SEN12MSCR dataset, we observe improvements in reconstructed image quality using EquiCR, compared to using deep learning without built-in rotation equivariance. Interestingly, EquiCR results in greater improvements over the baseline method in the more difficult cases—when the amount of cloud cover is high or when the training dataset is small.
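The geometric constraint described in the abstract — rotating the input should only rotate the output — can be checked numerically. The toy sketch below (not the EquiCR architecture; all function names are hypothetical) builds a single convolution whose kernel is averaged over the four 90-degree rotations (the cyclic group C4), which makes the operation exactly equivariant to 90-degree input rotations:

```python
import numpy as np

def conv2d_same(x, k):
    """Naive 'same'-padded 2D cross-correlation with zero padding."""
    H, W = x.shape
    kh, kw = k.shape
    xp = np.pad(x, ((kh // 2, kh // 2), (kw // 2, kw // 2)))
    out = np.empty((H, W))
    for i in range(H):
        for j in range(W):
            out[i, j] = np.sum(xp[i:i + kh, j:j + kw] * k)
    return out

def c4_symmetrize(k):
    """Average a kernel over its four 90-degree rotations.

    The result is invariant under np.rot90, so convolving with it
    commutes with 90-degree rotations of a square input.
    """
    return sum(np.rot90(k, r) for r in range(4)) / 4.0

rng = np.random.default_rng(0)
x = rng.standard_normal((16, 16))          # square "image"
k = c4_symmetrize(rng.standard_normal((3, 3)))

# Equivariance check: rotating the input only rotates the output.
lhs = conv2d_same(np.rot90(x), k)
rhs = np.rot90(conv2d_same(x, k))
assert np.allclose(lhs, rhs)
```

This illustrates only discrete (90-degree) equivariance via kernel symmetrization; equivariant architectures such as the one in the paper typically enforce the constraint layer by layer with structured filter banks rather than a single symmetrized kernel.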