Underwater Image Enhancement by Diffusion Model with Customized CLIP-Classifier

 

 

Abstract

Underwater Image Enhancement (UIE) aims to improve the visual quality of a low-quality underwater input. Unlike other image enhancement tasks, underwater images suffer from the unavailability of real reference images. Although existing works select well-enhanced images produced by various approaches as reference images to train enhancement networks, their upper performance bound is limited by the synthetic reference domain. To address this challenge, we propose CLIP-UIE, a novel multimodal framework that leverages the potential of Contrastive Language-Image Pre-training (CLIP) for the UIE task. Specifically, we first employ color transfer to yield a large-scale synthetic underwater dataset and use it to pre-train an image-to-image diffusion model. Then, we combine the prior knowledge of the in-air natural domain with CLIP to train an explicit CLIP-Classifier. Subsequently, we integrate this CLIP-Classifier with UIE benchmark datasets to jointly fine-tune the pre-trained image-to-image diffusion model, guiding the enhancement results towards the in-air natural domain, beyond the boundaries of the synthetic reference domain. Additionally, for image enhancement tasks, we observe that during fine-tuning both the image-to-image diffusion model and the CLIP-Classifier primarily act on the high-frequency content of the intermediate variable $x_{t}$, whose high-frequency information is smoothed by the injected random noise while its low-frequency information is preserved. Therefore, we propose a new fine-tuning strategy that specifically targets these high-frequency intermediate variable sequences and can be up to 10 times faster than the conventional strategy. Extensive experiments demonstrate that our method produces enhancement results with a more natural appearance.

 

[Paper] [Code]

 

Highlights

  1. We propose to use color transfer to degrade 500k in-air natural images from INaturalist into underwater images guided by the real underwater domain, overcoming subjective preferences introduced by manual selection of reference images. The pre-trained image-to-image diffusion model is trained from scratch on this synthetic dataset.

  2. We propose a CLIP-Classifier that inherits the prior knowledge of the in-air natural domain, and combine it with custom datasets to jointly fine-tune the diffusion model, mitigating catastrophic forgetting and mode collapse. Experiments and ablation studies verify the good performance of the proposed CLIP-UIE, which breaks the limitations of the reference domain to a certain extent.

  3. We find that for image enhancement tasks, which require content consistency, the image-to-image diffusion model and the CLIP-Classifier mainly act on high-frequency regions. Therefore, we propose a new fine-tuning strategy that acts only on these high-frequency regions, accelerating fine-tuning by up to 10 times (see the sketch below).
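
As a rough illustration of the third point, the sketch below fine-tunes the denoiser only on the early timestep band in which the injected noise has not yet smoothed away the high-frequency content of $x_{t}$. The denoiser `model`, the noise schedule `alphas_cumprod`, the conditioning input `cond`, and the cut-off `t_high` are placeholders; the actual band and conditioning used in CLIP-UIE may differ.

```python
import torch
import torch.nn.functional as F

T = 2000
t_high = T // 10  # illustrative cut-off: only the high-frequency timestep band

def q_sample(x0, t, alphas_cumprod):
    """DDPM forward process: x_t = sqrt(a_bar_t) * x_0 + sqrt(1 - a_bar_t) * eps."""
    noise = torch.randn_like(x0)
    a_bar = alphas_cumprod[t].view(-1, 1, 1, 1)
    return a_bar.sqrt() * x0 + (1.0 - a_bar).sqrt() * noise, noise

def finetune_step(model, cond, x0, alphas_cumprod, optimizer):
    """One fine-tuning step restricted to timesteps t < t_high."""
    t = torch.randint(0, t_high, (x0.size(0),), device=x0.device)
    x_t, noise = q_sample(x0, t, alphas_cumprod)
    # SR3-style conditioning: concatenate the degraded input with x_t.
    pred = model(torch.cat([cond, x_t], dim=1), t)
    loss = F.mse_loss(pred, noise)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```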


Pipeline of CLIP-UIE

 

 

Fig 1. The preparation for the pre-trained model. (a) Randomly select template A from the template pool (underwater domain). Then, the Color Transfer module, guided by template A, degrades an in-air natural image from INaturalist into the underwater domain, constructing paired datasets for training the image-to-image diffusion model. (b) The image-to-image diffusion model SR3 is trained to learn the prior knowledge, i.e., the mapping from the real underwater degradation domain to the real in-air natural domain, and to generate the corresponding enhancement results conditioned on the input synthetic underwater images produced by Color Transfer.
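
For reference, a minimal sketch of statistics-matching color transfer, assuming the classic Reinhard-style transfer in Lab space with OpenCV; the Color Transfer module used in the paper may differ in detail.

```python
import cv2
import numpy as np

def color_transfer(natural_bgr: np.ndarray, template_bgr: np.ndarray) -> np.ndarray:
    """Shift the Lab statistics of an in-air natural image towards those of an
    underwater template, producing a synthetic underwater counterpart."""
    src = cv2.cvtColor(natural_bgr, cv2.COLOR_BGR2LAB).astype(np.float32)
    tpl = cv2.cvtColor(template_bgr, cv2.COLOR_BGR2LAB).astype(np.float32)

    src_mean, src_std = src.reshape(-1, 3).mean(0), src.reshape(-1, 3).std(0)
    tpl_mean, tpl_std = tpl.reshape(-1, 3).mean(0), tpl.reshape(-1, 3).std(0)

    # Match per-channel mean and standard deviation to the template.
    out = (src - src_mean) / (src_std + 1e-6) * tpl_std + tpl_mean
    out = np.clip(out, 0, 255).astype(np.uint8)
    return cv2.cvtColor(out, cv2.COLOR_LAB2BGR)

# Usage: template A is sampled at random from the underwater template pool.
# underwater_like = color_transfer(cv2.imread("natural.jpg"), cv2.imread("template_a.jpg"))
```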

 

 


New fine-tuning strategy

 

 

Fig 2. New fine-tuning strategy. We first adopt a single-classifier guidance strategy to guide the reverse diffusion process. Then, we switch to the multi-classifier guidance strategy. With multi-condition guidance, the intermediate results are constrained by both the natural and reference domains, mitigating the damage that fine-tuning causes to the prior knowledge of the pre-trained model.
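
A minimal sketch of how classifier guidance could shift the reverse-step mean, assuming a `clip_classifier` that returns a per-image naturalness score; the reference-domain condition is represented here by a simple negative L2 term, which is only a stand-in for the paper's actual multi-classifier guidance.

```python
import torch

def guided_mean(mean, variance, x_t, clip_classifier, ref_image=None,
                s_clip=1.0, s_ref=1.0):
    """Shift the reverse-step mean along classifier gradients (classifier guidance)."""
    x_t = x_t.detach().requires_grad_(True)
    # Single-classifier stage: push x_t towards the in-air natural domain.
    objective = s_clip * clip_classifier(x_t).sum()
    if ref_image is not None:
        # Multi-classifier stage: additionally constrain towards the reference
        # domain (negative L2 to the reference, a stand-in for a second classifier).
        objective = objective - s_ref * ((x_t - ref_image) ** 2).mean()
    grad = torch.autograd.grad(objective, x_t)[0]
    return mean + variance * grad
```

The single-classifier stage corresponds to calling `guided_mean` without `ref_image`; switching to the multi-classifier stage simply passes the reference image as well.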

 

 


Prompt Learning

 

 

Fig 3. Illustration of prompt learning for the CLIP-Classifier. (A) Prompt Initialization. Given two text prompts describing the in-air natural image and the underwater image, we encode each text to obtain the initial in-air natural image prompt and the initial underwater image prompt. (B) Prompt Training. We use the cross-entropy loss to constrain the learnable prompts, aligning the learnable prompts and images in the CLIP latent space by maximizing the cosine similarity of matched pairs. The base CLIP model is frozen throughout.
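
A minimal prompt-learning sketch assuming the OpenAI CLIP package, with the simplification that the learnable prompts live directly in CLIP's joint embedding space (the paper learns them at the token level, CoOp-style); the two initial text prompts below are illustrative, not the exact wording used in the paper.

```python
import torch
import torch.nn.functional as F
import clip  # OpenAI CLIP; the paper may use a different backbone or interface

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)
for p in model.parameters():          # CLIP stays frozen throughout
    p.requires_grad_(False)

# (A) Prompt initialization from two text descriptions.
with torch.no_grad():
    init = model.encode_text(clip.tokenize(
        ["an in-air natural photo", "an underwater photo"]).to(device)).float()

# Simplification: learnable prompts are vectors in CLIP's joint embedding space.
prompts = torch.nn.Parameter(init.clone())
optimizer = torch.optim.Adam([prompts], lr=1e-3)

def prompt_loss(images, labels):
    """Cross-entropy over cosine similarities between image features and the two
    learnable prompts. images: batch preprocessed with `preprocess`;
    labels: 0 = in-air natural, 1 = underwater (dtype long)."""
    with torch.no_grad():
        img_feat = model.encode_image(images).float()
    img_feat = F.normalize(img_feat, dim=-1)
    txt_feat = F.normalize(prompts, dim=-1)
    logits = 100.0 * img_feat @ txt_feat.t()   # CLIP's usual logit scale
    return F.cross_entropy(logits, labels)
```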

 


Qualitative analysis of the effectiveness of the CLIP-Classifier

 

 

Fig 4. Qualitative analysis of the effectiveness of the CLIP-Classifier. (a) CLIP score of the intermediate variable $x_{t}$ versus time, with the total number of time steps set to 2000. The CLIP score curves of the raw and reference images are plotted separately. (b) We compute the difference in CLIP score between the raw and reference images, plot the difference curve, and label several key time points on it.
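
A sketch of how such curves can be produced: noise the raw and reference images to every probed timestep with the standard forward process and evaluate the CLIP-Classifier on each intermediate $x_{t}$; `clip_classifier` and `alphas_cumprod` are the same placeholders as in the earlier sketches.

```python
import torch

@torch.no_grad()
def clip_score_curve(x0, clip_classifier, alphas_cumprod, T=2000, stride=50):
    """CLIP score of the noised intermediate variable x_t as a function of t."""
    scores = []
    for t in range(0, T, stride):
        tt = torch.full((x0.size(0),), t, device=x0.device, dtype=torch.long)
        a_bar = alphas_cumprod[tt].view(-1, 1, 1, 1)
        x_t = a_bar.sqrt() * x0 + (1.0 - a_bar).sqrt() * torch.randn_like(x0)
        scores.append(clip_classifier(x_t).mean().item())
    return scores

# The difference curve in (b) is the element-wise gap between the two curves:
# diff = [r - w for r, w in zip(clip_score_curve(ref, ...), clip_score_curve(raw, ...))]
```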

 


Enhancement Results

 

 

Fig 5. Visual comparisons on underwater images from T200 dataset.

 

 

Fig 6. Visual comparisons on challenging underwater images from C60 dataset.

 

 

Fig 7. Visual comparisons on challenging underwater images from SQUID dataset.

 

 

Fig 8. Visual comparisons on underwater images from Color-Checker7 dataset.

 


Citation