TPSeNCE: Towards Artifact-Free Realistic Rain Generation for Deraining and Object Detection in Rain

1 CMU, 2 UIUC
Accepted to WACV 2024

Background: Impact of Rain on Object Detection

Challenges: Current I2I Methods Fail to Generate High-Quality Rainy Images for Fine-tuning

Model Workflow

Our model, TPSeNCE, leverages two key innovations:

  • Triangular Probability Similarity (TPS): Minimizes artifacts and distortions by constraining generated rainy images to lie between real clear and real rainy images on the discriminator manifold.
  • Semantic Noise Contrastive Estimation (SeNCE): Optimizes the amount of generated rain by weighting the pushing force of each negative patch according to its feature similarity to the anchor patch, and modulating that weight with the semantic similarity between the clear and rainy images.

These contributions enable realistic rain generation, benefiting deraining and object detection in real rainy conditions.
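To make the TPS constraint concrete, here is a minimal pure-Python sketch of one way to express the triangle idea: the penalty is zero exactly when the generated image's discriminator feature lies on the segment between the clear and rainy features (a degenerate triangle). The Euclidean metric and the function names are illustrative assumptions, not the paper's exact formulation.

```python
import math

def euclid(a, b):
    """Euclidean distance between two feature vectors (lists of floats)."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def tps_penalty(gen_feat, clear_feat, rainy_feat):
    """Triangle-inequality gap: non-negative, and zero only when the
    generated feature lies on the clear-rainy segment, i.e. the
    clear-generated-rainy triangle on the discriminator manifold collapses.
    Illustrative sketch; the paper's TPS loss may differ in detail."""
    return (euclid(clear_feat, gen_feat)
            + euclid(gen_feat, rainy_feat)
            - euclid(clear_feat, rainy_feat))
```

Minimizing this gap pushes generated samples toward the clear-to-rainy path, discouraging off-manifold artifacts.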


Model Architecture

TPSeNCE uses a generator to translate clear images into rainy ones, a discriminator trained with TPS and GAN losses, and an encoder that embeds patches from both the clear and the generated images. MLP heads compare these patch embeddings contrastively to produce the SeNCE loss, guided by semantic segmentation maps.


Semantic Noise Contrastive Estimation (SeNCE)

SeNCE outperforms PatchNCE and MoNCE at calibrating the amount of generated rain, yielding more realistic rainy images. In the figure, arrow length represents the magnitude of the NCE loss.
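As an illustration of the reweighting idea, here is a minimal pure-Python sketch of a similarity-weighted InfoNCE loss: each negative's repulsion is scaled by its feature similarity to the anchor, and a semantic-similarity term interpolates between uniform weighting (as in PatchNCE) and similarity-based weighting. The weighting scheme, temperature, and names are illustrative assumptions, not the paper's exact loss.

```python
import math

def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def sence_loss(anchor, positive, negatives, semantic_sim, tau=0.07):
    """Similarity-weighted InfoNCE (illustrative SeNCE-style sketch).
    semantic_sim in [0, 1]: 0 recovers uniform negative weights,
    1 weights harder (more anchor-similar) negatives more strongly."""
    pos = math.exp(cosine(anchor, positive) / tau)
    sims = [cosine(anchor, n) for n in negatives]
    # softmax over negative similarities -> similarity-based weights
    exp_s = [math.exp(s) for s in sims]
    z = sum(exp_s)
    weights = [(1.0 - semantic_sim) + semantic_sim * len(negatives) * e / z
               for e in exp_s]
    neg = sum(w * math.exp(s / tau) for w, s in zip(weights, sims))
    return -math.log(pos / (pos + neg))
```

With `semantic_sim = 0` every negative pushes equally; raising it concentrates the pushing force on negatives that look like the anchor, which is the mechanism SeNCE uses to control how much rain is synthesized.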


Visual Results (Clear2Rainy Video)

Visual Results (Clear2Rainy)

Visual Results (Rainy2Clear)

Visual Results (Clear2Snowy)

Visual Results (Day2Night)

Visual Results (Object Detection in Rain)