Boosting Image Quality Assessment through Semi-Supervised and Positive-Unlabeled Learning


Incorporating semi-supervised learning (SSL) and positive-unlabeled (PU) learning enhances full-reference image quality assessment (FR-IQA) by exploiting inexpensive, easily collected unlabeled data while excluding negative samples. The framework combines PU learning with cross-entropy (CE) and negative-enforcing (NE) losses, and SSL with an MSE loss on labeled data and a pseudo MOS target for positive unlabeled data. The JSPL model jointly optimizes the SSL and PU learning modules, updating the binary classifier and incorporating the selected positive unlabeled samples into training. The local sliced Wasserstein (LocalSW) distance and the experiment settings for the labeled and unlabeled data sources are also discussed.





Presentation Transcript


  1. Incorporating Semi-Supervised and Positive-Unlabeled Learning for Boosting Full Reference Image Quality Assessment CVPR 2022

  2. Motivation
  Semi-supervised learning (SSL)
  - It can use less expensive and easily accessible unlabeled data
  Positive-unlabeled (PU) learning
  - The unlabeled data often has a different distribution from the labeled data
  - PU learning can find and exclude negative samples for SSL

  3. Framework and Notations
  - Input: a two-tuple of distorted and reference images
  - Ground-truth MOS for each labeled input
  - Learned quality mapping (the FR-IQA network)
  - Positive labeled data
  - Unlabeled data
  - Binary classifier for identifying positive unlabeled samples
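As a rough illustration of these notations (names and types are illustrative, not from the paper), the labeled and unlabeled sets can be represented as follows:

```python
from dataclasses import dataclass
from typing import Optional
import numpy as np

@dataclass
class IQASample:
    """One FR-IQA input two-tuple; field names are illustrative."""
    distorted: np.ndarray            # distorted image
    reference: np.ndarray            # pristine reference image
    mos: Optional[float] = None      # ground-truth MOS (None for unlabeled samples)

# Positive labeled data: samples whose mos is set.
# Unlabeled data: samples with mos=None, to be screened by the binary classifier.
```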

  4. PU learning
  - For a positive (labeled) sample, we simply adopt the cross-entropy (CE) loss
  - Each unlabeled sample should be either a positive or a negative sample
  - To prevent the classifier from outputting 1 for all samples, a negative-enforcing (NE) loss constrains each mini-batch to contain at least one negative sample
  - The total PU learning loss combines the CE and NE terms (see the sketch below)
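A minimal sketch of one plausible implementation of these two terms, assuming a sigmoid-output binary classifier; the exact formulation in the paper may differ:

```python
import torch
import torch.nn.functional as F

def pu_loss(pos_logits: torch.Tensor, unl_logits: torch.Tensor) -> torch.Tensor:
    """Sketch of a PU-learning objective: CE on labeled positives plus an NE term.

    pos_logits: classifier logits for labeled (positive) image pairs
    unl_logits: classifier logits for the unlabeled pairs in the mini-batch
    """
    # Cross-entropy pushes every labeled sample toward the positive class (label 1).
    ce = F.binary_cross_entropy_with_logits(pos_logits, torch.ones_like(pos_logits))

    # Negative-enforcing loss: push the least-positive unlabeled sample in the batch
    # toward the negative class, so the classifier cannot output 1 for everything.
    unl_probs = torch.sigmoid(unl_logits)
    ne = -torch.log(1.0 - unl_probs.min() + 1e-8)

    return ce + ne
```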

  5. Semi-supervised learning
  - For labeled data, we adopt the MSE loss between the predicted quality score and the ground-truth MOS
  - For unlabeled data, we only consider the positive unlabeled samples, regressing the prediction toward a pseudo MOS
  - The pseudo MOS is obtained with a moving-average strategy
  - The total SSL loss combines the labeled and unlabeled terms (see the sketch below)
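A hedged sketch of the two SSL terms and an exponential-moving-average pseudo-MOS update; the momentum value and the equal weighting of the two terms are illustrative assumptions:

```python
import torch
import torch.nn.functional as F

def ssl_loss(pred_labeled, mos, pred_unlabeled, pseudo_mos, positive_mask):
    """Labeled MSE plus MSE on positive unlabeled samples against their pseudo MOS."""
    labeled_term = F.mse_loss(pred_labeled, mos)
    # Only the positive unlabeled samples (selected by the PU classifier) contribute.
    if positive_mask.any():
        unlabeled_term = F.mse_loss(pred_unlabeled[positive_mask],
                                    pseudo_mos[positive_mask])
    else:
        unlabeled_term = pred_unlabeled.new_zeros(())
    return labeled_term + unlabeled_term

def update_pseudo_mos(pseudo_mos, new_pred, momentum=0.9):
    """Moving-average pseudo-MOS update; momentum=0.9 is an illustrative choice."""
    return momentum * pseudo_mos + (1.0 - momentum) * new_pred.detach()
```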

  6. JSPL Model
  We jointly optimize the SSL and PU learning modules. Training steps (sketched below):
  1. Update the binary classifier
  2. Update the pseudo MOS for each unlabeled sample
  3. Incorporate the positive unlabeled samples into the mini-batch of labeled samples to update the FR-IQA network
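A schematic training iteration putting the three steps together, reusing the pu_loss, ssl_loss, and update_pseudo_mos sketches above; the batch/state attributes and the 0.5 positivity threshold are illustrative, not from the authors' code:

```python
import torch

def jspl_training_step(classifier, iqa_net, labeled_batch, unlabeled_batch, state):
    """One illustrative JSPL iteration: PU step, pseudo-MOS update, then FR-IQA step."""
    # 1. Update the binary classifier with the PU loss (CE on positives + NE on the batch).
    loss_pu = pu_loss(classifier(labeled_batch.pairs), classifier(unlabeled_batch.pairs))
    loss_pu.backward()
    state.classifier_optim.step()
    state.classifier_optim.zero_grad()

    # 2. Refresh the pseudo MOS of each unlabeled sample with the current FR-IQA prediction,
    #    and select the positive unlabeled samples with the updated classifier.
    with torch.no_grad():
        preds_u = iqa_net(unlabeled_batch.pairs)
        state.pseudo_mos = update_pseudo_mos(state.pseudo_mos, preds_u)
        positive_mask = torch.sigmoid(classifier(unlabeled_batch.pairs)) > 0.5

    # 3. Update the FR-IQA network on labeled samples plus the selected positive unlabeled ones.
    loss_ssl = ssl_loss(iqa_net(labeled_batch.pairs), labeled_batch.mos,
                        iqa_net(unlabeled_batch.pairs), state.pseudo_mos, positive_mask)
    loss_ssl.backward()
    state.iqa_optim.step()
    state.iqa_optim.zero_grad()
```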

  7. Network structure

  8. Local sliced Wasserstein (LocalSW) distance
  1. Divide the feature maps into patches
  2. Pass them through a projection operator
  3. Sort and compute the difference for each channel to form the LocalSW distance map (see the sketch below)
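A rough, hedged sketch of such a LocalSW distance map; the patch size, number of projections, and random projection operator are illustrative assumptions rather than the paper's exact design:

```python
import torch

def local_sw_distance(feat_a, feat_b, patch=8, n_proj=32):
    """Illustrative LocalSW distance map between two (B, C, H, W) feature maps."""
    B, C, H, W = feat_a.shape
    # 1. Divide both feature maps into non-overlapping patch x patch blocks.
    pa = feat_a.unfold(2, patch, patch).unfold(3, patch, patch)   # (B, C, h, w, patch, patch)
    pb = feat_b.unfold(2, patch, patch).unfold(3, patch, patch)
    h, w = pa.shape[2], pa.shape[3]
    pa = pa.reshape(B, C, h, w, -1)                               # flatten each block
    pb = pb.reshape(B, C, h, w, -1)
    # 2. Project the channel dimension onto n_proj random 1-D directions.
    proj = torch.randn(n_proj, C, device=feat_a.device)
    proj = proj / proj.norm(dim=1, keepdim=True)
    qa = torch.einsum('pc,bchwn->bphwn', proj, pa)
    qb = torch.einsum('pc,bchwn->bphwn', proj, pb)
    # 3. Sort the projected values inside each block and take the (1-D Wasserstein)
    #    difference, averaging over projections and positions within the block.
    qa, _ = qa.sort(dim=-1)
    qb, _ = qb.sort(dim=-1)
    return (qa - qb).abs().mean(dim=(1, 4))                       # (B, h, w) distance map
```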

  9. Experiment settings
  Labeled data: LIVE, CSIQ, TID2013, KADID-10k, and PIPAL
  Unlabeled data: 1,000 patches from DIV2K and Flickr2K as reference images
  - ESRGAN synthesis: 50 distorted images per reference image
  - DnCNN synthesis: 50 distorted images per reference image
  - KADID-10k synthesis: 100 distorted images per reference image

  10. Ablation study

  11. Performance comparison

  12. Generalization ability

  13. Comments
  Pros
  - The writing is good
  - The motivation and experiments are convincing
  Cons
  - No experimental results for the classifier
