Boosting Image Quality Assessment through Semi-Supervised and Positive-Unlabeled Learning
Incorporating semi-supervised learning (SSL) and positive-unlabeled (PU) learning boosts full-reference image quality assessment (FR-IQA) by exploiting inexpensive unlabeled data while excluding negative samples. The framework combines PU learning with cross-entropy (CE) and negative-enforcing (NE) losses, and SSL with an MSE loss on labeled data and pseudo MOS targets for positive unlabeled data. The JSPL model optimizes the SSL and PU learning modules jointly: it updates the binary classifier, refreshes the pseudo MOS values, and incorporates the positive unlabeled samples into training of the FR-IQA network. The local sliced Wasserstein (LocalSW) distance and the experiment settings with labeled and unlabeled data sources are also discussed.
Incorporating Semi-Supervised and Positive-Unlabeled Learning for Boosting Full Reference Image Quality Assessment (CVPR 2022)
Motivation
- Semi-supervised learning (SSL): it can use less expensive and easily accessible unlabeled data.
- Positive-unlabeled (PU) learning: the unlabeled data often has a different distribution from the labeled data, and PU learning can find and exclude negative samples for SSL.
Framework and Notations
- the input two-tuple (distorted and reference images)
- the ground-truth MOS
- the learned quality mapping
- the positive labeled data
- the unlabeled data
- the binary classifier
PU learning
- For a positive sample, we simply adopt the cross-entropy (CE) loss.
- Each unlabeled sample should be either a positive or a negative sample.
- To prevent the classifier from producing 1 for all samples, we introduce a negative-enforcing (NE) loss constraining that there is at least one negative sample in each mini-batch.
- The total PU learning loss combines these terms (see the sketch below).
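A minimal PyTorch sketch of how these three terms could look. The function name pu_loss, the entropy-style objective used to push each unlabeled score toward 0 or 1, and the use of the mini-batch minimum in the NE term are assumptions for illustration, not the paper's exact formulation.

```python
import torch


def pu_loss(c_pos, c_unl, eps=1e-8):
    """PU learning loss sketch.

    c_pos: classifier scores in (0, 1) for positive labeled samples.
    c_unl: classifier scores in (0, 1) for unlabeled samples.
    """
    # CE loss: positive labeled samples should be scored as 1.
    loss_ce = -torch.log(c_pos + eps).mean()

    # Each unlabeled sample should be either positive or negative, so its
    # score is pushed toward 0 or 1 (assumed entropy-style term).
    loss_unl = -(c_unl * torch.log(c_unl + eps)
                 + (1 - c_unl) * torch.log(1 - c_unl + eps)).mean()

    # NE loss: force at least one negative per mini-batch by pushing the
    # smallest unlabeled score toward 0.
    loss_ne = -torch.log(1 - c_unl.min() + eps)

    return loss_ce + loss_unl + loss_ne


# Example with random classifier scores:
c_pos = torch.sigmoid(torch.randn(16))
c_unl = torch.sigmoid(torch.randn(16))
print(pu_loss(c_pos, c_unl))
```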
Semi-supervised learning
- For labeled data, we adopt the MSE loss.
- For unlabeled data, we only consider the positive unlabeled samples, each regressed toward its pseudo MOS.
- The pseudo MOS is obtained by a moving-average strategy.
- The total SSL loss combines the labeled and unlabeled terms (see the sketch below).
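A companion sketch of the SSL terms, assuming a plain MSE loss toward a running pseudo MOS; the function names and the momentum value are illustrative assumptions.

```python
import torch
import torch.nn.functional as F


def ssl_loss(pred_labeled, mos, pred_pu, pseudo_mos):
    """SSL loss sketch: MSE on labeled data plus MSE between the FR-IQA
    predictions on positive unlabeled samples and their pseudo MOS."""
    return F.mse_loss(pred_labeled, mos) + F.mse_loss(pred_pu, pseudo_mos)


def update_pseudo_mos(pseudo_mos, new_pred, momentum=0.9):
    """Moving-average update of the pseudo MOS for unlabeled samples;
    the momentum value is an assumption."""
    return momentum * pseudo_mos + (1 - momentum) * new_pred
```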
JSPL Model
We jointly optimize both the SSL and the PU learning module. Training steps (see the training-loop sketch after this list):
1. Update the binary classifier.
2. Update the pseudo MOS for each unlabeled sample.
3. Incorporate the positive unlabeled samples into the mini-batch of labeled samples to update the FR-IQA network.
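A toy training loop following the three steps, reusing the pu_loss and update_pseudo_mos sketches above. The linear stand-in networks, the random tensors, and the 0.5 selection threshold are placeholders for illustration, not the paper's architecture or hyperparameters.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy stand-ins for the FR-IQA network and the binary classifier; the real
# models take distorted/reference image pairs, vectors keep the sketch small.
fr_iqa_net = nn.Linear(8, 1)
classifier = nn.Sequential(nn.Linear(8, 1), nn.Sigmoid())
opt_iqa = torch.optim.Adam(fr_iqa_net.parameters(), lr=1e-3)
opt_cls = torch.optim.Adam(classifier.parameters(), lr=1e-3)

x_lab, mos = torch.randn(16, 8), torch.rand(16, 1)   # labeled mini-batch
x_unl = torch.randn(16, 8)                           # unlabeled mini-batch
pseudo_mos = torch.zeros(16, 1)                      # running pseudo MOS

for step in range(3):
    # 1. Update the binary classifier with the PU learning loss.
    loss_pu = pu_loss(classifier(x_lab).squeeze(1), classifier(x_unl).squeeze(1))
    opt_cls.zero_grad()
    loss_pu.backward()
    opt_cls.step()

    # 2. Update the pseudo MOS of each unlabeled sample by moving average.
    with torch.no_grad():
        pseudo_mos = update_pseudo_mos(pseudo_mos, fr_iqa_net(x_unl))

    # 3. Positive unlabeled samples (score > 0.5, an assumed threshold) join
    #    the labeled mini-batch to update the FR-IQA network.
    keep = classifier(x_unl).detach().squeeze(1) > 0.5
    loss_ssl = F.mse_loss(fr_iqa_net(x_lab), mos)
    if keep.any():
        loss_ssl = loss_ssl + F.mse_loss(fr_iqa_net(x_unl)[keep], pseudo_mos[keep])
    opt_iqa.zero_grad()
    loss_ssl.backward()
    opt_iqa.step()
```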
Local sliced Wasserstein (LocalSW) distance (a sketch follows below)
1. Divide the feature maps into patches.
2. Pass them into a projection operator.
3. Sort and calculate the difference for each channel to form the LocalSW distance map.
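A minimal sketch of this computation between two feature maps, assuming a random 1x1-conv projection, non-overlapping 8x8 patches, and the mean absolute difference of sorted values as the per-channel 1-D Wasserstein distance; the paper's actual projection operator and patch size may differ.

```python
import torch
import torch.nn.functional as F


def local_sw_distance(feat_a, feat_b, patch=8, proj_dim=32):
    """LocalSW distance sketch between two (B, C, H, W) feature maps."""
    b, c = feat_a.shape[:2]

    # 1. Project both feature maps with a shared (here random) 1x1 projection.
    weight = torch.randn(proj_dim, c, 1, 1, device=feat_a.device)
    pa = F.conv2d(feat_a, weight)                 # (B, proj_dim, H, W)
    pb = F.conv2d(feat_b, weight)

    # 2. Divide the projected maps into non-overlapping patches.
    pa = F.unfold(pa, patch, stride=patch).view(b, proj_dim, patch * patch, -1)
    pb = F.unfold(pb, patch, stride=patch).view(b, proj_dim, patch * patch, -1)

    # 3. Sort the values inside each patch per channel; the mean absolute
    #    difference of the sorted sequences is the 1-D Wasserstein distance,
    #    giving one value per channel and patch (the LocalSW distance map).
    pa, _ = pa.sort(dim=2)
    pb, _ = pb.sort(dim=2)
    return (pa - pb).abs().mean(dim=2)            # (B, proj_dim, num_patches)


# Example: two 32x32 feature maps with 64 channels -> a 4x4 patch grid.
fa, fb = torch.randn(1, 64, 32, 32), torch.randn(1, 64, 32, 32)
print(local_sw_distance(fa, fb).shape)            # torch.Size([1, 32, 16])
```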
Experiment settings
Labeled data: LIVE, CSIQ, TID2013, KADID-10k, and PIPAL.
Unlabeled data: 1000 patches from DIV2K and Flickr2K as reference images.
- ESRGAN synthesis: 50 distorted images for each reference image
- DnCNN synthesis: 50 distorted images for each reference image
- KADID-10k synthesis: 100 distorted images for each reference image
Comments
Pros:
- The writing is good.
- The motivation and experiments are convincing.
Cons:
- No experimental results are given for the classifier.