Contextual GAN for Image Generation from Sketch Constraint


Utilizing a contextual GAN, this project aims to automatically generate photographic images from hand-sketched objects. It addresses the challenge of aligning output with free-hand sketches while offering advantages such as a unified network for sketch-image understanding. Image generation is posed as an image completion problem with a weak sketch constraint, enabling poses and shapes beyond those of the input sketch. Contextual loss and back-propagation are used to find the generated joint image that best resembles the input sketch.


Uploaded on Oct 06, 2024



Presentation Transcript


  1. Image Generation from Sketch Constraint Using Contextual GAN

  2. Introduction (1/4) Goal: Sketch-to-image generation aims to automatically generate a photographic image of the hand-sketched object.

  3. Introduction (2/4) Problem with prior work: Existing methods force the output to align strictly with the input edges. This is highly problematic in sketch-to-image generation when the input is a free-hand sketch.

  4. Introduction (3/4) We pose image generation as an image completion problem, with the sketch providing a weak contextual constraint.

  5. Introduction (4/4) Advantages: There are no separate domains for image and sketch learning; a single network learns the joint sketch-image pair, which is treated as one image. Because the sketch constraint is weak, the generated image may exhibit poses and shapes beyond those of the input sketch, which need not strictly correspond to photographic objects. From the joint image's point of view, image and sketch are interchangeable, so either one can be swapped in to provide the completion context for the other.

  6. GAN spectrum in image generation task

  7. Contextual GAN

  8. Compute z by objective function To get the closest mapping between the corrupted joint image and the reconstructed joint image, we search for a generated joint image G(z) whose sketch portion best resembles the input sketch.

  9. Compute z by objective function Contextual loss: keeps the sketch portion of the generated joint image close to the input sketch. Perceptual loss: uses the discriminator to encourage the generated joint image to look realistic. The objective function for z is the weighted sum of the two losses.
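The loss formulas on this slide did not survive transcription. In the standard GAN-based completion formulation this approach follows, they can be written as below, where M is the mask selecting the known sketch portion, y is the corrupted joint image, D is the discriminator, and λ weights the perceptual term (all notation assumed here, not taken from the slide):

```latex
\mathcal{L}_{\mathrm{contextual}}(z) = \left\| M \odot G(z) - M \odot y \right\|_1
\qquad
\mathcal{L}_{\mathrm{perceptual}}(z) = \log\bigl(1 - D(G(z))\bigr)
```

```latex
\hat{z} = \arg\min_{z}\; \mathcal{L}_{\mathrm{contextual}}(z) + \lambda\, \mathcal{L}_{\mathrm{perceptual}}(z)
```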

  10. Compute z by objective function Projection through back-propagation: We compute the z vector that minimizes the objective function. In other words, we project the corrupted input onto the z-space of the generator through iterative back-propagation.
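A minimal numpy sketch of this projection step, under loud assumptions: the paper optimizes z through the trained GAN generator with back-propagation, while here a fixed linear map stands in for G and an L2 masked contextual loss (with the perceptual term omitted) keeps the gradient analytic. Only the idea — gradient descent on z, not on the network weights — carries over.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in (NOT the paper's trained network): a fixed linear
# "generator" G(z) = W @ z mapping a 100-D latent to a flattened joint image.
z_dim, img_dim = 100, 4096
W = rng.standard_normal((img_dim, z_dim)) / np.sqrt(z_dim)

def G(z):
    return W @ z

# Corrupted joint image y: only the sketch half is known, selected by mask M.
M = np.zeros(img_dim)
M[: img_dim // 2] = 1.0          # 1 on the sketch portion, 0 elsewhere
y = rng.standard_normal(img_dim)

def contextual_loss(z):
    # L2 variant of the masked contextual loss ||M * (G(z) - y)||^2
    r = M * (G(z) - y)
    return float(r @ r)

def grad(z):
    # Analytic gradient of the L2 contextual loss w.r.t. z
    return 2.0 * W.T @ (M * (G(z) - y))

# "Projection through back-propagation": descend on z while W stays fixed.
z = rng.uniform(-1.0, 1.0, z_dim)
lr = 1e-3
losses = [contextual_loss(z)]
for _ in range(200):
    z -= lr * grad(z)
    losses.append(contextual_loss(z))

print(f"loss: {losses[0]:.1f} -> {losses[-1]:.1f}")
```

With a real GAN the gradient would come from automatic differentiation through the generator (and discriminator, for the perceptual term) rather than this closed form.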

  11. Compute z by objective function Initialization: If the sketch portion of the initial G(z) perceptually exhibits a large gap from the input sketch, it will be hard for gradient descent to map the corrupted image to the closest z in the latent space. To address this problem, we sample N uniformly random noise vectors and obtain their respective initialized sketches, then compute the KL-divergence between the input sketch and each of the N initialized sketches. The one giving the lowest KL-divergence is the best initialization among the N samples and is used as the initialized sketch.
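The best-of-N selection can be sketched as follows. Assumptions to note: the slide does not say which distributions enter the KL-divergence, so normalized intensity histograms are used here as a proxy, and a toy function stands in for the generator's sketch output.

```python
import numpy as np

rng = np.random.default_rng(1)

def kl_divergence(p, q, eps=1e-8):
    """KL(p || q) between two normalized histograms."""
    p = p / p.sum()
    q = q / q.sum()
    return float(np.sum(p * np.log((p + eps) / (q + eps))))

def sketch_histogram(img, bins=32):
    """Normalized intensity histogram of a sketch (an assumed proxy for
    the distribution the slide compares)."""
    hist, _ = np.histogram(img, bins=bins, range=(0.0, 1.0))
    return hist.astype(float) + 1e-8

def G_sketch(z):
    # Toy stand-in for the sketch portion of G(z), mapped into [0, 1].
    return (np.tanh(np.outer(z[:32], z[32:64]).ravel()) + 1.0) / 2.0

input_sketch = rng.uniform(0.0, 1.0, 1024)   # placeholder input sketch
p = sketch_histogram(input_sketch)

# Sample N candidate z vectors; keep the one whose initialized sketch has
# the lowest KL-divergence from the input sketch.
N = 10
candidates = [rng.uniform(-1.0, 1.0, 100) for _ in range(N)]
divs = [kl_divergence(p, sketch_histogram(G_sketch(z))) for z in candidates]
best_z = candidates[int(np.argmin(divs))]
print("best candidate index:", int(np.argmin(divs)))
```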

  12. Dataset Three datasets are used: the CelebA dataset (around 200K images, face category), the CUB-200-2011 dataset (around 11.7K images, bird category), and the Stanford Cars dataset (around 16K images, car category).

  13. Dataset We create three sketch styles for every category.

  14. Training Details Kernel size: 5. Learning rate: 2e-4 for the XDoG style, 1e-5 for the other styles. z is a 100-D random noise vector, uniformly sampled from -1 to 1. Batch size: 64. Epochs: 200 for every style.
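Collected as a configuration sketch (the dict keys, the per-style structure, and the helper name are my own, not the authors' code):

```python
# Training hyperparameters as listed on the slide; naming is hypothetical.
config = {
    "kernel_size": 5,
    "learning_rate": {"xdog": 2e-4, "default": 1e-5},  # XDoG vs. other styles
    "z_dim": 100,                  # z ~ Uniform(-1, 1)^100
    "z_range": (-1.0, 1.0),
    "batch_size": 64,
    "epochs_per_style": 200,
}

def lr_for_style(style):
    """Return the slide's learning rate for a given sketch style."""
    return config["learning_rate"].get(style, config["learning_rate"]["default"])

print(lr_for_style("xdog"), lr_for_style("pencil"))
```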

  15. Results Results on CelebA dataset in three sketch styles

  16. Results Results on CUB-200-2011 dataset in three sketch styles

  17. Results Results on the Stanford Cars dataset in three sketch styles

  18. Comparison

  19. Comparison We use SSIM and verification accuracy to compare our results with other work.
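For reference, the SSIM formula can be illustrated with a simplified version computed from global image statistics. Standard SSIM (what the comparison presumably uses, e.g. via scikit-image) averages this quantity over local Gaussian windows; this global variant only shows the structure of the metric.

```python
import numpy as np

def ssim_global(x, y, data_range=1.0):
    """Simplified SSIM from global statistics; the standard metric averages
    the same expression over local windows."""
    C1 = (0.01 * data_range) ** 2
    C2 = (0.03 * data_range) ** 2
    mu_x, mu_y = x.mean(), y.mean()
    var_x, var_y = x.var(), y.var()
    cov_xy = ((x - mu_x) * (y - mu_y)).mean()
    num = (2 * mu_x * mu_y + C1) * (2 * cov_xy + C2)
    den = (mu_x**2 + mu_y**2 + C1) * (var_x + var_y + C2)
    return float(num / den)

rng = np.random.default_rng(2)
a = rng.uniform(0.0, 1.0, (64, 64))
noisy = np.clip(a + rng.normal(0.0, 0.1, a.shape), 0.0, 1.0)

print(round(ssim_global(a, a), 4))   # identical images -> 1.0
print(ssim_global(a, noisy) < 1.0)
```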

  20. Bi-directional Generation This image shows some convincing results on generating a sketch from a photographic image.

  21. Limitations We hope the generated image preserves the identity of the input sketch. However, given the sparse visual content of a free-hand sketch, identity-preserving face generation cannot be guaranteed. The model may also fail to capture some attributes associated with the input.
