Jam, Jireh ORCID: https://orcid.org/0000-0003-2309-4655, Kendrick, Connah, Drouard, Vincent, Walker, Kevin, Hsu, Gee-Sern and Yap, Moi Hoon ORCID: https://orcid.org/0000-0001-7681-4287 (2021) R-MNet: a perceptual adversarial network for image inpainting. In: 2021 IEEE Winter Conference on Applications of Computer Vision (WACV), 3 January 2021 - 8 January 2021, Waikoloa, Hawaii, USA.
Accepted Version, available under License In Copyright. Download (2MB)
Abstract
Facial image inpainting is a widely studied problem, and in recent years the introduction of Generative Adversarial Networks has led to improvements in the field. Unfortunately, some issues persist, in particular when blending the missing pixels with the visible ones. We address the problem by proposing a Wasserstein GAN combined with a new reverse mask operator, namely the Reverse Masking Network (R-MNet), a perceptual adversarial network for image inpainting. The reverse mask operator transfers the reverse-masked image to the end of the encoder-decoder network, leaving only valid pixels to be inpainted. Additionally, we propose a new loss function computed in feature space that targets only valid pixels, combined with adversarial training. Together these capture the data distribution and generate images similar to those in the training data, producing realistic and coherent outputs. We evaluate our method on publicly available datasets and compare with state-of-the-art methods. We show that our method generalizes to high-resolution inpainting tasks, and produces outputs that are more realistic and plausible to the human visual system than those of state-of-the-art methods. https://github.com/Jireh-Jam/R-MNet-Inpainting-keras
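The reverse-mask composition described in the abstract can be pictured with a minimal NumPy sketch. This is an illustrative assumption, not the authors' Keras implementation (which is in the linked repository): it assumes a mask convention where 1 marks missing pixels and 0 marks visible ones, and the hypothetical function names are chosen here for clarity.

# Minimal sketch of the reverse-mask idea (illustrative assumption, not the
# authors' code). Mask convention assumed: 1.0 = missing pixel, 0.0 = visible.
import numpy as np

def reverse_mask(mask):
    # The reverse mask is the complement of the hole mask:
    # 1.0 on visible pixels, 0.0 inside the hole.
    return 1.0 - mask

def compose_output(generated, image, mask):
    # Keep the generator's prediction only inside the hole and re-insert the
    # original visible pixels via the reverse mask, so blending is restricted
    # to the region that actually needs inpainting.
    return generated * mask + image * reverse_mask(mask)

# Example: a 256x256 RGB image with a square hole in the centre.
image = np.random.rand(256, 256, 3).astype(np.float32)
mask = np.zeros((256, 256, 1), dtype=np.float32)
mask[96:160, 96:160, :] = 1.0                                # hole region
generated = np.random.rand(256, 256, 3).astype(np.float32)  # stand-in for the generator output
output = compose_output(generated, image, mask)

In this sketch the feature-space (perceptual) loss and adversarial training mentioned in the abstract would then be computed on such composed outputs; their exact formulation is given in the paper and repository.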