TAC-GAN – Text Conditioned Auxiliary Classifier Generative Adversarial Network
1. What is this paper about?
It proposes the Text Conditioned Auxiliary Classifier Generative Adversarial Network (TAC-GAN), a text-to-image Generative Adversarial Network for synthesizing images from their text descriptions.
2. What is better than previous work?
In this approach, class information is used to synthesize more diverse images and to improve their structural coherence.
3. What are the important parts of the technique and methods?
The use of AC-GAN makes it possible to generate diverse images. TAC-GAN exploits AC-GAN's property of producing diverse outputs when conditioned on a class label: by additionally conditioning on a text description, it can synthesize diverse images that match that description.
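The conditioning scheme above can be sketched in a few lines of NumPy. This is only an illustrative shape-level sketch, not the authors' implementation: the dimensions, the linear heads, and the stand-in text embedding are all assumptions; the real model uses deep convolutional networks and a learned text encoder.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes (assumptions, not the paper's exact hyperparameters);
# Oxford-102 has 102 flower classes.
Z_DIM, TEXT_DIM, N_CLASSES, FEAT_DIM = 100, 128, 102, 64

def generator_input(z, text_emb):
    """The generator is conditioned by concatenating noise with a text embedding."""
    return np.concatenate([z, text_emb], axis=-1)

def discriminator_heads(features, w_source, w_class):
    """AC-GAN-style discriminator: one head scores real/fake, an auxiliary
    head classifies the image into one of the N_CLASSES categories."""
    source_logit = features @ w_source                 # real/fake score
    class_logits = features @ w_class                  # auxiliary class logits
    exp = np.exp(class_logits - class_logits.max())    # softmax over classes
    return source_logit, exp / exp.sum()

z = rng.normal(size=Z_DIM)
text_emb = rng.normal(size=TEXT_DIM)   # stand-in for a learned text encoder output
g_in = generator_input(z, text_emb)
print(g_in.shape)  # (228,) — noise and text embedding concatenated
```

The auxiliary class head is what pushes the generator toward class-consistent, diverse samples; the text embedding carries the fine-grained description.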
4. How did they verify it?
It is validated on the Oxford-102 dataset, comparing against three state-of-the-art approaches (e.g., StackGAN) using Inception Score (IS) and Multi-Scale Structural Similarity (MS-SSIM) as evaluation metrics.
It shows slightly better results than those of other state-of-the-art models.
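For reference, the Inception Score used in the comparison is the exponential of the mean KL divergence between each sample's predicted class distribution p(y|x) and the marginal p(y). A minimal sketch (assuming the class probabilities have already been produced by an Inception classifier):

```python
import numpy as np

def inception_score(probs, eps=1e-12):
    """Inception Score: exp(mean_x KL(p(y|x) || p(y))).
    Higher means samples are both confidently classified and diverse."""
    probs = np.asarray(probs, dtype=float)
    marginal = probs.mean(axis=0)  # p(y), averaged over samples
    kl = np.sum(probs * (np.log(probs + eps) - np.log(marginal + eps)), axis=1)
    return float(np.exp(kl.mean()))

# Uniform, identical predictions: no confidence, no diversity -> score 1.0
uniform = np.full((10, 5), 0.2)
print(inception_score(uniform))  # 1.0
```

One-hot predictions spread evenly over k classes give the maximum score k, which is why IS rewards both sharp per-image predictions and variety across images; MS-SSIM complements it by directly measuring pairwise image similarity.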
Next paper
A. Odena, C. Olah, and J. Shlens. Conditional image synthesis with auxiliary classifier gans. arXiv preprint arXiv:1610.09585, 2016.