Are GANs Created Equal? A Large-Scale Study [1711.10337]
Which Training Methods for GANs do actually Converge? [1801.04406]
A Large-Scale Study on Regularization and Normalization in GANs [1807.04720]
Vanilla GAN [1406.2661]
Hinge Loss [1705.02894]
Gradient Penalty [1704.00028]
Consistency Regularization [1910.12027]
Deep Convolutional GAN (DCGAN) [1511.06434]
Progressive Growing of GANs (PGGAN) [1710.10196]
Self Attention GAN (SAGAN) [1805.08318]
Style-Based Generator (StyleGAN) [1812.04948]
Mapping Network (StyleGAN) [1812.04948]
LOGAN: Latent Optimisation for Generative Adversarial Networks [1912.00953]
Self-Supervised GANs via Auxiliary Rotation Loss (SS-GAN) [1811.11212]
Inception Score [1606.03498]
MEANINGFUL: each generated image should be clear, so the classifier's output probability p(y|x) should be heavily skewed toward one class (e.g. [0.9, 0.05, ...]), i.e. p(y|x) is of low entropy.
DIVERSITY: if there are 10 classes, the generated images should be evenly distributed over them, so that the marginal distribution p(y) is of high entropy.
Better models: the KL divergence between p(y|x) and p(y) should be high.
x is sampled from the generated data.
p(y|x) is the output probability of Inception v3 when the input is x.
p(y) is the average output probability over all generated data (a 1000-dim vector from Inception v3).
IS = exp(E_x[KL(p(y|x) || p(y))]), where KL(p || q) = Σ_{i=1}^{C} p_i log(p_i / q_i) and C is the dimension of the output probability.
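The score above can be sketched in NumPy, assuming an (N, C) matrix of softmax outputs from Inception v3 has already been computed (the function name `inception_score` is mine, not from any of the cited papers):

```python
import numpy as np

def inception_score(probs, eps=1e-12):
    """Inception Score from an (N, C) matrix of class probabilities
    (e.g. Inception v3 softmax outputs, C = 1000).

    IS = exp( E_x[ KL(p(y|x) || p(y)) ] )
    """
    p_y = probs.mean(axis=0, keepdims=True)                   # marginal p(y), shape (1, C)
    kl = probs * (np.log(probs + eps) - np.log(p_y + eps))    # elementwise p log(p/q)
    kl = kl.sum(axis=1)                                       # KL(p(y|x) || p(y)) per sample
    return float(np.exp(kl.mean()))
```

A sanity check of the two properties: one-hot rows over 10 classes (meaningful and diverse) give IS ≈ 10, while uniform rows (no meaning at all) give IS = 1.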
FID Score [1706.08500]
x_r and x_g are the 2048-dim activations of the Inception v3 pool3 layer for real and generated images.
μ_r is the mean of the real photos' features.
μ_g is the mean of the generated photos' features.
Σ_r is the covariance matrix of the real photos' features.
Σ_g is the covariance matrix of the generated photos' features.
FID = ||μ_r - μ_g||² + Tr(Σ_r + Σ_g - 2(Σ_r Σ_g)^(1/2)); lower is better.
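The distance can be sketched in NumPy, assuming the pool3 features for both sets are already extracted (function names are mine; the trace term uses Tr((Σ_r Σ_g)^(1/2)) = Tr((Σ_r^(1/2) Σ_g Σ_r^(1/2))^(1/2)), which is equivalent for PSD matrices and keeps the square root on a symmetric matrix):

```python
import numpy as np

def _sqrt_psd(mat):
    """Matrix square root of a symmetric PSD matrix via eigendecomposition."""
    vals, vecs = np.linalg.eigh(mat)
    vals = np.clip(vals, 0.0, None)          # clip tiny negative eigenvalues from noise
    return (vecs * np.sqrt(vals)) @ vecs.T   # V diag(sqrt(vals)) V^T

def fid(feat_real, feat_gen):
    """Fréchet Inception Distance between two (N, D) feature sets
    (D = 2048 for Inception v3 pool3 activations).

    FID = ||mu_r - mu_g||^2 + Tr(S_r + S_g - 2 (S_r S_g)^(1/2))
    """
    mu_r, mu_g = feat_real.mean(axis=0), feat_gen.mean(axis=0)
    sigma_r = np.cov(feat_real, rowvar=False)
    sigma_g = np.cov(feat_gen, rowvar=False)
    sqrt_r = _sqrt_psd(sigma_r)
    tr_covmean = np.trace(_sqrt_psd(sqrt_r @ sigma_g @ sqrt_r))
    diff = mu_r - mu_g
    return float(diff @ diff + np.trace(sigma_r) + np.trace(sigma_g)
                 - 2.0 * tr_covmean)
```

As a check: comparing a feature set against itself gives FID ≈ 0, and shifting every feature by 1.0 leaves the covariances unchanged, so the distance reduces to the squared mean difference.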