Generative Adversarial Networks
GANs (mainly in image synthesis)
Are GANs Created Equal? A Large-Scale Study
Which Training Methods for GANs do actually Converge?
A Large-Scale Study on Regularization and Normalization in GANs
TensorFlow-GAN
IS, FID implementation in TF, PyTorch
Vanilla GAN
EBGAN
LSGAN
WGAN
BEGAN
Hinge Loss
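The loss variants above mostly differ in how the discriminator/critic output is penalized. Below is a minimal numpy sketch of three of them (vanilla non-saturating GAN, hinge loss, and WGAN), assuming `d_real` and `d_fake` are discriminator logits on a batch of real and generated samples; the variable names and values are illustrative, not from any particular implementation:

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Hypothetical discriminator logits on real and fake batches.
d_real = np.array([2.0, 1.5, 0.5])
d_fake = np.array([-1.0, -0.5, 0.2])

# Vanilla GAN: binary cross-entropy on logits (non-saturating generator form).
d_loss_vanilla = -np.mean(np.log(sigmoid(d_real))) - np.mean(np.log(1.0 - sigmoid(d_fake)))
g_loss_vanilla = -np.mean(np.log(sigmoid(d_fake)))

# Hinge loss (as used in SAGAN / BigGAN).
d_loss_hinge = np.mean(relu(1.0 - d_real)) + np.mean(relu(1.0 + d_fake))
g_loss_hinge = -np.mean(d_fake)

# WGAN critic loss: difference of means (Lipschitz constraint enforced elsewhere,
# e.g. by weight clipping or a gradient penalty).
d_loss_wgan = np.mean(d_fake) - np.mean(d_real)
g_loss_wgan = -np.mean(d_fake)
```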
Assumption
MEANINGFUL: A generated image should be clear, so the output probability of a classifier network should be sharply skewed toward one class (e.g. [0.9, 0.05, ...]); the conditional distribution p(y|x) is of low entropy.
DIVERSITY: If we have 10 classes, the generated images should be evenly distributed among them, so that the marginal distribution p(y) is of high entropy.
Better models: the KL divergence between p(y|x) and p(y) should be high.
Formulation
IS(G) = exp( E_{x ~ p_g} D_KL( p(y|x) || p(y) ) )
where
x is sampled from the generated data
p(y|x) is the output probability of Inception v3 when the input is x
p(y) is the average output probability over all generated data (from Inception v3, a 1000-dim vector)
IS(G) ∈ [1, C], where C is the dimension of the output probability.
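The formulation above can be computed directly from a matrix of classifier probabilities. A minimal numpy sketch, where the `inception_score` helper is an illustrative name (not the official implementation) and the toy inputs show the two extremes of the score:

```python
import numpy as np

def inception_score(probs, eps=1e-12):
    """probs: (N, C) softmax outputs p(y|x_i) from a classifier (e.g. Inception v3)."""
    p_y = probs.mean(axis=0, keepdims=True)        # marginal distribution p(y)
    kl = probs * (np.log(probs + eps) - np.log(p_y + eps))
    # IS = exp( E_x KL(p(y|x) || p(y)) )
    return float(np.exp(kl.sum(axis=1).mean()))

# Worst case: every image yields the same flat distribution -> IS = 1.
flat = np.full((4, 10), 0.1)
# Best case: each class predicted perfectly and covered uniformly -> IS = C = 10.
sharp = np.eye(10)
```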
Reference
Formulation
FID = ||μ_r − μ_g||² + Tr( Σ_r + Σ_g − 2 (Σ_r Σ_g)^{1/2} )
where
x_r and x_g are the 2048-dim activations of the Inception v3 pool3 layer
μ_r is the mean of the real photos' features
μ_g is the mean of the generated photos' features
Σ_r is the covariance matrix of the real photos' features
Σ_g is the covariance matrix of the generated photos' features
A lower FID means the generated distribution is closer to the real one.
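Given the two sets of statistics, the score is a closed-form expression. A minimal numpy sketch; the `sqrtm_psd` and `fid` helpers are illustrative names (official implementations typically use `scipy.linalg.sqrtm` for the matrix square root):

```python
import numpy as np

def sqrtm_psd(m):
    """Matrix square root of a symmetric PSD matrix via eigendecomposition."""
    vals, vecs = np.linalg.eigh(m)
    return (vecs * np.sqrt(np.clip(vals, 0.0, None))) @ vecs.T

def fid(mu_r, sigma_r, mu_g, sigma_g):
    """FID = ||mu_r - mu_g||^2 + Tr(Sigma_r + Sigma_g - 2 (Sigma_r Sigma_g)^{1/2})."""
    diff = mu_r - mu_g
    s_r_half = sqrtm_psd(sigma_r)
    # Tr((Sigma_r Sigma_g)^{1/2}) = Tr((Sigma_r^{1/2} Sigma_g Sigma_r^{1/2})^{1/2}),
    # and the latter argument is symmetric PSD, so eigh applies.
    tr_covmean = np.trace(sqrtm_psd(s_r_half @ sigma_g @ s_r_half))
    return float(diff @ diff + np.trace(sigma_r) + np.trace(sigma_g) - 2.0 * tr_covmean)

# Identical statistics give FID = 0; shifting the mean by 1 in each of the
# 4 dimensions adds exactly ||diff||^2 = 4.
mu = np.zeros(4)
sigma = np.eye(4)
```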
Reference
Gradient Penalty
DRAGAN
SNGAN
Consistency Regularization
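As a numeric illustration of the gradient penalty idea: WGAN-GP penalizes the critic's input-gradient norm for deviating from 1, evaluated on random interpolates between real and fake samples. The sketch below uses a toy linear critic D(x) = w·x (whose input gradient is w everywhere) so the penalty can be computed without autograd; all names and values are illustrative:

```python
import numpy as np

# Toy critic D(x) = w . x; its gradient w.r.t. the input is w at every point.
w = np.array([0.6, 0.8, 1.2])

rng = np.random.default_rng(0)
x_real = rng.normal(size=(8, 3))
x_fake = rng.normal(size=(8, 3))

# Random interpolates x_hat = eps * x_real + (1 - eps) * x_fake.
eps = rng.uniform(size=(8, 1))
x_hat = eps * x_real + (1.0 - eps) * x_fake

grad = np.tile(w, (8, 1))                  # grad_x D(x_hat) = w for a linear critic
grad_norm = np.linalg.norm(grad, axis=1)   # per-sample L2 norm of the gradient
lam = 10.0                                 # penalty weight used in the WGAN-GP paper
penalty = lam * np.mean((grad_norm - 1.0) ** 2)
```

In a real setup the gradient is obtained with automatic differentiation (e.g. `torch.autograd.grad`) and the penalty is added to the critic loss.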
Deep Convolutional GAN (DCGAN)
Progressive Growing of GANs (PGGAN)
Self Attention GAN (SAGAN)
BigGAN
Style-Based Generator (StyleGAN)
Mapping Network (StyleGAN)
LOGAN: Latent Optimisation for Generative Adversarial Networks
Vanilla Conditional GANs
Auxiliary Classifier GAN (ACGAN)
Two time-scale update rule (TTUR)
Self-Supervised GANs via Auxiliary Rotation Loss (SS-GAN)
Inception Score
Official TF implementation is in
Pytorch Implementation:
TF seems to provide a
FID Score
Official TF implementation:
Pytorch Implementation:
TF seems to provide a