Generative Adversarial Networks
GANs (mainly in image synthesis)
Survey Papers / Repos
Are GANs Created Equal? A Large-Scale Study [1711.10337]
Which Training Methods for GANs do actually Converge? [1801.04406]
A Large-Scale Study on Regularization and Normalization in GANs [1807.04720]
Resources
TF-GAN: TensorFlow-GAN
lzhbrian/metrics: IS, FID implementation in TF, PyTorch
Models
Loss functions
Vanilla GAN [1406.2661]
EBGAN [1609.03126]
LSGAN [1611.04076]
WGAN [1701.07875]
BEGAN [1703.10717]
Hinge Loss [1705.02894]
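The losses above differ mainly in how the discriminator's real/fake logits are penalized. A minimal PyTorch sketch of a few of them (function names and tensor shapes are illustrative, not from the papers):

```python
import torch

def vanilla_gan_d_loss(d_real, d_fake):
    # Vanilla GAN [1406.2661]: -log D(x) - log(1 - D(G(z))) on raw logits
    return (torch.nn.functional.softplus(-d_real)
            + torch.nn.functional.softplus(d_fake)).mean()

def lsgan_d_loss(d_real, d_fake):
    # LSGAN [1611.04076]: least-squares targets 1 (real) / 0 (fake)
    return 0.5 * ((d_real - 1).pow(2) + d_fake.pow(2)).mean()

def wgan_d_loss(d_real, d_fake):
    # WGAN [1701.07875]: critic maximizes D(x) - D(G(z)), so minimize the negation
    return (d_fake - d_real).mean()

def hinge_d_loss(d_real, d_fake):
    # Hinge loss [1705.02894], later used in SAGAN/BigGAN
    return (torch.relu(1.0 - d_real) + torch.relu(1.0 + d_fake)).mean()
```

All four take raw discriminator outputs (logits), so no sigmoid layer is assumed in the network itself.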
Regularization
Gradient Penalty [1704.00028]
DRAGAN [1705.07215]
SNGAN [1802.05957]
Consistency Regularization [1910.12027]
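Of the regularizers listed, the gradient penalty is the easiest to sketch. A minimal PyTorch version of WGAN-GP [1704.00028], assuming a scalar-output critic; the toy linear critic and the `lam=10` default are illustrative choices, not prescribed by this note:

```python
import torch

def gradient_penalty(critic, real, fake, lam=10.0):
    # Interpolate between real and fake samples (one eps per sample)
    eps = torch.rand(real.size(0), 1)
    x_hat = (eps * real + (1 - eps) * fake).requires_grad_(True)
    out = critic(x_hat)
    # create_graph=True so the penalty itself can be backpropagated
    grads, = torch.autograd.grad(out.sum(), x_hat, create_graph=True)
    # Penalize deviation of the per-sample gradient norm from 1
    return lam * ((grads.norm(2, dim=1) - 1) ** 2).mean()

# Toy usage with a linear critic on 8-dim "samples"
critic = torch.nn.Linear(8, 1)
real, fake = torch.randn(4, 8), torch.randn(4, 8)
gp = gradient_penalty(critic, real, fake)
```

DRAGAN [1705.07215] uses the same penalty form but interpolates around real samples with noise instead of toward fakes.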
Architecture
Deep Convolutional GAN (DCGAN) [1511.06434]
Progressive Growing of GANs (PGGAN) [1710.10196]
Self-Attention GAN (SAGAN) [1805.08318]
BigGAN [1809.11096]
Style-based Generator with Mapping Network (StyleGAN) [1812.04948]
LOGAN: Latent Optimisation for Generative Adversarial Networks [1912.00953]
Conditional GANs
Vanilla Conditional GANs [1411.1784]
Auxiliary Classifier GAN (ACGAN) [1610.09585]
Others
Tricks
Two time-scale update rule (TTUR) [bioinf-jku/TTUR] [1706.08500]
Self-Supervised GANs via Auxiliary Rotation Loss (SS-GAN) [1811.11212]
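TTUR amounts to giving the discriminator a faster optimizer than the generator. A minimal sketch (the 1e-4 / 4e-4 pair follows common SAGAN/BigGAN practice; the toy modules are placeholders):

```python
import torch

# Placeholder generator and discriminator modules
G = torch.nn.Linear(16, 32)
D = torch.nn.Linear(32, 1)

# TTUR [1706.08500]: separate learning rates, D updated "faster" than G
opt_g = torch.optim.Adam(G.parameters(), lr=1e-4, betas=(0.0, 0.9))
opt_d = torch.optim.Adam(D.parameters(), lr=4e-4, betas=(0.0, 0.9))
```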
Metrics (my implementation: lzhbrian/metrics)
Inception Score [1606.03498] [1801.01973]
Assumption: an image containing a clear object should give a confident (low-entropy) class posterior $p(y|x)$ under a pretrained Inception network, while a diverse generator should give a spread-out (high-entropy) marginal $p(y)$.
Formulation
$$\mathrm{IS} = \exp\Big( \mathbb{E}_{x \sim p_g}\, \mathrm{KL}\big( p(y|x) \,\|\, p(y) \big) \Big)$$
where $p(y|x)$ is the Inception class posterior and $p(y) = \mathbb{E}_{x \sim p_g}[\, p(y|x) \,]$ is its marginal over generated samples; higher is better.
Reference
Official TF implementation: openai/improved-gan
PyTorch implementation: sbarratt/inception-score-pytorch
The TF implementation seems to be the more reliable one.
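A minimal NumPy sketch of the Inception Score from precomputed class probabilities; the `p_yx` input (an N x C array of Inception softmax outputs) is an assumed starting point, and real implementations such as lzhbrian/metrics also average the score over 10 splits:

```python
import numpy as np

def inception_score(p_yx, eps=1e-16):
    # Marginal class distribution p(y), averaged over samples
    p_y = p_yx.mean(axis=0, keepdims=True)
    # Per-sample KL( p(y|x) || p(y) ), then exponentiate the mean
    kl = (p_yx * (np.log(p_yx + eps) - np.log(p_y + eps))).sum(axis=1)
    return float(np.exp(kl.mean()))
```

Uniform predictions give a score of 1; confident, evenly-spread one-hot predictions give a score equal to the number of classes.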
FID Score [1706.08500]
Formulation
$$\mathrm{FID} = \lVert \mu_r - \mu_g \rVert_2^2 + \mathrm{Tr}\big( \Sigma_r + \Sigma_g - 2\,(\Sigma_r \Sigma_g)^{1/2} \big)$$
where $(\mu_r, \Sigma_r)$ and $(\mu_g, \Sigma_g)$ are the mean and covariance of Inception (pool3) features of real and generated samples; lower is better.
Reference
Official TF implementation: bioinf-jku/TTUR
PyTorch implementation: mseitzer/pytorch-fid
The TF implementation seems to be the more reliable one.
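A minimal sketch of the FID computation on precomputed Inception features; the N x D feature arrays are assumed inputs, and production code such as mseitzer/pytorch-fid adds numerical safeguards around the matrix square root:

```python
import numpy as np
from scipy.linalg import sqrtm

def fid(feat_r, feat_g):
    # Mean and covariance of real and generated feature sets
    mu_r, mu_g = feat_r.mean(axis=0), feat_g.mean(axis=0)
    cov_r = np.cov(feat_r, rowvar=False)
    cov_g = np.cov(feat_g, rowvar=False)
    # Matrix square root of the covariance product
    covmean = sqrtm(cov_r @ cov_g)
    if np.iscomplexobj(covmean):
        covmean = covmean.real  # discard tiny imaginary parts from sqrtm
    diff = mu_r - mu_g
    return float(diff @ diff + np.trace(cov_r + cov_g - 2 * covmean))
```

Identical feature sets should give a score near zero; shifting one set away from the other inflates the mean-difference term.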