StyleGAN

Concepts: Equalized Learning Rate (weight initialization trick)
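A minimal NumPy sketch of the equalized-learning-rate idea (not the official implementation): weights are stored at unit variance and the He-init constant is applied at runtime, so every layer sees the same effective learning rate under plain SGD. The function name `equalized_linear` and the shapes are illustrative assumptions.

```python
import numpy as np

def equalized_linear(x, weight, bias, gain=np.sqrt(2)):
    # Weights are stored as draws from N(0, 1); the He scale
    # gain / sqrt(fan_in) is applied at every forward pass
    # instead of being baked into the initialization.
    fan_in = weight.shape[1]
    scale = gain / np.sqrt(fan_in)
    return x @ (weight * scale).T + bias

rng = np.random.default_rng(0)
w = rng.standard_normal((8, 512))   # stored unscaled
b = np.zeros(8)
x = rng.standard_normal((4, 512))
y = equalized_linear(x, w, b)
print(y.shape)  # (4, 8)
```

Because the scale multiplies the weight inside the forward pass, its gradient (and hence the effective step size) is normalized per layer, which is the point of the trick.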

Reference:

If we were willing to sacrifice scale-specific controls (see video), we could simply remove the normalization, thus removing the artifacts and also improving FID slightly.

Our demodulation is also related to weight normalization [37] that performs the same calculation as a part of reparameterizing the weight tensor. Prior work has identified weight normalization as beneficial in the context of GAN training [43].
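The demodulation the quote refers to can be sketched in NumPy as follows. This is a simplified, per-sample version under my own assumptions (function name `demodulate`, a single style vector, no convolution grouping), not the paper's code: each output filter is modulated by the style and then rescaled to unit L2 norm, which is the same statistic weight normalization computes.

```python
import numpy as np

def demodulate(weight, style, eps=1e-8):
    # weight: (out_ch, in_ch, kh, kw); style: (in_ch,) per-channel scales
    w = weight * style[None, :, None, None]              # modulate
    sigma = np.sqrt((w ** 2).sum(axis=(1, 2, 3)) + eps)  # per-filter L2 norm
    return w / sigma[:, None, None, None]                # demodulate

rng = np.random.default_rng(1)
w = rng.standard_normal((16, 8, 3, 3))
s = rng.standard_normal(8) + 1.0
w_dd = demodulate(w, s)
# each output filter now has (approximately) unit L2 norm
print(np.allclose((w_dd ** 2).sum(axis=(1, 2, 3)), 1.0, atol=1e-4))  # True
```

Baking the normalization into the weights like this (rather than normalizing activations) is what lets StyleGAN2 drop instance normalization while keeping the style's effect scale-controlled.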

  • Training
    • https://github.com/l4rz/practical-aspects-of-stylegan2-training