## Training examples

### Flowers DDPM

The command to train a DDPM UNet model on the Oxford Flowers dataset:

```bash
python -m torch.distributed.launch \
  --nproc_per_node 4 \
  train_ddpm.py \
  --dataset="huggan/flowers-102-categories" \
  --resolution=64 \
  --output_path="flowers-ddpm" \
  --batch_size=16 \
  --num_epochs=100 \
  --gradient_accumulation_steps=1 \
  --lr=1e-4 \
  --warmup_steps=500 \
  --mixed_precision=no
```

A full training run takes 2 hours on 4xV100 GPUs.

### Pokemon DDPM

The command to train a DDPM UNet model on the Pokemon dataset:

```bash
python -m torch.distributed.launch \
  --nproc_per_node 4 \
  train_ddpm.py \
  --dataset="huggan/pokemon" \
  --resolution=64 \
  --output_path="pokemon-ddpm" \
  --batch_size=16 \
  --num_epochs=100 \
  --gradient_accumulation_steps=1 \
  --lr=1e-4 \
  --warmup_steps=500 \
  --mixed_precision=no
```

A full training run takes 2 hours on 4xV100 GPUs.
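Both commands use the same parallelism flags, so the effective (global) batch size and total optimizer-step count follow from the same arithmetic. A minimal sketch for the Flowers run, assuming the Oxford Flowers-102 dataset's 8,189 images (the dataset size is an assumption, not stated above):

```python
import math

# Values taken from the training command above.
num_gpus = 4        # --nproc_per_node
batch_size = 16     # --batch_size (per GPU)
grad_accum = 1      # --gradient_accumulation_steps
num_epochs = 100    # --num_epochs
dataset_size = 8189  # assumed: total images in Oxford Flowers-102

# Global batch size seen by the optimizer per step.
effective_batch = num_gpus * batch_size * grad_accum  # 64

# Optimizer steps per epoch and over the full run.
steps_per_epoch = math.ceil(dataset_size / effective_batch)
total_steps = steps_per_epoch * num_epochs

print(effective_batch, steps_per_epoch, total_steps)
```

Note that the 500 warmup steps (`--warmup_steps=500`) therefore cover only a small fraction of the run; raising `--gradient_accumulation_steps` would increase the effective batch size without increasing per-GPU memory use.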