* add a total-number-of-checkpoints limit to the training scripts (see sketch below)
* Update examples/dreambooth/train_dreambooth.py
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
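A minimal sketch of what such a total-checkpoints limit can look like in a training script: once more than the allowed number of `checkpoint-<step>` folders exist, the oldest are removed. The flag semantics and folder naming are assumptions for illustration, not necessarily the exact ones used in these scripts.

```python
import os
import shutil
from typing import Optional


def prune_checkpoints(output_dir: str, total_limit: Optional[int]) -> None:
    """Keep at most `total_limit` `checkpoint-<step>` folders, deleting the oldest first.

    The `checkpoint-<step>` naming is an assumption for illustration.
    """
    if total_limit is None:
        return
    checkpoints = [d for d in os.listdir(output_dir) if d.startswith("checkpoint-")]
    checkpoints = sorted(checkpoints, key=lambda d: int(d.split("-")[1]))
    # Delete the oldest folders until only `total_limit` remain.
    for folder in checkpoints[: max(len(checkpoints) - total_limit, 0)]:
        shutil.rmtree(os.path.join(output_dir, folder))
```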
* Log Unconditional Image Generation Samples to WandB (see sketch below)
* Check for wandb installation and parity between onnxruntime script
* Log epoch to wandb
* Check for tensorboard logger early on
* style fixes
---------
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
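A hedged sketch of the W&B logging added above: the script first checks that `wandb` is installed (via `diffusers.utils.is_wandb_available`), then logs the generated samples together with the current epoch. The helper name and logging keys are illustrative.

```python
from diffusers.utils import is_wandb_available


def log_samples_to_wandb(images, epoch, global_step):
    """Log generated sample images and the current epoch to Weights & Biases.

    Assumes `wandb.init(...)` has already been called by the training script.
    """
    if not is_wandb_available():
        raise ImportError("Please install wandb to log unconditional generation samples.")
    import wandb

    wandb.log(
        {
            "test_samples": [wandb.Image(image) for image in images],
            "epoch": epoch,
        },
        step=global_step,
    )
```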
Resolves ValueError: `num_inference_steps`: 1000 cannot be larger than `self.config.train_timesteps`: 50, since the UNet model trained with this scheduler can only handle a maximum of 50 timesteps.
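One way to avoid this error, sketched under the assumption that the scheduler exposes the usual `config.num_train_timesteps` field: cap the requested inference steps at the training horizon.

```python
def resolve_num_inference_steps(requested_steps, scheduler):
    """Cap the requested number of inference steps at the scheduler's training horizon.

    `scheduler.config.num_train_timesteps` is the usual diffusers config field; treated
    here as an assumption.
    """
    max_steps = scheduler.config.num_train_timesteps
    if requested_steps > max_steps:
        # A UNet trained with e.g. 50 timesteps cannot be asked for 1000 denoising steps.
        requested_steps = max_steps
    return requested_steps
```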
* EMA: fix `state_dict()` & add `cur_decay_value`
* EMA: fix a bug in `load_state_dict()` that raised `'float' object has no attribute 'get'` because scalar entries such as `state_dict["power"]` are plain floats, not dicts.
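A sketch of the kind of defensive loading that avoids the `'float' object has no attribute 'get'` failure: scalar entries are read directly (with defaults) instead of being treated as nested dicts. The helper and its `defaults` argument are illustrative, not the actual EMA API.

```python
def load_ema_scalars(state_dict, defaults):
    """Read scalar EMA hyperparameters (decay, power, ...) from a state dict.

    `defaults` maps field names to fallback values -- an illustrative helper only.
    """
    scalars = {}
    for name, fallback in defaults.items():
        value = state_dict.get(name, fallback)
        if not isinstance(value, (int, float)):
            raise ValueError(f"Expected a number for '{name}', got {type(value)}")
        scalars[name] = value
    return scalars


# Usage sketch:
# scalars = load_ema_scalars(loaded_state, {"decay": 0.9999, "power": 2 / 3, "optimization_step": 0})
```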
* del train_unconditional_ort.py
* better accelerated saving
* up
* finish
* finish
* up
* up
* up
* fix
* Apply suggestions from code review
* correct ema
* Remove @
* up
* Apply suggestions from code review
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* Update docs/source/en/training/dreambooth.mdx
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
---------
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* Fix torchvision.transforms and transforms function naming clash
* Update unconditional script for onnx
* Apply suggestions from code review
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
---------
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* Add center crop and horizontal flip to args
* Update command to use center crop and random flip (see transforms sketch below)
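A sketch of the augmentation pipeline these commits describe: torchvision's module is imported under an alias so it no longer clashes with a locally defined `transforms` function, and center crop / horizontal flip are toggled from CLI-style arguments. The argument names (`resolution`, `center_crop`, `random_flip`) are assumptions.

```python
from torchvision import transforms as T  # aliased to avoid clashing with a local `transforms` function


def build_augmentations(resolution, center_crop=False, random_flip=False):
    """Build train-time image transforms; the flags mirror `--center_crop` / `--random_flip`."""
    return T.Compose(
        [
            T.Resize(resolution, interpolation=T.InterpolationMode.BILINEAR),
            T.CenterCrop(resolution) if center_crop else T.RandomCrop(resolution),
            T.RandomHorizontalFlip() if random_flip else T.Lambda(lambda x: x),
            T.ToTensor(),
            T.Normalize([0.5], [0.5]),
        ]
    )
```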
* improve EMA
* style
* one EMA model
* quality
* fix tests
* fix test
* Apply suggestions from code review
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* reorganise the unconditional script
* backwards compatibility
* default to init values for some args
* fix ort script
* issubclass => isinstance
* update state_dict
* docstring
* doc
* Apply suggestions from code review
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* use .to if device is passed
* deprecate device (see sketch below)
* make flake happy
* fix typo
Co-authored-by: patil-suraj <surajp815@gmail.com>
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
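A sketch of the `.to()`-based device handling and the `device` deprecation mentioned above, using a toy EMA class; the real EMA helper differs in detail.

```python
import warnings

import torch


class EMAShadow:
    """Toy stand-in for an EMA helper that keeps shadow copies of model parameters."""

    def __init__(self, parameters, decay=0.9999, device=None):
        self.decay = decay
        self.shadow_params = [p.clone().detach() for p in parameters]
        if device is not None:
            # `device` is deprecated in favour of calling `.to(device)` explicitly.
            warnings.warn("The `device` argument is deprecated; use `.to(device)` instead.", FutureWarning)
            self.to(device=device)

    def to(self, device=None, dtype=None):
        """Move the shadow parameters, mirroring `torch.nn.Module.to` semantics."""
        self.shadow_params = [p.to(device=device, dtype=dtype) for p in self.shadow_params]
        return self
```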
* Add state checkpointing to other training scripts (see sketch below)
* Fix first_epoch
* Apply suggestions from code review
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* Update Dreambooth checkpoint help message.
* Dreambooth docs: checkpoints, inference from a checkpoint.
* make style
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
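A sketch of the state-checkpointing pattern these commits add, assuming Accelerate's `save_state` / `load_state` and a `checkpoint-<step>` folder convention; the `first_epoch` computation mirrors the fix mentioned above.

```python
import os


def save_training_state(accelerator, output_dir, global_step, checkpointing_steps):
    """Periodically save full training state (model, optimizer, scheduler, RNG) with Accelerate."""
    if global_step % checkpointing_steps == 0 and accelerator.is_main_process:
        accelerator.save_state(os.path.join(output_dir, f"checkpoint-{global_step}"))


def resume_training_state(accelerator, output_dir, num_update_steps_per_epoch):
    """Resume from the most recent `checkpoint-<step>` folder, returning (global_step, first_epoch)."""
    dirs = [d for d in os.listdir(output_dir) if d.startswith("checkpoint-")]
    if not dirs:
        return 0, 0
    latest = max(dirs, key=lambda d: int(d.split("-")[1]))
    accelerator.load_state(os.path.join(output_dir, latest))
    global_step = int(latest.split("-")[1])
    first_epoch = global_step // num_update_steps_per_epoch
    return global_step, first_epoch
```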
* add check_min_version for examples (see sketch below)
* move __version__ to the top
* Apply suggestions from code review
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* fix comment
* fix error_message
* adapt the install message
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
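Roughly how the `check_min_version` guard is used near the top of an example script; the version string below is a placeholder, as each release pins its own minimum.

```python
from diffusers.utils import check_min_version

# Raises an informative error if the installed `diffusers` is older than the
# examples require. "0.10.0.dev0" is a placeholder version string.
check_min_version("0.10.0.dev0")
```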
* Adapt ddpm, ddpmsolver to prediction_type.
* Deprecate predict_epsilon in __init__.
* Bring FlaxDDIMScheduler up to date with DDIMScheduler.
* Set prediction_type as an ivar for consistency.
* Convert pipeline_ddpm
* Adapt tests.
* Adapt unconditional training script.
* Adapt BitDiffusion example.
* Add missing kwargs in dpmsolver_multistep
* Ugly workaround to accept the deprecated `predict_epsilon` argument when loading schedulers via `from_pretrained` (see sketch below)
* make style
* Remove import no longer in use.
* Apply suggestions from code review
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* Use config.prediction_type everywhere
* Add a couple of Flax prediction type tests.
* make style
* fix register deprecated arg
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
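A sketch of how the deprecated boolean `predict_epsilon` can be folded into the newer `prediction_type` string during scheduler construction, which also covers configs arriving through `from_pretrained` kwargs. The actual deprecation helper in the codebase differs; this only shows the mapping.

```python
import warnings


def resolve_prediction_type(prediction_type="epsilon", **kwargs):
    """Map the deprecated `predict_epsilon` boolean onto `prediction_type`.

    Called from a scheduler's `__init__`; unexpected config keys from
    `from_pretrained` arrive through **kwargs.
    """
    predict_epsilon = kwargs.pop("predict_epsilon", None)
    if predict_epsilon is not None:
        warnings.warn(
            "`predict_epsilon` is deprecated; please use `prediction_type='epsilon'` or "
            "`prediction_type='sample'` instead.",
            FutureWarning,
        )
        prediction_type = "epsilon" if predict_epsilon else "sample"
    return prediction_type
```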
* Match the generator device to the pipeline for DDPM and DDIM (see sketch below)
* style
* fix
* update values
* fix fast tests
* trigger slow tests
* deprecate
* last value fixes
* mps fixes
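A sketch of the generator/device matching idea: noise is drawn on the device the `torch.Generator` lives on and then moved to the pipeline device, so a CPU generator keeps working with CUDA or MPS pipelines. This illustrates the idea rather than the exact pipeline code.

```python
import torch


def sample_noise(shape, generator=None, device="cpu", dtype=torch.float32):
    """Draw Gaussian noise on the generator's device, then move it to the target device."""
    rand_device = generator.device if generator is not None else torch.device(device)
    noise = torch.randn(shape, generator=generator, device=rand_device, dtype=dtype)
    return noise.to(device)
```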
* [Scheduler] Move predict epsilon to init
* up
* up
* up
* Apply suggestions from code review
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* up
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* changed training example to add option to train model that predicts x0 (instead of eps), changed DDPM pipeline accordingly
* Revert "changed training example to add option to train model that predicts x0 (instead of eps), changed DDPM pipeline accordingly"
This reverts commit c5efb525648885f2e7df71f4483a9f248515ad61.
* changed training example to add option to train model that predicts x0 (instead of eps), changed DDPM pipeline accordingly (see sketch below)
* fixed code style
Co-authored-by: lukovnikov <lukovnikov@users.noreply.github.com>
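A sketch of the x0-vs-eps training option: the regression target switches between the added noise and the clean image depending on the configured prediction type. Variable names follow common training-loop conventions and are assumptions here.

```python
import torch.nn.functional as F


def compute_loss(model_output, noise, clean_images, prediction_type):
    """Pick the regression target based on the prediction type.

    "epsilon" -> the model predicts the added noise; "sample" -> it predicts x0 directly.
    """
    if prediction_type == "epsilon":
        target = noise
    elif prediction_type == "sample":
        target = clean_images
    else:
        raise ValueError(f"Unsupported prediction type: {prediction_type}")
    return F.mse_loss(model_output, target)
```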