Commit Graph

31 Commits

Author SHA1 Message Date
Lucain 5ea4be86ab
Create repo before cloning in examples (#2047)
* Create repo before cloning in examples

* code quality
2023-01-20 16:38:37 +01:00
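For the commit above, a minimal sketch of the pattern the examples move to, assuming the `huggingface_hub` API of that period (the repo id is illustrative):

```python
from huggingface_hub import Repository, create_repo

# Make sure the remote repo exists before cloning it locally (repo id is illustrative).
repo_id = "user/ddpm-butterflies-128"
create_repo(repo_id, exist_ok=True)
repo = Repository(local_dir="ddpm-butterflies-128", clone_from=repo_id)
```
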
Anton Lozhkov 7c82a16fc1
Fix EMA for multi-gpu training in the unconditional example (#1930)
* improve EMA

* style

* one EMA model

* quality

* fix tests

* fix test

* Apply suggestions from code review

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

* re-organise the unconditional script

* backwards compatibility

* default to init values for some args

* fix ort script

* issubclass => isinstance

* update state_dict

* docstr

* doc

* Apply suggestions from code review

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

* use .to if device is passed

* deprecate device

* make flake happy

* fix typo

Co-authored-by: patil-suraj <surajp815@gmail.com>
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2023-01-19 11:35:55 +01:00
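A minimal sketch of where the refactor above ends up: a single `EMAModel` built from, and stepped with, the unwrapped model's parameters, so multi-GPU wrapping does not interfere (model size, loop, and hyperparameters are stand-ins, not the script's):

```python
import torch
from accelerate import Accelerator
from diffusers import UNet2DModel
from diffusers.training_utils import EMAModel

accelerator = Accelerator()
model = UNet2DModel(
    sample_size=32,
    in_channels=3,
    out_channels=3,
    block_out_channels=(32, 64),
    down_block_types=("DownBlock2D", "AttnDownBlock2D"),
    up_block_types=("AttnUpBlock2D", "UpBlock2D"),
)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
model, optimizer = accelerator.prepare(model, optimizer)

# One EMA copy, tracking the *unwrapped* parameters.
ema_model = EMAModel(accelerator.unwrap_model(model).parameters(), decay=0.9999)

for _ in range(2):  # stand-in for the real training loop
    noisy = torch.randn(1, 3, 32, 32, device=accelerator.device)
    timesteps = torch.zeros(1, dtype=torch.long, device=accelerator.device)
    loss = model(noisy, timesteps).sample.mean()  # toy objective, just to exercise backward()
    accelerator.backward(loss)
    optimizer.step()
    optimizer.zero_grad()
    ema_model.step(accelerator.unwrap_model(model).parameters())
```
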
Suraj Patil f861cde14f
[train_unconditional] fix LR scheduler init (#2010)
fix lr scheduler
2023-01-17 10:11:46 +01:00
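A hedged sketch of the scheduler setup this fix touches, using diffusers' `get_scheduler` helper; the exact step accounting is what the commit adjusts, and the names and numbers below are placeholders:

```python
import torch
from diffusers.optimization import get_scheduler

gradient_accumulation_steps = 2
lr_warmup_steps = 500
num_epochs = 100
updates_per_epoch = 1000  # len(train_dataloader) in the real script

optimizer = torch.optim.AdamW([torch.nn.Parameter(torch.zeros(1))], lr=1e-4)
lr_scheduler = get_scheduler(
    "cosine",
    optimizer=optimizer,
    # Step accounting (warmup vs. accumulation) is precisely what this fix adjusts.
    num_warmup_steps=lr_warmup_steps * gradient_accumulation_steps,
    num_training_steps=updates_per_epoch * num_epochs,
)
```
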
Prathik Rao 8aa4372aea
reorder model wrap + bug fix (#1799)
* reorder model wrap

* bug fix

Co-authored-by: Prathik Rao <prathikrao@microsoft.com>
2022-12-22 14:51:47 +01:00
Prathik Rao 847daf25c7
update train_unconditional_ort.py (#1775)
* reflect changes

* run make style

Co-authored-by: Prathik Rao <prathikrao@microsoft.com>
Co-authored-by: Prathik Rao <prathikrao@microsoft.com@orttrainingdev7.d32nl1ml4oruzj4qz3bqlggovf.px.internal.cloudapp.net>
2022-12-19 23:58:55 +01:00
Anish Shah 9f657f106d
[Examples] Update train_unconditional.py to include logging argument for Wandb (#1719)
Update train_unconditional.py

Add logger flag to choose between tensorboard and wandb
2022-12-19 16:57:03 +01:00
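A hedged sketch of the flag described above, handed to `accelerate`'s tracking (flag name per the commit message; the rest is illustrative):

```python
import argparse
from accelerate import Accelerator

parser = argparse.ArgumentParser()
parser.add_argument(
    "--logger", type=str, default="tensorboard", choices=["tensorboard", "wandb"]
)
args = parser.parse_args([])  # empty list so the sketch runs as-is

# The chosen backend is owned by the Accelerator; the script later calls
# accelerator.init_trackers(...) with a logging directory.
accelerator = Accelerator(log_with=args.logger)
```
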
Pedro Cuenca badddee0ef
Add state checkpointing to other training scripts (#1687)
* Add state checkpointing to other training scripts

* Fix first_epoch

* Apply suggestions from code review

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* Update Dreambooth checkpoint help message.

* Dreambooth docs: checkpoints, inference from a checkpoint.

* make style

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2022-12-15 19:49:40 +01:00
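A minimal sketch of the `accelerate` checkpointing pattern these scripts gain (path and step count are illustrative):

```python
import os
from accelerate import Accelerator

accelerator = Accelerator()
output_dir = "ddpm-model-output"
global_step = 500  # illustrative

# Periodically during training: save model/optimizer/scheduler/RNG state.
save_path = os.path.join(output_dir, f"checkpoint-{global_step}")
accelerator.save_state(save_path)

# On restart (e.g. via a resume-from-checkpoint flag): load it back and continue.
accelerator.load_state(save_path)
```
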
Prathik Rao 7c823c2ed7
manually update train_unconditional_ort (#1694)
* manually update train_unconditional_ort

* formatting

Co-authored-by: Prathik Rao <prathikrao@microsoft.com>
2022-12-14 11:35:41 +01:00
Prathik Rao 4645e28355
tensor format ort bug fix (#1557)
bug fix

Co-authored-by: Prathik Rao <prathikrao@microsoft.com>
Co-authored-by: anton- <anton@huggingface.co>
2022-12-12 13:56:02 +01:00
Suraj Patil c228331068
[examples] add check_min_version (#1550)
* add check_min_version for examples

* move __version__ to the top

* Apply suggestions from code review

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

* fix comment

* fix error_message

* adapt the install message

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
2022-12-06 14:36:50 +01:00
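The guard itself is short; a hedged sketch of how it sits near the top of each example script (the version string is whatever the examples pin at the time):

```python
from diffusers.utils import check_min_version

# Raises if the installed diffusers is older than the examples expect,
# pointing the user at the matching install instructions.
check_min_version("0.10.0.dev0")
```
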
Anton Lozhkov 9276b1e148
Replace deprecated hub utils in `train_unconditional_ort` (#1504)
* Replace deprecated hub utils in `train_unconditional_ort`

* typo
2022-12-01 16:00:52 +01:00
Anton Lozhkov 999044596a
Bump to 0.10.0.dev0 + deprecations (#1490) 2022-11-30 15:27:56 +01:00
Anton Lozhkov db7b7bd983
[Train unconditional] Unwrap model before EMA (#1469) 2022-11-29 13:45:42 +01:00
Pedro Cuenca d52388f486
Deprecate `predict_epsilon` (#1393)
* Adapt ddpm, ddpmsolver to prediction_type.

* Deprecate predict_epsilon in __init__.

* Bring FlaxDDIMScheduler up to date with DDIMScheduler.

* Set prediction_type as an ivar for consistency.

* Convert pipeline_ddpm

* Adapt tests.

* Adapt unconditional training script.

* Adapt BitDiffusion example.

* Add missing kwargs in dpmsolver_multistep

* Ugly workaround to accept deprecated predict_epsilon when loading
schedulers using from_pretrained.

* make style

* Remove import no longer in use.

* Apply suggestions from code review

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* Use config.prediction_type everywhere

* Add a couple of Flax prediction type tests.

* make style

* fix register deprecated arg

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2022-11-25 14:02:15 +01:00
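A minimal sketch of the configuration change users are pointed to: the boolean `predict_epsilon` becomes a `prediction_type` string on the scheduler config:

```python
from diffusers import DDPMScheduler

# New style: choose what the model is trained to predict via prediction_type.
scheduler = DDPMScheduler(num_train_timesteps=1000, prediction_type="epsilon")

# Or train the model to predict x0 directly instead of the noise:
scheduler_x0 = DDPMScheduler(num_train_timesteps=1000, prediction_type="sample")
```
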
Prathik Rao 3346ec3acd
integrate ort (#1110)
* integrate ort

* use return_dict=False

* revert unet return value change

* revert unet return value change

* add note to readme

* adjust readme

* add contact

* `make style`

Co-authored-by: Prathik Rao <prathikrao@microsoft.com>
Co-authored-by: Anton Lozhkov <anton@huggingface.co>
2022-11-17 15:48:41 +01:00
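A heavily hedged sketch of the ORT integration's core idea: wrap the model with ONNX Runtime's `ORTModule` and train it like a plain `nn.Module` (requires `onnxruntime-training`; the model config is illustrative and the exact wiring in `train_unconditional_ort.py` may differ):

```python
from diffusers import UNet2DModel
from onnxruntime.training.ortmodule import ORTModule  # needs onnxruntime-training installed

model = UNet2DModel(sample_size=64, in_channels=3, out_channels=3)
model = ORTModule(model)  # training then proceeds as with a regular PyTorch module
```
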
Anton Lozhkov 7d0c272939
Match the generator device to the pipeline for DDPM and DDIM (#1222)
* Match the generator device to the pipeline for DDPM and DDIM

* style

* fix

* update values

* fix fast tests

* trigger slow tests

* deprecate

* last value fixes

* mps fixes
2022-11-09 23:00:23 +01:00
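A minimal sketch of what matching the generator device means when sampling (model id, seed, and step count are illustrative):

```python
import torch
from diffusers import DDPMPipeline

device = "cuda" if torch.cuda.is_available() else "cpu"
pipe = DDPMPipeline.from_pretrained("google/ddpm-cat-256").to(device)

# The torch.Generator lives on the same device as the pipeline, so seeded
# sampling is reproducible without device-mismatch issues.
generator = torch.Generator(device=device).manual_seed(0)
image = pipe(generator=generator, num_inference_steps=25).images[0]
```
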
Patrick von Platen 249d9bc0e7
[Scheduler] Move predict epsilon to init (#1155)
* [Scheduler] Move predict epsilon to init

* up

* up

* up

* Apply suggestions from code review

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

* up

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
2022-11-08 18:08:08 +01:00
Denis cbcd0512f0
Training to predict x0 in training example (#1031)
* changed training example to add option to train model that predicts x0 (instead of eps), changed DDPM pipeline accordingly

* Revert "changed training example to add option to train model that predicts x0 (instead of eps), changed DDPM pipeline accordingly"

This reverts commit c5efb525648885f2e7df71f4483a9f248515ad61.

* changed training example to add option to train model that predicts x0 (instead of eps), changed DDPM pipeline accordingly

* fixed code style

Co-authored-by: lukovnikov <lukovnikov@users.noreply.github.com>
2022-11-02 17:43:40 +01:00
Anton Lozhkov a6314a8d4e
Add `--dataloader_num_workers` to the DDPM training example (#1027) 2022-10-27 15:55:36 +02:00
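A hedged sketch of what the new flag feeds into (the dataset and batch size are stand-ins):

```python
import argparse

import torch
from torch.utils.data import DataLoader, TensorDataset

parser = argparse.ArgumentParser()
parser.add_argument("--dataloader_num_workers", type=int, default=0)
args = parser.parse_args([])  # empty list so the sketch runs as-is

dataset = TensorDataset(torch.randn(8, 3, 32, 32))  # stand-in for the image dataset
train_dataloader = DataLoader(
    dataset, batch_size=4, shuffle=True, num_workers=args.dataloader_num_workers
)
```
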
Denis 939ec17e91
Probably nicer to specify dependency on tensorboard in the training example (#998)
Add tensorboard as a dependency in the README; otherwise accelerator.trackers[0] is out of range

Co-authored-by: lukovnikov <lukovnikov@users.noreply.github.com>
2022-10-27 15:55:18 +02:00
Anton Lozhkov fbcc383340
Deprecate `init_git_repo`, refactor `train_unconditional.py` (#1022)
Deprecate `init_git_repo` and `push_to_hub`, refactor `train_unconditional.py`
2022-10-27 15:16:59 +02:00
Pedro Cuenca 4dce37432b
Fix training push_to_hub (unconditional image generation): models were not saved before pushing to hub (#868)
Fix: models were not saved before pushing to hub.
2022-10-17 15:28:56 +02:00
YaYaB 906e4105d7
Fix push_to_hub for dreambooth and textual_inversion (#748)
* Fix push_to_hub for dreambooth and textual_inversion

* Use repo.push_to_hub instead of push_to_hub
2022-10-07 11:50:28 +02:00
Patrick von Platen 78744b6a8f
No more use_auth_token=True (#733)
* up

* up

* up

* make style

* Apply suggestions from code review

* up

* finish
2022-10-05 17:16:15 +02:00
Suraj Patil 14b9754923
[train_unconditional] fix applying clip_grad_norm_ (#721)
fix clip_grad_norm_
2022-10-04 19:04:05 +02:00
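A hedged sketch of the gradient-clipping pattern used with `accelerate` in these scripts; the toy model and loss are stand-ins, and the exact placement of the call is what the commit fixes:

```python
import torch
from accelerate import Accelerator

accelerator = Accelerator()
model = torch.nn.Linear(4, 4)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
model, optimizer = accelerator.prepare(model, optimizer)

loss = model(torch.randn(2, 4, device=accelerator.device)).pow(2).mean()
accelerator.backward(loss)
if accelerator.sync_gradients:
    # Clip through the accelerator so it also works with mixed precision / DDP.
    accelerator.clip_grad_norm_(model.parameters(), 1.0)
optimizer.step()
optimizer.zero_grad()
```
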
Kashif Rasul bd8df2da89
[Pytorch] Pytorch only schedulers (#534)
* pytorch only schedulers

* fix style

* remove match_shape

* pytorch only ddpm

* remove SchedulerMixin

* remove numpy from karras_ve

* fix types

* remove numpy from lms_discrete

* remove numpy from pndm

* fix typo

* remove mixin and numpy from sde_vp and ve

* remove remaining tensor_format

* fix style

* sigmas has to be torch tensor

* removed set_format in readme

* remove set format from docs

* remove set_format from pipelines

* update tests

* fix typo

* continue to use mixin

* fix imports

* removed unused imports

* match shape instead of assuming image shapes

* remove import typo

* update call to add_noise

* use math instead of numpy

* fix t_index

* removed commented out numpy tests

* timesteps needs to be discrete

* cast timesteps to int in flax scheduler too

* fix device mismatch issue

* small fix

* Update src/diffusers/schedulers/scheduling_pndm.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2022-09-27 15:27:34 +02:00
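After this refactor the schedulers consume and return torch tensors directly (no `tensor_format`/`set_format`); a minimal sketch with `add_noise`:

```python
import torch
from diffusers import DDPMScheduler

scheduler = DDPMScheduler(num_train_timesteps=1000)

clean_images = torch.randn(2, 3, 32, 32)
noise = torch.randn_like(clean_images)
timesteps = torch.randint(0, scheduler.config.num_train_timesteps, (2,), dtype=torch.long)

# Everything stays in torch; timesteps are discrete integer tensors.
noisy_images = scheduler.add_noise(clean_images, noise, timesteps)
```
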
Yuta Hayashibe 76d492ea49
Fix typos and add Typo check GitHub Action (#483)
* Fix typos

* Add a typo check action

* Fix a bug

* Changed to manual typo check currently

Ref: https://github.com/huggingface/diffusers/pull/483#pullrequestreview-1104468010

Co-authored-by: Anton Lozhkov <aglozhkov@gmail.com>

* Removed a confusing message

* Renamed "nin_shortcut" to "in_shortcut"

* Add memo about NIN

Co-authored-by: Anton Lozhkov <aglozhkov@gmail.com>
2022-09-16 15:36:51 +02:00
Kashif Rasul b34be039f9
Karras VE, DDIM and DDPM flax schedulers (#508)
* beta never changes removed from state

* fix typos in docs

* removed unused var

* initial ddim flax scheduler

* import

* added dummy objects

* fix style

* fix typo

* docs

* fix typo in comment

* set return type

* added flax ddpm

* fix style

* remake

* pass PRNG key as argument and split before use

* fix doc string

* use config

* added flax Karras VE scheduler

* make style

* fix dummy

* fix ndarray type annotation

* replace returns a new state

* added lms_discrete scheduler

* use self.config

* add_noise needs state

* use config

* use config

* docstring

* added flax score sde ve

* fix imports

* fix typos
2022-09-15 15:55:48 +02:00
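The Flax schedulers keep their mutable values in an explicit state object instead of on the instance; a minimal sketch (requires the flax/jax extras):

```python
from diffusers import FlaxDDPMScheduler

scheduler = FlaxDDPMScheduler(num_train_timesteps=1000)
state = scheduler.create_state()  # immutable state is threaded through scheduler calls
```
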
Patrick von Platen cc59b05635
[ModelOutputs] Replace dict outputs with Dict/Dataclass and allow to return tuples (#334)
* add outputs for models

* add for pipelines

* finish schedulers

* better naming

* adapt tests as well

* replace dict access with . access

* make schedulers works

* finish

* correct readme

* make backwards compatible

* up

* small fix

* finish

* more fixes

* more fixes

* Apply suggestions from code review

Co-authored-by: Suraj Patil <surajp815@gmail.com>
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

* Update src/diffusers/models/vae.py

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

* Adapt model outputs

* Apply more suggestions

* finish examples

* correct

Co-authored-by: Suraj Patil <surajp815@gmail.com>
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
2022-09-05 14:49:26 +02:00
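A minimal sketch of the output convention introduced above: dataclass outputs accessed by attribute, with `return_dict=False` still returning plain tuples (the model config is a stand-in):

```python
import torch
from diffusers import UNet2DModel

model = UNet2DModel(
    sample_size=32,
    in_channels=3,
    out_channels=3,
    block_out_channels=(32, 64),
    down_block_types=("DownBlock2D", "AttnDownBlock2D"),
    up_block_types=("AttnUpBlock2D", "UpBlock2D"),
)
x = torch.randn(1, 3, 32, 32)
t = torch.tensor([1])

pred = model(x, t).sample                       # dataclass field access
(pred_tuple,) = model(x, t, return_dict=False)  # tuple, for backwards compatibility
```
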
Suraj Patil 1b1d6444c6
[train_unconditional] fix gradient accumulation. (#308)
fix grad accum
2022-09-01 16:02:15 +02:00
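A hedged sketch of one common way gradient accumulation is expressed with `accelerate`; illustrative only, not necessarily this commit's exact diff:

```python
import torch
from accelerate import Accelerator

accelerator = Accelerator(gradient_accumulation_steps=4)
model = torch.nn.Linear(8, 8)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
model, optimizer = accelerator.prepare(model, optimizer)

for _ in range(8):  # stand-in for iterating the dataloader
    batch = torch.randn(2, 8, device=accelerator.device)
    with accelerator.accumulate(model):
        loss = model(batch).pow(2).mean()
        accelerator.backward(loss)
        optimizer.step()
        optimizer.zero_grad()
```
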
Patrick von Platen a4d5b59f13
Refactor Pipelines / Community pipelines and add better explanations. (#257)
* [Examples readme]

* Improve

* more

* save

* save

* save more

* up

* up

* Apply suggestions from code review

Co-authored-by: Nathan Lambert <nathan@huggingface.co>
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

* up

* make deterministic

* up

* better

* up

* add generator to img2img pipe

* save

* make pipelines deterministic

* Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_inpaint.py

Co-authored-by: Anton Lozhkov <anton@huggingface.co>

* apply all changes

* more corrections

* finish

* improve table

* more fixes

* up

* Apply suggestions from code review

Co-authored-by: Suraj Patil <surajp815@gmail.com>
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

* Apply suggestions from code review

Co-authored-by: Suraj Patil <surajp815@gmail.com>

* Apply suggestions from code review

Co-authored-by: Suraj Patil <surajp815@gmail.com>

* Apply suggestions from code review

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
Co-authored-by: Suraj Patil <surajp815@gmail.com>
Co-authored-by: Anton Lozhkov <anton@huggingface.co>

* Update src/diffusers/pipelines/README.md

Co-authored-by: Suraj Patil <surajp815@gmail.com>

* add better links

* fix more

* finish

Co-authored-by: Nathan Lambert <nathan@huggingface.co>
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
Co-authored-by: Anton Lozhkov <anton@huggingface.co>
Co-authored-by: Suraj Patil <surajp815@gmail.com>
2022-08-30 18:43:42 +02:00