Commit Graph

38 Commits

Author SHA1 Message Date
Patrick von Platen 1d4ad34af0
[Dreambooth] Make compatible with alt diffusion (#1470)
* [Dreambooth] Make compatible with alt diffusion

* make style

* add example
2022-11-30 13:48:17 +01:00
Suraj Patil 6c56f05097
v-prediction training support (#1455)
* add get_velocity

* add v prediction for training

* fix saving

* add revision arg

* fix saving

* save checkpoints dreambooth

* fix saving embeds

* add instruction in readme

* quality

* noise_pred -> model_pred
2022-11-28 17:46:54 +01:00
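
For reference, the v-prediction support above comes down to choosing the regression target from the scheduler. A minimal sketch, assuming a diffusers version where `DDPMScheduler` exposes `prediction_type`, `add_noise`, and `get_velocity`; tensor shapes and variable names are illustrative, not copied from the script:

```python
import torch
from diffusers import DDPMScheduler

# Hypothetical setup: a scheduler configured for v-prediction training.
noise_scheduler = DDPMScheduler(num_train_timesteps=1000, prediction_type="v_prediction")

latents = torch.randn(2, 4, 64, 64)   # stand-in for VAE latents of training images
noise = torch.randn_like(latents)
timesteps = torch.randint(0, noise_scheduler.config.num_train_timesteps, (2,))

noisy_latents = noise_scheduler.add_noise(latents, noise, timesteps)

# Pick the target the UNet prediction is regressed against.
if noise_scheduler.config.prediction_type == "v_prediction":
    # velocity target: v = sqrt(alpha_bar_t) * noise - sqrt(1 - alpha_bar_t) * latents
    target = noise_scheduler.get_velocity(latents, noise, timesteps)
else:  # "epsilon"
    target = noise
# The loss is then the MSE between the UNet's model_pred and `target`.
```
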
Suraj Patil 8b84f85192
[examples] fix mixed_precision arg (#1359)
* use accelerator to check mixed_precision

* default `mixed_precision` to `None`

* pass mixed_precision to accelerate launch
2022-11-22 13:35:23 +01:00
Patrick von Platen 195e437ac5
Correct path to scheduler (#1322)
* [Examples] Correct path

* uP
2022-11-18 12:32:49 +01:00
Glenn 'devalias' Grant db1cb0b1a2
[dreambooth] link to bitsandbytes readme for installation (#1229)
* add 'conda install cudatoolkit' to dreambooth 'training on 16GB' example 

fixes https://github.com/huggingface/diffusers/issues/1207

* Apply suggestions from code review

Co-authored-by: Suraj Patil <surajp815@gmail.com>
2022-11-15 12:53:54 +01:00
camenduru 663f0c1963
[Flax] fix extra copy pasta 🍝 (#1187) 2022-11-09 11:34:15 +01:00
Yuta Hayashibe 555203e1fa
Warning for invalid options without "--with_prior_preservation" (#1065)
* Make errors for invalid options without "--with_prior_preservation"

* Make --instance_prompt required

* Removed needless check because --instance_data_dir is marked with required

* Updated messages

* Use logger.warning instead of raise errors

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2022-11-08 14:31:13 +01:00
Duong A. Nguyen ac4c695d97
[Flax examples] Load text encoder from subfolder (#1147)
load text encoder from subfolder
2022-11-07 21:26:59 +01:00
Duong A. Nguyen c62b3a2e7e
[Flax] Fix sample batch size DreamBooth (#1129)
fix sample batch size
2022-11-04 13:49:57 +01:00
Yuta Hayashibe 33c487455e
Fix padding in dreambooth (#1030) 2022-11-02 16:37:05 +01:00
Jonathan Rahn 0025626cd9
fix typo in examples dreambooth README.md (#1073)
Update README.md

fixed typo
2022-11-02 13:15:30 +01:00
Patrick von Platen c18941b01a
[Better scheduler docs] Improve usage examples of schedulers (#890)
* [Better scheduler docs] Improve usage examples of schedulers

* finish

* fix warnings and add test

* finish

* more replacements

* adapt fast tests hf token

* correct more

* Apply suggestions from code review

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

* Integrate compatibility with euler

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
2022-10-31 17:26:30 +01:00
Suraj Patil eceeebdf91
Update train_dreambooth.py 2022-10-27 15:51:11 +02:00
Suraj Patil 52f2128dc6
update readme for flax examples (#1026) 2022-10-27 15:25:25 +02:00
Duong A. Nguyen 90f91adb0e
[Flax] Add DreamBooth (#1001)
* [Flax] Add DreamBooth

* fix sample rng

* style

* not reuse rng

* add dtype for mixed precision training

* Add Flax example
2022-10-27 14:25:04 +02:00
Duong A. Nguyen 4623f095f3
[DreamBooth] Set train mode for text encoder (#1012)
Set train mode for text encoder
2022-10-27 14:19:13 +02:00
Suraj Patil e92a603cab
fix dreambooth script. (#1017)
make input_args optional
2022-10-27 11:44:06 +02:00
Brian Whicheloe d3d22ce5a8
Small modification to enable usage by external scripts (#956)
* Make training code usable by external scripts

Add parameter inputs to the training and argument-parsing functions so this script can be called from external code.

* Apply suggestions from code review

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2022-10-26 18:46:56 +02:00
Simon Kirsten 8332c1a6d9
Enable multi-process DataLoader for dreambooth (#950) 2022-10-26 17:24:48 +02:00
Yuta Hayashibe 4b9f58952a
Add --pretrained_model_name_revision option to train_dreambooth.py (#933)
* Add --pretrained_model_name_revision option to train_dreambooth.py

* Renamed --pretrained_model_name_revision to --revision
2022-10-25 21:38:23 +02:00
Hanusz Leszek 4bf675f465
Dreambooth class image generation: using unique names to avoid overwriting existing images (#847)
* Add an underscore to filename if it already exists

* Use sha1sum hash instead of adding underscores
2022-10-20 15:56:15 +02:00
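
For reference, the unique-name scheme above amounts to hashing the generated image bytes into the filename. A minimal sketch (simplified; the output path and the Pillow stand-in image are illustrative, not the exact script):

```python
import hashlib
from pathlib import Path

from PIL import Image

class_images_dir = Path("class_images")  # hypothetical output directory
class_images_dir.mkdir(exist_ok=True)

image = Image.new("RGB", (512, 512))  # stand-in for a generated class image
digest = hashlib.sha1(image.tobytes()).hexdigest()

# The digest makes the filename unique, so re-running class image generation
# does not overwrite previously generated images.
image.save(class_images_dir / f"0-{digest}.jpg")
```
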
Suraj Patil 7674a36a34
[dreambooth] don't use safety check when generating prior images (#922)
don't use safety check when generating prior images
2022-10-20 13:52:11 +02:00
Hanusz Leszek ce7d96681c
DOC Dreambooth Add --sample_batch_size=1 to the 8 GB dreambooth example script (#829)
Add --sample_batch_size=1 to the 8 GB dreambooth script
2022-10-20 13:44:37 +02:00
Suraj Patil fbe807bf57
[dreambooth] allow fine-tuning text encoder (#883)
* allow fine-tuning text encoder

* fix a few things

* update readme
2022-10-18 17:28:51 +02:00
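
For reference, the text-encoder fine-tuning above follows the usual pattern of optimizing the text encoder's parameters jointly with the UNet's when the flag is set. A minimal sketch with stand-in modules (names are illustrative, not the exact script):

```python
import itertools

import torch

unet = torch.nn.Linear(8, 8)           # stand-in for the real UNet
text_encoder = torch.nn.Linear(8, 8)   # stand-in for the real text encoder
train_text_encoder = True              # corresponds to the --train_text_encoder flag

if not train_text_encoder:
    # Without the flag, the text encoder stays frozen.
    text_encoder.requires_grad_(False)

params_to_optimize = (
    itertools.chain(unet.parameters(), text_encoder.parameters())
    if train_text_encoder
    else unet.parameters()
)
optimizer = torch.optim.AdamW(params_to_optimize, lr=5e-6)
```
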
Omar Sanseviero b8c4d5801c
Remove unneeded use_auth_token (#839) 2022-10-14 13:27:03 +02:00
Anton Lozhkov e001fededf
Fix dreambooth loss type with prior_preservation and fp16 (#826)
Fix dreambooth loss type with prior preservation
2022-10-13 15:41:19 +02:00
spezialspezial e895952816
Eventually preserve this typo? :) (#804) 2022-10-11 20:06:24 +02:00
Henrik Forstén 81bdbb5e2a
DreamBooth DeepSpeed support for under 8 GB VRAM training (#735)
* Support deepspeed

* Dreambooth DeepSpeed documentation

* Remove unnecessary casts, documentation

Due to recent commits, some casts to half precision are no longer necessary.

Mention that DeepSpeed's version of Adam is about 2x faster.

* Review comments
2022-10-10 21:29:27 +02:00
YaYaB 906e4105d7
Fix push_to_hub for dreambooth and textual_inversion (#748)
* Fix push_to_hub for dreambooth and textual_inversion

* Use repo.push_to_hub instead of push_to_hub
2022-10-07 11:50:28 +02:00
Patrick von Platen 4deb16e830
[Docs] Advertise fp16 instead of autocast (#740)
up
2022-10-05 22:20:53 +02:00
Suraj Patil 19e559d5e9
remove use_auth_token from remaining places (#737)
remove use_auth_token
2022-10-05 17:40:49 +02:00
Pierre LeMoine 08d4fb6e9f
[dreambooth] Using already created `Path` in dataset (#681)
using already created `Path` in dataset
2022-10-05 12:14:30 +02:00
Yuta Hayashibe 7e92c5bc73
Fix typos (#718)
* Fix typos

* Update examples/dreambooth/train_dreambooth.py

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
2022-10-04 15:22:14 +02:00
Suraj Patil 14f4af8f5b
[dreambooth] fix applying clip_grad_norm_ (#686)
fix applying clip grad norm
2022-10-03 10:54:01 +02:00
Suraj Patil 210be4fe71
[examples] update transformers version (#665)
update transformers version in example
2022-09-29 11:16:28 +02:00
Suraj Patil e5eed5235b
[dreambooth] update install section (#650)
update install section
2022-09-27 17:32:21 +02:00
Suraj Patil ac665b6484
[examples/dreambooth] don't pass tensor_format to scheduler. (#649)
don't pass tensor_format
2022-09-27 17:24:12 +02:00
Zhenhuan Liu 3b747de845
Add training example for DreamBooth. (#554)
* Add training example for DreamBooth.

* Fix bugs.

* Update readme and default hyperparameters.

* Reformatting code with black.

* Update for multi-gpu training.

* Apply suggestions from code review

* improve sampling

* fix autocast

* improve sampling more

* fix saving

* actually fix saving

* fix saving

* improve dataset

* fix collate fn

* fix collate_fn

* fix collate fn

* fix key name

* fix dataset

* fix collate fn

* concat batch in collate fn

* add grad ckpt

* add option for 8bit adam

* do two forward passes for prior preservation

* Revert "do two forward passes for prior preservation"

This reverts commit 661ca4677e6dccc4ad596c2ee6ca4baad4159e95.

* add option for prior_loss_weight

* add option for clip grad norm

* add more comments

* update readme

* update readme

* Apply suggestions from code review

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* add docstr for dataset

* update the saving logic

* Update examples/dreambooth/README.md

* remove unused imports

Co-authored-by: Suraj Patil <surajp815@gmail.com>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2022-09-27 15:01:18 +02:00
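
For reference, the prior-preservation option added in this initial example combines an instance loss with a weighted class-prior loss on a concatenated batch (the reverted two-forward-pass variant was replaced by chunking a single batch). A minimal sketch (shapes and names are illustrative, not the exact script):

```python
import torch
import torch.nn.functional as F

prior_loss_weight = 1.0  # corresponds to the --prior_loss_weight option

# Stand-ins: the collate_fn concatenates instance and class-prior examples,
# so predictions and targets hold both halves along the batch dimension.
model_pred = torch.randn(4, 4, 64, 64)
target = torch.randn_like(model_pred)

model_pred, model_pred_prior = torch.chunk(model_pred, 2, dim=0)
target, target_prior = torch.chunk(target, 2, dim=0)

loss = F.mse_loss(model_pred, target) + prior_loss_weight * F.mse_loss(model_pred_prior, target_prior)
```
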