Commit Graph

29 Commits

Author SHA1 Message Date
Pedro Cuenca 8f58159159
Fix a couple typos in Dreambooth readme (#2004)
Fix a couple typos in Dreambooth readme.
2023-01-16 11:38:07 +01:00
Sayak Paul 216d190178
Update README.md to include our blog post (#1998)
* Update README.md

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
2023-01-16 09:16:54 +01:00
Katsuya 8874027efc
Make xformers optional even if it is available (#1753)
* Make xformers optional even if it is available

* Raise exception if xformers is used but not available

* Rename use_xformers to enable_xformers_memory_efficient_attention

* Add a note about xformers in README

* Reformat code style
2022-12-27 19:47:50 +01:00
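Note: after this change, xformers attention must be enabled explicitly. A minimal sketch of the opt-in call, assuming a Stable Diffusion pipeline (the model ID is a placeholder, not taken from the commit):

```python
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5").to("cuda")

# xformers is no longer switched on implicitly just because it is installed;
# opt in explicitly. This raises an error if xformers is missing.
pipe.enable_xformers_memory_efficient_attention()
```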
Pedro Cuenca a6fb9407fd
Dreambooth docs: minor fixes (#1758)
* Section header for in-painting, inference from checkpoint.

* Inference: link to section to perform inference from checkpoint.

* Move Dreambooth in-painting instructions to the proper place.
2022-12-20 08:39:16 +01:00
Patrick von Platen 402b9560b2
Remove license accept ticks 2022-12-19 00:10:17 +01:00
Suraj Patil c228331068
[examples] add check_min_version (#1550)
* add check_min_version for examples

* move __version__ to the top

* Apply suggestions from code review

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

* fix comment

* fix error_message

* adapt the install message

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
2022-12-06 14:36:50 +01:00
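Note: `check_min_version` guards the example scripts against an outdated `diffusers` install. A minimal sketch of the guard added here (the version string below is illustrative):

```python
from diffusers.utils import check_min_version

# Raises an error with an install hint if the locally installed diffusers
# is older than the version the example script was written against.
check_min_version("0.10.0.dev0")
```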
Will Berman af04479e85
[docs] [dreambooth training] default accelerate config (#1564) 2022-12-05 18:24:32 -08:00
Patrick von Platen 459b8ca81a
Research folder (#1553)
* Research folder

* Update examples/research_projects/README.md

* up
2022-12-05 17:58:35 +01:00
Adalberto 2b30b1090f
Create train_dreambooth_inpaint.py (#1091)
* Create train_dreambooth_inpaint.py

train_dreambooth.py adapted to work with the inpainting model, generating random masks during training

* Update train_dreambooth_inpaint.py

refactored train_dreambooth_inpaint with black

* Update train_dreambooth_inpaint.py

* Update train_dreambooth_inpaint.py

* Update train_dreambooth_inpaint.py

Fix prior preservation

* add instructions to readme, fix SD2 compatibility
2022-12-02 18:06:57 +01:00
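Note: the inpainting variant trains on randomly masked images. The script's actual mask generator may differ; this is only an illustrative sketch of drawing a random rectangular mask per training image (all names and shapes are assumptions):

```python
import torch

def random_rect_mask(height, width, generator=None):
    """Illustrative random rectangular mask: 1 marks the region to in-paint."""
    mask = torch.zeros(1, height, width)
    mh = int(torch.randint(height // 4, height // 2 + 1, (1,), generator=generator))
    mw = int(torch.randint(width // 4, width // 2 + 1, (1,), generator=generator))
    top = int(torch.randint(0, height - mh + 1, (1,), generator=generator))
    left = int(torch.randint(0, width - mw + 1, (1,), generator=generator))
    mask[:, top : top + mh, left : left + mw] = 1.0
    return mask

image = torch.rand(3, 512, 512)        # stand-in for a training image
mask = random_rect_mask(512, 512)
masked_image = image * (mask < 0.5)    # the model only sees the unmasked pixels
```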
Will Berman 25f850a23b
[docs] [dreambooth training] num_class_images clarification (#1508) 2022-12-02 12:12:28 +01:00
Will Berman b25ae2e6ab
[docs] [dreambooth training] accelerate.utils.write_basic_config (#1513) 2022-12-02 12:11:18 +01:00
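Note: `accelerate.utils.write_basic_config` creates a default Accelerate configuration without the interactive prompts. A minimal sketch (the `"fp16"` value is illustrative):

```python
from accelerate.utils import write_basic_config

# Writes a default config file so `accelerate launch` works without
# going through the interactive `accelerate config` questionnaire.
write_basic_config(mixed_precision="fp16")
```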
Patrick von Platen 1d4ad34af0
[Dreambooth] Make compatible with alt diffusion (#1470)
* [Dreambooth] Make compatible with alt diffusion

* make style

* add example
2022-11-30 13:48:17 +01:00
Suraj Patil 6c56f05097
v-prediction training support (#1455)
* add get_velocity

* add v prediction for training

* fix saving

* add revision arg

* fix saving

* save checkpoints dreambooth

* fix saving embeds

* add instruction in readme

* quality

* noise_pred -> model_pred
2022-11-28 17:46:54 +01:00
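Note: with v-prediction, the training target is the velocity returned by the scheduler's `get_velocity` helper rather than the raw noise. A hedged sketch of the target selection, using stand-in tensors in place of real latents and UNet output:

```python
import torch
import torch.nn.functional as F
from diffusers import DDPMScheduler

noise_scheduler = DDPMScheduler(prediction_type="v_prediction")

# Stand-ins for a latent batch and the UNet prediction.
latents = torch.randn(2, 4, 64, 64)
noise = torch.randn_like(latents)
timesteps = torch.randint(0, noise_scheduler.config.num_train_timesteps, (2,))
model_pred = torch.randn_like(latents)

# Pick the regression target from the scheduler's prediction type.
if noise_scheduler.config.prediction_type == "v_prediction":
    target = noise_scheduler.get_velocity(latents, noise, timesteps)
else:
    target = noise  # "epsilon" prediction
loss = F.mse_loss(model_pred.float(), target.float(), reduction="mean")
```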
Suraj Patil 8b84f85192
[examples] fix mixed_precision arg (#1359)
* use accelerator to check mixed_precision

* default `mixed_precision` to `None`

* pass mixed_precision to accelerate launch
2022-11-22 13:35:23 +01:00
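Note: after this fix the script defers to Accelerate for the precision setting. A small sketch of the idea, assuming the `Accelerator` API (values shown are examples):

```python
from accelerate import Accelerator

# Leaving mixed_precision=None defers to `accelerate launch --mixed_precision=...`
# or to the value stored in the accelerate config file.
accelerator = Accelerator(mixed_precision=None)
print(accelerator.mixed_precision)  # resolved value, e.g. "no", "fp16", or "bf16"
```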
Glenn 'devalias' Grant db1cb0b1a2
[dreambooth] link to bitsandbytes readme for installation (#1229)
* add 'conda install cudatoolkit' to dreambooth 'training on 16GB' example 

fixes https://github.com/huggingface/diffusers/issues/1207

* Apply suggestions from code review

Co-authored-by: Suraj Patil <surajp815@gmail.com>
2022-11-15 12:53:54 +01:00
Jonathan Rahn 0025626cd9
fix typo in examples dreambooth README.md (#1073)
Update README.md

fixed typo
2022-11-02 13:15:30 +01:00
Suraj Patil 52f2128dc6
update readme for flax examples (#1026) 2022-10-27 15:25:25 +02:00
Duong A. Nguyen 90f91adb0e
[Flax] Add DreamBooth (#1001)
* [Flax] Add DreamBooth

* fix sample rng

* style

* not reuse rng

* add dtype for mixed precision training

* Add Flax example
2022-10-27 14:25:04 +02:00
Suraj Patil 7674a36a34
[dreambooth] don't use safety check when generating prior images (#922)
don't use safety check when generating prior images
2022-10-20 13:52:11 +02:00
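Note: the class (prior) images are only used internally as regularization data, so the safety checker is skipped when generating them. A minimal sketch of that pattern (model ID and prompt are placeholders):

```python
import torch
from diffusers import StableDiffusionPipeline

# Disable the safety checker only for generating prior-preservation class images.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    safety_checker=None,
    torch_dtype=torch.float16,
).to("cuda")
class_images = pipe("a photo of a dog", num_images_per_prompt=4).images
```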
Hanusz Leszek ce7d96681c
DOC Dreambooth Add --sample_batch_size=1 to the 8 GB dreambooth example script (#829)
Add --sample_batch_size=1 to the 8 GB dreambooth script
2022-10-20 13:44:37 +02:00
Suraj Patil fbe807bf57
[dreambooth] allow fine-tuning text encoder (#883)
* allow fine-tuning text encoder

* fix a few things

* update readme
2022-10-18 17:28:51 +02:00
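Note: when text-encoder fine-tuning is enabled, the UNet and text-encoder parameters are trained with a single optimizer. A hedged sketch of that idea, with stand-in modules instead of the real models:

```python
import itertools
import torch
import torch.nn as nn

# Stand-ins for the real UNet and CLIP text encoder modules.
unet = nn.Linear(8, 8)
text_encoder = nn.Linear(8, 8)

# With text-encoder fine-tuning on, both parameter sets share one optimizer;
# otherwise only the UNet parameters would be optimized.
train_text_encoder = True
params = (
    itertools.chain(unet.parameters(), text_encoder.parameters())
    if train_text_encoder
    else unet.parameters()
)
optimizer = torch.optim.AdamW(params, lr=1e-6)
```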
Omar Sanseviero b8c4d5801c
Remove unneeded use_auth_token (#839) 2022-10-14 13:27:03 +02:00
Henrik Forstén 81bdbb5e2a
DreamBooth DeepSpeed support for under 8 GB VRAM training (#735)
* Support deepspeed

* Dreambooth DeepSpeed documentation

* Remove unnecessary casts, documentation

Due to recent commits, some casts to half precision are no longer necessary.

Mention that DeepSpeed's version of Adam is about 2x faster.

* Review comments
2022-10-10 21:29:27 +02:00
Patrick von Platen 4deb16e830
[Docs] Advertise fp16 instead of autocast (#740)
up
2022-10-05 22:20:53 +02:00
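Note: the recommended pattern after this change is to load the pipeline weights in fp16 rather than wrapping inference in `torch.autocast`. A minimal sketch (model ID and prompt are placeholders):

```python
import torch
from diffusers import StableDiffusionPipeline

# Load the weights directly in fp16 instead of relying on torch.autocast.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
image = pipe("a photo of an astronaut riding a horse").images[0]
```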
Suraj Patil 19e559d5e9
remove use_auth_token from remaining places (#737)
remove use_auth_token
2022-10-05 17:40:49 +02:00
Yuta Hayashibe 7e92c5bc73
Fix typos (#718)
* Fix typos

* Update examples/dreambooth/train_dreambooth.py

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
2022-10-04 15:22:14 +02:00
Suraj Patil 210be4fe71
[examples] update transformers version (#665)
update transformers version in example
2022-09-29 11:16:28 +02:00
Suraj Patil e5eed5235b
[dreambooth] update install section (#650)
update install section
2022-09-27 17:32:21 +02:00
Zhenhuan Liu 3b747de845
Add training example for DreamBooth. (#554)
* Add training example for DreamBooth.

* Fix bugs.

* Update readme and default hyperparameters.

* Reformatting code with black.

* Update for multi-gpu training.

* Apply suggestions from code review

* improve sampling

* fix autocast

* improve sampling more

* fix saving

* actually fix saving

* fix saving

* improve dataset

* fix collate fn

* fix collate_fn

* fix collate fn

* fix key name

* fix dataset

* fix collate fn

* concat batch in collate fn

* add grad ckpt

* add option for 8bit adam

* do two forward passes for prior preservation

* Revert "do two forward passes for prior preservation"

This reverts commit 661ca4677e6dccc4ad596c2ee6ca4baad4159e95.

* add option for prior_loss_weight

* add option for clip grad norm

* add more comments

* update readme

* update readme

* Apply suggestions from code review

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* add docstr for dataset

* update the saving logic

* Update examples/dreambooth/README.md

* remove unused imports

Co-authored-by: Suraj Patil <surajp815@gmail.com>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2022-09-27 15:01:18 +02:00
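Note: the prior-preservation option added in this initial example computes the loss over a batch where instance and class samples are concatenated in the collate function. A hedged sketch of that loss, with stand-in tensors in place of real UNet predictions:

```python
import torch
import torch.nn.functional as F

prior_loss_weight = 1.0  # corresponds to the prior_loss_weight option

# Stand-ins: first half of the batch are instance samples,
# second half are class (prior-preservation) samples.
model_pred = torch.randn(4, 4, 64, 64)
target = torch.randn(4, 4, 64, 64)

# Split the combined batch back into instance and prior halves.
model_pred, model_pred_prior = torch.chunk(model_pred, 2, dim=0)
target, target_prior = torch.chunk(target, 2, dim=0)

instance_loss = F.mse_loss(model_pred.float(), target.float(), reduction="mean")
prior_loss = F.mse_loss(model_pred_prior.float(), target_prior.float(), reduction="mean")
loss = instance_loss + prior_loss_weight * prior_loss
```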