Commit Graph

6 Commits

Author SHA1 Message Date
Pedro Cuenca f37d880f6a
Remove wandb from text_to_image requirements.txt (#2092) 2023-01-25 08:54:14 +01:00
Sayak Paul ffb3a26c5c
[LoRA] Adds example on text2image fine-tuning with LoRA (#2031)
* example on fine-tuning with LoRA.

* apply make quality.

* fix: pipeline loading.

* Apply suggestions from code review

Co-authored-by: Suraj Patil <surajp815@gmail.com>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* apply suggestions for PR review.

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* apply make style and make quality.

* chore: remove mention of dreambooth from text2image.

* add: weight path and wandb run link.

* Apply suggestions from code review

* apply make style.

* make style

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Suraj Patil <surajp815@gmail.com>
2023-01-23 08:31:07 +01:00
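
The example added above fine-tunes only small low-rank adapters while the pretrained weights stay frozen. A minimal PyTorch sketch of that idea, using illustrative names (`LoRALinear`, `rank`, `alpha`) rather than the diffusers API itself:

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen pretrained linear layer plus a trainable low-rank update B @ A."""

    def __init__(self, base: nn.Linear, rank: int = 4, alpha: float = 4.0):
        super().__init__()
        self.base = base
        self.base.requires_grad_(False)            # pretrained weight stays frozen
        self.down = nn.Linear(base.in_features, rank, bias=False)
        self.up = nn.Linear(rank, base.out_features, bias=False)
        nn.init.normal_(self.down.weight, std=1.0 / rank)
        nn.init.zeros_(self.up.weight)             # update starts at zero, so the
        self.scale = alpha / rank                  # model is unchanged at step 0

    def forward(self, x):
        return self.base(x) + self.scale * self.up(self.down(x))
```

Only `rank * (in_features + out_features)` parameters per layer are trained, which is what keeps the example's checkpoints small.
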
Lucain bcb476797c
Remove modelcards dependency (#2050)
* Switch to huggingface_hub.ModelCard

* Remove modelcards dependency in favor of Jinja2
2023-01-20 16:39:42 +01:00
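
For reference, a hedged sketch of the replacement API: `huggingface_hub.ModelCard` renders Jinja2 templates directly, so the standalone `modelcards` package is no longer needed. The metadata values and model name below are illustrative:

```python
from huggingface_hub import ModelCard, ModelCardData

# Metadata that becomes the YAML block at the top of the model card.
card_data = ModelCardData(
    language="en",
    license="creativeml-openrail-m",
    tags=["text-to-image"],
)

# Render with the library's default Jinja2 template; pass template_path=
# to use a custom one. model_id here is an illustrative template variable.
card = ModelCard.from_template(card_data, model_id="my-finetuned-model")
card.save("README.md")
```
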
Haofan Wang f1b726e46e
Update requirements.txt (#1623)
* Update requirements.txt

* Update requirements_flax.txt

* Update requirements.txt

* Update requirements_flax.txt

* Update requirements.txt

* Update requirements_flax.txt
2022-12-09 08:35:27 +01:00
Suraj Patil c228331068
[examples] add check_min_version (#1550)
* add check_min_version for examples

* move __version__ to the top

* Apply suggestions from code review

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

* fix comment

* fix error_message

* adapt the install message

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
2022-12-06 14:36:50 +01:00
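
The guard added here sits at the top of each example script and fails fast with an install hint when the installed diffusers is older than the version the example was written against. Usage (the version string below is illustrative):

```python
from diffusers.utils import check_min_version

# Raises ImportError with an install message if the installed
# diffusers is older than the required version.
check_min_version("0.12.0")
```
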
Suraj Patil 66a5279a94
stable diffusion fine-tuning (#356)
* begin text2image script

* loading the datasets, preprocessing & transforms

* handle input features correctly

* add gradient checkpointing support

* fix output names

* run unet in train mode, not the text encoder

* use no_grad instead of freezing params

* default max steps None

* pad to longest

* don't pad when tokenizing

* fix encoding on multi-GPU

* fix stupid bug

* add random flip

* add ema

* fix ema

* put ema on cpu

* improve EMA model

* contiguous_format

* don't wrap vae and text encoder in accelerate

* remove no_grad

* use randn_like

* fix resize

* improve a few things

* log epoch loss

* set log level

* don't log each step

* remove max_length from collate

* style

* add report_to option

* make scale_lr false by default

* add grad clipping

* add an option to use 8bit adam

* fix logging in multi-gpu, log every step

* more comments

* remove eval for now

* address review comments

* add requirements file

* begin readme

* begin readme

* fix typo

* fix push to hub

* populate readme

* update readme

* remove use_auth_token from the script

* address some review comments

* better mixed precision support

* remove redundant to

* create ema model early

* Apply suggestions from code review

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

* better description for train_data_dir

* add diffusers in requirements

* update dataset_name_mapping

* update readme

* add inference example

Co-authored-by: anton-l <anton@huggingface.co>
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
2022-10-11 19:03:39 +02:00
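
Among the pieces this initial script introduces is EMA tracking of the UNet weights ("add ema", "improve EMA model"). A minimal sketch of that bookkeeping, with illustrative names rather than the script's actual EMA class: shadow parameters track an exponential moving average of the trained weights and are copied back for the final export.

```python
import torch

class EMA:
    """Keep an exponential moving average of a list of parameters."""

    def __init__(self, params, decay: float = 0.9999):
        self.decay = decay
        self.shadow = [p.detach().clone() for p in params]

    @torch.no_grad()
    def step(self, params):
        # shadow <- decay * shadow + (1 - decay) * param
        for s, p in zip(self.shadow, params):
            s.mul_(self.decay).add_(p.detach(), alpha=1 - self.decay)

    @torch.no_grad()
    def copy_to(self, params):
        # Overwrite the live weights with the averaged ones before saving.
        for s, p in zip(self.shadow, params):
            p.copy_(s)
```

Averaging smooths out per-step noise in the diffusion training loss, which is why the commit also moves the EMA copy to CPU: it doubles parameter memory otherwise.
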