Commit Graph

39 Commits

Author SHA1 Message Date
Pedro Cuenca f4dddaf5ee
[textual_inversion] Fix resuming state when using gradient checkpointing (#2072)
* Fix resuming state when using gradient checkpointing.

Also, allow --resume_from_checkpoint to be used when the checkpoint does
not yet exist (a normal training run will be started).

* style
2023-01-24 10:25:41 +01:00
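
As an illustration of the resume behavior this commit describes, here is a minimal sketch assuming the example scripts' `args` and `accelerator` objects (not the actual diff):

```python
import os

if args.resume_from_checkpoint:
    if args.resume_from_checkpoint != "latest":
        path = os.path.basename(args.resume_from_checkpoint)
    else:
        # pick the most recent checkpoint-<step> directory, if any exists
        dirs = [d for d in os.listdir(args.output_dir) if d.startswith("checkpoint")]
        dirs = sorted(dirs, key=lambda d: int(d.split("-")[1]))
        path = dirs[-1] if dirs else None

    if path is None or not os.path.exists(os.path.join(args.output_dir, path)):
        # checkpoint does not exist yet: fall back to a normal training run
        print("No checkpoint found, starting a fresh training run.")
        args.resume_from_checkpoint = None
    else:
        accelerator.load_state(os.path.join(args.output_dir, path))
        global_step = int(path.split("-")[1])
```
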
Suraj Patil 6fedb29f11
[examples] add dataloader_num_workers argument (#2070)
add --dataloader_num_workers argument
2023-01-23 10:58:03 +01:00
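
A sketch of how such a flag is typically wired in, assuming the script's existing `parser` and `train_dataset`:

```python
import torch

parser.add_argument(
    "--dataloader_num_workers",
    type=int,
    default=0,
    help="Number of subprocesses for data loading; 0 loads data in the main process.",
)
# ... later, when building the loader:
train_dataloader = torch.utils.data.DataLoader(
    train_dataset,
    batch_size=args.train_batch_size,
    shuffle=True,
    num_workers=args.dataloader_num_workers,
)
```
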
Lucain 5ea4be86ab
Create repo before cloning in examples (#2047)
* Create repo before cloning in examples

* code quality
2023-01-20 16:38:37 +01:00
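
The pattern in question, sketched with huggingface_hub (argument names assumed from the example scripts):

```python
from huggingface_hub import Repository, create_repo

# Create the Hub repo first (a no-op when it already exists),
# then clone it locally for pushing results.
repo_url = create_repo(args.hub_model_id, exist_ok=True)
repo = Repository(args.output_dir, clone_from=repo_url)
```
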
Patrick von Platen ed616bd8a8
[LoRA] Add LoRA training script (#1884)
* [Lora] first upload

* add first lora version

* upload

* more

* first training

* up

* correct

* improve

* finish loaders and inference

* up

* up

* fix more

* up

* finish more

* finish more

* up

* up

* change year

* revert year change

* Change lines

* Add cloneofsimo as co-author.

Co-authored-by: Simo Ryu <cloneofsimo@gmail.com>

* finish

* fix docs

* Apply suggestions from code review

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
Co-authored-by: Suraj Patil <surajp815@gmail.com>

* upload

* finish

Co-authored-by: Simo Ryu <cloneofsimo@gmail.com>
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
Co-authored-by: Suraj Patil <surajp815@gmail.com>
2023-01-18 18:05:51 +01:00
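
For orientation, an illustrative LoRA layer showing the technique the script trains (a sketch of the general idea, not the script's actual implementation):

```python
import torch

class LoRALinear(torch.nn.Module):
    """Frozen base weight plus a trainable low-rank update scaled by alpha / rank."""

    def __init__(self, base: torch.nn.Linear, rank: int = 4, alpha: float = 4.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad_(False)  # keep the pretrained weight frozen
        self.down = torch.nn.Linear(base.in_features, rank, bias=False)
        self.up = torch.nn.Linear(rank, base.out_features, bias=False)
        torch.nn.init.normal_(self.down.weight, std=1.0 / rank)
        torch.nn.init.zeros_(self.up.weight)  # the low-rank update starts at zero
        self.scale = alpha / rank

    def forward(self, x):
        return self.base(x) + self.scale * self.up(self.down(x))

layer = LoRALinear(torch.nn.Linear(768, 768))
out = layer(torch.randn(2, 768))  # identical to the base layer at initialization
```
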
Patrick von Platen 522f8aa7b2
[Black] Update black library (#2007) 2023-01-16 15:16:28 +01:00
Alex Redden 19a0ce4a47
Fix lr-scaling store_true & default=True CLI argument for textual_inversion training (#1090)
Fix default lr-scaling CLI argument
2023-01-04 15:43:41 +01:00
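
The bug being fixed, as a sketch (flag name follows the textual_inversion script; `parser` assumed): an `action="store_true"` flag with `default=True` can never be switched off, so the default must be False.

```python
parser.add_argument(
    "--scale_lr",
    action="store_true",
    default=False,  # was True, which made the flag impossible to disable
    help="Scale the learning rate by GPUs, gradient accumulation steps, and batch size.",
)
```
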
Patrick von Platen 8ed08e4270
[Deterministic torch randn] Allow tensors to be generated on CPU (#1902)
* [Deterministic torch randn] Allow tensors to be generated on CPU

* fix more

* up

* fix more

* up

* Update src/diffusers/utils/torch_utils.py

Co-authored-by: Anton Lozhkov <anton@huggingface.co>

* Apply suggestions from code review

* up

* up

* Apply suggestions from code review

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

Co-authored-by: Anton Lozhkov <anton@huggingface.co>
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
2023-01-03 18:22:40 +01:00
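
The gist of the change, as a sketch: draw noise on CPU with a seeded generator so the values are reproducible across machines, then move the tensor to the device.

```python
import torch

generator = torch.Generator(device="cpu").manual_seed(0)
latents = torch.randn((1, 4, 64, 64), generator=generator, device="cpu")
# identical values regardless of which GPU (if any) runs the model
latents = latents.to("cuda" if torch.cuda.is_available() else "cpu")
```
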
Pedro Cuenca 8c14ca3d43
Fixes to the help for `report_to` in training scripts (#1888)
Fixes to the help for report_to in training scripts.
2023-01-02 15:53:28 +01:00
Suraj Patil fa1f4701e8
[examples] misc fixes (#1886)
* misc fixes

* more comments

* Update examples/textual_inversion/textual_inversion.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* set transformers verbosity to warning

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2023-01-02 14:09:01 +01:00
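
The verbosity change mentioned above, as a sketch: keep only warnings and errors from transformers during training.

```python
import transformers

transformers.utils.logging.set_verbosity_warning()
```
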
Suraj Patil e4fe941312
[examples] update loss computation (#1861)
update loss computation
2022-12-30 14:32:38 +01:00
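
A sketch of the typical loss computation in these scripts (dummy tensors stand in for the training loop's prediction and target):

```python
import torch
import torch.nn.functional as F

model_pred = torch.randn(4, 4, 64, 64)  # stand-in for the UNet output
target = torch.randn(4, 4, 64, 64)      # stand-in for the noise or velocity target
# regress in float32 for numerical stability under mixed precision
loss = F.mse_loss(model_pred.float(), target.float(), reduction="mean")
```
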
Suraj Patil 9ea7052f0e
[textual inversion] add gradient checkpointing and small fixes. (#1848)
Co-authored-by: Henrik Forstén <henrik.forsten@gmail.com>

* update TI script

* make flake happy

* fix typo
2022-12-29 15:02:29 +01:00
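
The switches this adds, sketched assuming the script's loaded models: recompute activations in the backward pass to cut memory at the cost of extra compute. Note the two method names differ across libraries.

```python
if args.gradient_checkpointing:
    unet.enable_gradient_checkpointing()          # diffusers model API
    text_encoder.gradient_checkpointing_enable()  # transformers model API
```
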
Katsuya 8874027efc
Make xformers optional even if it is available (#1753)
* Make xformers optional even if it is available

* Raise exception if xformers is used but not available

* Rename use_xformers to enable_xformers_memory_efficient_attention

* Add a note about xformers in README

* Reformat code style
2022-12-27 19:47:50 +01:00
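
The opt-in pattern introduced here, sketched with the example scripts' names: xformers is enabled only on request, and a missing install raises instead of silently degrading.

```python
from diffusers.utils.import_utils import is_xformers_available

if args.enable_xformers_memory_efficient_attention:
    if is_xformers_available():
        unet.enable_xformers_memory_efficient_attention()
    else:
        raise ValueError("xformers is not available. Make sure it is installed correctly.")
```
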
Suraj Patil 9be94d9c66
[textual_inversion] unwrap_model text encoder before accessing weights (#1816)
* unwrap_model text encoder before accessing weights

* fix another call

* fix the right call
2022-12-23 16:46:24 +01:00
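
The fix, sketched assuming the script's `accelerator` and `text_encoder`: under accelerate the model may be wrapped (e.g. by DDP), so unwrap it before reaching into the embedding weights.

```python
learned_embeds = (
    accelerator.unwrap_model(text_encoder)
    .get_input_embeddings()
    .weight[placeholder_token_id]
)
```
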
Patrick von Platen f2acfb67ac
Remove hardcoded names from PT scripts (#1778)
* Remove hardcoded names from PT scripts

* Apply suggestions from code review

Co-authored-by: Suraj Patil <surajp815@gmail.com>
2022-12-23 15:36:29 +01:00
jiqing-feng c5f04d4e34
apply amp bf16 on textual inversion (#1465)
* add conf.yaml

* enable bf16

enable amp bf16 for unet forward

fix style

fix readme

remove useless file

* change amp to full bf16

* align

* make style

* fix format
2022-12-15 21:15:23 +01:00
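
A sketch of the full-bf16 variant the PR settled on (script names assumed): run the frozen unet/vae weights in bfloat16 rather than relying on autocast.

```python
import torch

weight_dtype = torch.bfloat16 if args.mixed_precision == "bf16" else torch.float32
unet.to(accelerator.device, dtype=weight_dtype)
vae.to(accelerator.device, dtype=weight_dtype)
```
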
Pedro Cuenca badddee0ef
Add state checkpointing to other training scripts (#1687)
* Add state checkpointing to other training scripts

* Fix first_epoch

* Apply suggestions from code review

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* Update Dreambooth checkpoint help message.

* Dreambooth docs: checkpoints, inference from a checkpoint.

* make style

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2022-12-15 19:49:40 +01:00
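
Periodic state checkpointing, sketched with the scripts' usual names: `save_state` persists optimizer, LR scheduler, and RNG state alongside the weights so a run can resume exactly where it stopped.

```python
import os

if global_step % args.checkpointing_steps == 0:
    if accelerator.is_main_process:
        save_path = os.path.join(args.output_dir, f"checkpoint-{global_step}")
        accelerator.save_state(save_path)
```
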
Patrick von Platen 69de9b2eaa
[Textual Inversion] Do not update other embeddings (#1665) 2022-12-12 17:44:39 +01:00
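
A sketch of the fix, assuming the training loop's objects (`tokenizer`, `text_encoder`, `orig_embeds_params`): after each optimizer step, restore every embedding row except the placeholder's, so only the new token is actually learned.

```python
import torch

index_no_updates = torch.arange(len(tokenizer)) != placeholder_token_id
with torch.no_grad():
    accelerator.unwrap_model(text_encoder).get_input_embeddings().weight[
        index_no_updates
    ] = orig_embeds_params[index_no_updates]
```
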
Pedro Cuenca 0c18d02cc9
Remove spurious arg in training scripts (#1644)
Remove spurious arg in training scripts.
2022-12-10 13:57:20 +01:00
Patrick von Platen 6b68afd8e4
do not automatically enable xformers (#1640)
* do not automatically enable xformers

* up
2022-12-09 18:28:36 +01:00
Suraj Patil c228331068
[examples] add check_min_version (#1550)
* add check_min_version for examples

* move __version__ to the top

* Apply suggestions from code review

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

* fix comment

* fix error_message

* adapt the install message

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
2022-12-06 14:36:50 +01:00
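
The guard the examples gained, as a sketch (the version string here is illustrative): fail fast with an install hint when diffusers is older than the script expects.

```python
from diffusers.utils import check_min_version

check_min_version("0.12.0.dev0")
```
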
Suraj Patil 634be6e53d
[examples] use from_pretrained to load scheduler (#1549)
use from_pretrained to load scheduler
2022-12-05 15:32:24 +01:00
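
The change, sketched: load the scheduler from the model repo's subfolder instead of instantiating it with hardcoded parameters (argument name follows the scripts).

```python
from diffusers import DDPMScheduler

noise_scheduler = DDPMScheduler.from_pretrained(
    args.pretrained_model_name_or_path, subfolder="scheduler"
)
```
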
allo- d1bcbf38ca
[textual_inversion] Add an option for only saving the embeddings (#781)
[textual_inversion] Add an option to only save embeddings

Add a command-line option --only_save_embeds to the example script so the
full model is not saved. Only the learned embeddings are then saved; they
can be added to the original model at runtime, much as they are created in
the training script.
Saving the full model is still forced when --push_to_hub is used. (Implements #759)
2022-12-05 14:45:13 +01:00
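
What --only_save_embeds stores, sketched with the script's objects: just the learned embedding vector, keyed by the placeholder token.

```python
import os
import torch

learned_embeds = (
    accelerator.unwrap_model(text_encoder)
    .get_input_embeddings()
    .weight[placeholder_token_id]
)
torch.save(
    {args.placeholder_token: learned_embeds.detach().cpu()},
    os.path.join(args.output_dir, "learned_embeds.bin"),
)
```
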
Suraj Patil 6c56f05097
v-prediction training support (#1455)
* add get_velocity

* add v prediction for training

* fix saving

* add revision arg

* fix saving

* save checkpoints dreambooth

* fix saving embeds

* add instruction in readme

* quality

* noise_pred -> model_pred
2022-11-28 17:46:54 +01:00
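
The target selection v-prediction adds, sketched assuming the loop's `noise_scheduler`, `latents`, `noise`, and `timesteps`:

```python
if noise_scheduler.config.prediction_type == "epsilon":
    target = noise
elif noise_scheduler.config.prediction_type == "v_prediction":
    target = noise_scheduler.get_velocity(latents, noise, timesteps)
else:
    raise ValueError(f"Unknown prediction type {noise_scheduler.config.prediction_type}")
```
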
Patrick von Platen 195e437ac5
Correct path to scheduler (#1322)
* [Examples] Correct path

* up
2022-11-18 12:32:49 +01:00
Patrick von Platen 245e9cc7ff fix make style 2022-11-17 15:03:31 +01:00
Pedro Cuenca 1138d63b51
Temporary local test for PIL_INTERPOLATION (#1317)
* Temporary local test for PIL_INTERPOLATION

* Fix examples too.
2022-11-16 18:42:21 +01:00
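
A sketch of what the PIL_INTERPOLATION shim papers over: Pillow 9.1 moved the resampling constants into PIL.Image.Resampling.

```python
import PIL
import PIL.Image
from packaging import version

if version.parse(PIL.__version__) >= version.parse("9.1.0"):
    PIL_INTERPOLATION = {
        "bilinear": PIL.Image.Resampling.BILINEAR,
        "bicubic": PIL.Image.Resampling.BICUBIC,
        "lanczos": PIL.Image.Resampling.LANCZOS,
        "nearest": PIL.Image.Resampling.NEAREST,
    }
else:
    PIL_INTERPOLATION = {
        "bilinear": PIL.Image.BILINEAR,
        "bicubic": PIL.Image.BICUBIC,
        "lanczos": PIL.Image.LANCZOS,
        "nearest": PIL.Image.NEAREST,
    }
```
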
Patrick von Platen 65d136e067
Add improved handling of pil (#1309)
* Better error message for transformers dummy

* [PIL] Better deprecation functionality

* up
2022-11-16 15:58:22 +01:00
Patrick von Platen c18941b01a
[Better scheduler docs] Improve usage examples of schedulers (#890)
* [Better scheduler docs] Improve usage examples of schedulers

* finish

* fix warnings and add test

* finish

* more replacements

* adapt fast tests hf token

* correct more

* Apply suggestions from code review

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

* Integrate compatibility with euler

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
2022-10-31 17:26:30 +01:00
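
The documented usage pattern, sketched (the model id is only an example): instantiate a compatible scheduler from the pipeline's own scheduler config.

```python
from diffusers import EulerDiscreteScheduler, StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
pipe.scheduler = EulerDiscreteScheduler.from_config(pipe.scheduler.config)
```
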
YaYaB 906e4105d7
Fix push_to_hub for dreambooth and textual_inversion (#748)
* Fix push_to_hub for dreambooth and textual_inversion

* Use repo.push_to_hub instead of push_to_hub
2022-10-07 11:50:28 +02:00
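
A sketch, assuming `repo` is the Repository handle cloned earlier: pushing through the handle targets the right repo, unlike the model's own push_to_hub.

```python
repo.push_to_hub(commit_message="End of training", blocking=False)
```
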
Suraj Patil 19e559d5e9
remove use_auth_token from remaining places (#737)
remove use_auth_token
2022-10-05 17:40:49 +02:00
Isamu Isozaki 7f31142c2e
Added script to save during textual inversion training. Issue 524 (#645)
* Added script to save during training

* Suggested changes
2022-09-28 17:26:02 +02:00
Kashif Rasul bd8df2da89
[Pytorch] Pytorch only schedulers (#534)
* pytorch only schedulers

* fix style

* remove match_shape

* pytorch only ddpm

* remove SchedulerMixin

* remove numpy from karras_ve

* fix types

* remove numpy from lms_discrete

* remove numpy from pndm

* fix typo

* remove mixin and numpy from sde_vp and ve

* remove remaining tensor_format

* fix style

* sigmas has to be torch tensor

* removed set_format in readme

* remove set format from docs

* remove set_format from pipelines

* update tests

* fix typo

* continue to use mixin

* fix imports

* removed unused imports

* match shape instead of assuming image shapes

* remove import typo

* update call to add_noise

* use math instead of numpy

* fix t_index

* removed commented out numpy tests

* timesteps needs to be discrete

* cast timesteps to int in flax scheduler too

* fix device mismatch issue

* small fix

* Update src/diffusers/schedulers/scheduling_pndm.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2022-09-27 15:27:34 +02:00
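
The torch-only API, sketched: tensors in, tensors out, with no numpy round-trips or tensor_format switches.

```python
import torch
from diffusers import DDPMScheduler

scheduler = DDPMScheduler(num_train_timesteps=1000)
latents = torch.randn(4, 4, 64, 64)
noise = torch.randn_like(latents)
timesteps = torch.randint(
    0, scheduler.config.num_train_timesteps, (latents.shape[0],)
).long()
noisy_latents = scheduler.add_noise(latents, noise, timesteps)
```
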
Yuta Hayashibe 76d492ea49
Fix typos and add Typo check GitHub Action (#483)
* Fix typos

* Add a typo check action

* Fix a bug

* Changed to manual typo check currently

Ref: https://github.com/huggingface/diffusers/pull/483#pullrequestreview-1104468010

Co-authored-by: Anton Lozhkov <aglozhkov@gmail.com>

* Removed a confusing message

* Renamed "nin_shortcut" to "in_shortcut"

* Add memo about NIN

Co-authored-by: Anton Lozhkov <aglozhkov@gmail.com>
2022-09-16 15:36:51 +02:00
Kashif Rasul b34be039f9
Karras VE, DDIM and DDPM flax schedulers (#508)
* beta never changes; removed from state

* fix typos in docs

* removed unused var

* initial ddim flax scheduler

* import

* added dummy objects

* fix style

* fix typo

* docs

* fix typo in comment

* set return type

* added flax ddpm

* fix style

* remake

* pass PRNG key as argument and split before use

* fix doc string

* use config

* added flax Karras VE scheduler

* make style

* fix dummy

* fix ndarray type annotation

* replace returns a new state

* added lms_discrete scheduler

* use self.config

* add_noise needs state

* use config

* use config

* docstring

* added flax score sde ve

* fix imports

* fix typos
2022-09-15 15:55:48 +02:00
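
The functional pattern these flax schedulers follow, as a sketch (exact signatures may vary per scheduler): state is an immutable object returned by every call, never mutated in place.

```python
from diffusers import FlaxDDPMScheduler

scheduler = FlaxDDPMScheduler(num_train_timesteps=1000)
state = scheduler.create_state()
state = scheduler.set_timesteps(state, num_inference_steps=50, shape=(1, 4, 64, 64))
```
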
Patrick von Platen b2b3b1a8ab
[Black] Update black (#433)
* Update black

* update table
2022-09-08 22:10:01 +02:00
Suraj Patil ac84c2fa5a
[textual-inversion] fix saving embeds (#387)
fix saving embeds
2022-09-07 15:49:16 +05:30
Patrick von Platen cc59b05635
[ModelOutputs] Replace dict outputs with Dict/Dataclass and allow returning tuples (#334)
* add outputs for models

* add for pipelines

* finish schedulers

* better naming

* adapt tests as well

* replace dict access with . access

* make schedulers work

* finish

* correct readme

* make backwards compatible

* up

* small fix

* finish

* more fixes

* more fixes

* Apply suggestions from code review

Co-authored-by: Suraj Patil <surajp815@gmail.com>
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

* Update src/diffusers/models/vae.py

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

* Adapt model outputs

* Apply more suggestions

* finish examples

* correct

Co-authored-by: Suraj Patil <surajp815@gmail.com>
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
2022-09-05 14:49:26 +02:00
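
The new output convention, sketched on a scheduler: outputs are dataclasses with attribute access, and return_dict=False yields a plain tuple for backwards compatibility.

```python
import torch
from diffusers import DDPMScheduler

scheduler = DDPMScheduler(num_train_timesteps=1000)
scheduler.set_timesteps(10)
sample = torch.randn(1, 3, 32, 32)
model_output = torch.randn(1, 3, 32, 32)
t = scheduler.timesteps[0]
prev = scheduler.step(model_output, t, sample).prev_sample             # attribute access
prev = scheduler.step(model_output, t, sample, return_dict=False)[0]   # tuple access
```
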
Suraj Patil 55d6453fce
[textual_inversion] use tokenizer.add_tokens to add placeholder_token (#357)
use add_tokens
2022-09-05 13:12:49 +05:30
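
The pattern, sketched assuming a loaded `tokenizer`/`text_encoder` and the script's `args`: register the placeholder as a new token and grow the embedding matrix to make room for it.

```python
num_added_tokens = tokenizer.add_tokens(args.placeholder_token)
if num_added_tokens == 0:
    raise ValueError(f"The tokenizer already contains the token {args.placeholder_token}.")
text_encoder.resize_token_embeddings(len(tokenizer))
placeholder_token_id = tokenizer.convert_tokens_to_ids(args.placeholder_token)
```
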
Suraj Patil d0d3e24ec1
Textual inversion (#266)
* add textual inversion script

* make the loop work

* make coarse_loss optional

* save pipeline after training

* add arg pretrained_model_name_or_path

* fix saving

* fix gradient_accumulation_steps

* style

* fix progress bar steps

* scale lr

* add argument to accept style

* remove unused args

* scale lr using num gpus

* load tokenizer using args

* add checks when converting init token to id

* improve comments and style

* document args

* more cleanup

* fix default adamw args

* TextualInversionWrapper -> CLIPTextualInversionWrapper

* fix tokenizer loading

* Use the CLIPTextModel instead of wrapper

* clean dataset

* remove commented code

* fix accessing grads for multi-gpu

* more cleanup

* fix saving on multi-GPU

* init_placeholder_token_embeds

* add seed

* fix flip

* fix multi-gpu

* add utility methods in wrapper

* remove ipynb

* don't use wrapper

* don't pass vae and unet to accelerate prepare

* bring back accelerator.accumulate

* scale latents

* use only one progress bar for steps

* push_to_hub at the end of training

* remove unused args

* log some important stats

* store args in tensorboard

* pretty comments

* save the trained embeddings

* move the script up

* add requirements file

* more cleanup

* fix typo

* begin readme

* style -> learnable_property

* keep vae and unet in eval mode

* address review comments

* address more comments

* removed unused args

* add train command in readme

* update readme
2022-09-02 14:23:52 +05:30
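
Finally, the script's core setup, sketched (assuming the `tokenizer`, `text_encoder`, and `placeholder_token_id` from the snippets above): the new embedding starts from the initializer token's vector.

```python
token_ids = tokenizer.encode(args.initializer_token, add_special_tokens=False)
if len(token_ids) > 1:
    raise ValueError("The initializer token must map to a single token.")
initializer_token_id = token_ids[0]

token_embeds = text_encoder.get_input_embeddings().weight.data
token_embeds[placeholder_token_id] = token_embeds[initializer_token_id]
```
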