Commit Graph

268 Commits

Author SHA1 Message Date
dan 18a09c7e00 Simplification and bugfix 2023-01-19 17:36:23 +08:00
AUTOMATIC 924e222004 add option to show/hide warnings
removed hiding warnings from LDSR
fixed/reworked few places that produced warnings
2023-01-18 23:04:24 +03:00
dan 4688bfff55 Add auto-sized cropping UI 2023-01-17 17:16:43 +08:00
Vladimir Mandic 110d1a2d59 add fields to settings file 2023-01-15 12:41:00 -05:00
AUTOMATIC d8b90ac121 big rework of progressbar/preview system to allow multiple users to prompt at the same time and not get previews of each other 2023-01-15 18:51:04 +03:00
AUTOMATIC a95f135308 change hash to sha256 2023-01-14 09:56:59 +03:00
AUTOMATIC 82725f0ac4 fix a bug caused by merge 2023-01-13 15:04:37 +03:00
AUTOMATIC1111 9cd7716753 Merge branch 'master' into tensorboard 2023-01-13 14:57:38 +03:00
AUTOMATIC1111 544e7a233e Merge pull request #6689 from Poktay/add_gradient_settings_to_logging_file
add gradient settings to training settings log files
2023-01-13 14:45:32 +03:00
AUTOMATIC a176d89487 print bucket sizes for training without resizing images #6620
fix an error when generating a picture with embedding in it
2023-01-13 14:32:15 +03:00
AUTOMATIC1111 486bda9b33 Merge pull request #6620 from guaneec/varsize_batch
Enable batch_size>1 for mixed-sized training
2023-01-13 14:03:31 +03:00
Josh R 0b262802b8 add gradient settings to training settings log files 2023-01-12 17:31:05 -08:00
Shondoit d52a80f7f7 Allow creation of zero vectors for TI 2023-01-12 09:22:29 +01:00
Vladimir Mandic 3f43d8a966 set descriptions 2023-01-11 10:28:55 -05:00
Lee Bousfield f9706acf43 Support loading textual inversion embeddings from safetensors files 2023-01-10 18:40:34 -07:00
dan 6be644fa04 Enable batch_size>1 for mixed-sized training 2023-01-11 05:31:58 +08:00
AUTOMATIC 1fbb6f9ebe make a dropdown for prompt template selection 2023-01-09 23:35:40 +03:00
AUTOMATIC 43bb5190fc remove/simplify some changes from #6481 2023-01-09 22:52:23 +03:00
AUTOMATIC1111 18c001792a Merge branch 'master' into varsize 2023-01-09 22:45:39 +03:00
AUTOMATIC 085427de0e make it possible for extensions/scripts to add their own embedding directories 2023-01-08 09:37:33 +03:00
AUTOMATIC a0c87f1fdf skip images in embeddings dir if they have a second .preview extension 2023-01-08 08:52:26 +03:00
dan 72497895b9 Move batchsize check 2023-01-08 02:57:36 +08:00
dan 669fb18d52 Add checkbox for variable training dims 2023-01-08 02:31:40 +08:00
dan 448b9cedab Allow variable img size 2023-01-08 02:14:36 +08:00
AUTOMATIC 79e39fae61 CLIP hijack rework 2023-01-07 01:46:13 +03:00
AUTOMATIC 683287d87f rework saving training params to file #6372 2023-01-06 08:52:06 +03:00
AUTOMATIC1111 88e01b237e Merge pull request #6372 from timntorres/save-ti-hypernet-settings-to-txt-revised
Save hypernet and textual inversion settings to text file, revised.
2023-01-06 07:59:44 +03:00
Faber 81133d4168 allow loading embeddings from subdirectories 2023-01-06 03:38:37 +07:00
Kuma fda04e620d fix typo in TI 2023-01-05 18:44:19 +01:00
timntorres b6bab2f052 Include model in log file. Exclude directory. 2023-01-05 09:14:56 -08:00
timntorres b85c2b5cf4 Clean up ti, add same behavior to hypernetwork. 2023-01-05 08:14:38 -08:00
timntorres eea8fc40e1 Add option to save ti settings to file. 2023-01-05 07:24:22 -08:00
AUTOMATIC1111 eeb1de4388 Merge branch 'master' into gradient-clipping 2023-01-04 19:56:35 +03:00
AUTOMATIC 525cea9245 use shared function from processing for creating dummy mask when training inpainting model 2023-01-04 17:58:07 +03:00
AUTOMATIC 184e670126 fix the merge 2023-01-04 17:45:01 +03:00
AUTOMATIC1111 da5c1e8a73 Merge branch 'master' into inpaint_textual_inversion 2023-01-04 17:40:19 +03:00
AUTOMATIC1111 7bbd984dda Merge pull request #6253 from Shondoit/ti-optim
Save Optimizer next to TI embedding
2023-01-04 14:09:13 +03:00
Vladimir Mandic 192ddc04d6 add job info to modules 2023-01-03 10:34:51 -05:00
Shondoit bddebe09ed Save Optimizer next to TI embedding
Also add a check to load only .PT and .BIN files as embeddings (since we add .optim files in the same directory).
2023-01-03 13:30:24 +01:00
Philpax c65909ad16 feat(api): return more data for embeddings 2023-01-02 12:21:48 +11:00
AUTOMATIC 311354c0bb fix the issue with training on SD2.0 2023-01-02 00:38:09 +03:00
AUTOMATIC bdbe09827b changed embedding accepted shape detection to use existing code and support the new alt-diffusion model, and reformatted messages a bit #6149 2022-12-31 22:49:09 +03:00
Vladimir Mandic f55ac33d44 validate textual inversion embeddings 2022-12-31 11:27:02 -05:00
Yuval Aboulafia 3bf5591efe fix F541 f-string without any placeholders 2022-12-24 21:35:29 +02:00
Jim Hays c0355caefe Fix various typos 2022-12-14 21:01:32 -05:00
AUTOMATIC1111 c9a2cfdf2a Merge branch 'master' into racecond_fix 2022-12-03 10:19:51 +03:00
AUTOMATIC1111 a2feaa95fc Merge pull request #5194 from brkirch/autocast-and-mps-randn-fixes
Use devices.autocast() and fix MPS randn issues
2022-12-03 09:58:08 +03:00
PhytoEpidemic 119a945ef7 Fix divide by 0 error
Fixes the edge case of a 0 weight that occasionally pops up in some specific situations and was crashing the script.
2022-12-02 12:16:29 -06:00
brkirch 4d5f1691dd Use devices.autocast instead of torch.autocast 2022-11-30 10:33:42 -05:00
AUTOMATIC1111 39827a3998 Merge pull request #4688 from parasi22/resolve-embedding-name-in-filewords
resolve [name] after resolving [filewords] in training
2022-11-27 22:46:49 +03:00
AUTOMATIC b48b7999c8 Merge remote-tracking branch 'flamelaw/master' 2022-11-27 12:19:59 +03:00
flamelaw 755df94b2a set TI AdamW default weight decay to 0 2022-11-27 00:35:44 +09:00
AUTOMATIC ce6911158b Add support for Stable Diffusion 2.0 2022-11-26 16:10:46 +03:00
flamelaw 89d8ecff09 small fixes 2022-11-23 02:49:01 +09:00
flamelaw 5b57f61ba4 fix pin_memory with different latent sampling method 2022-11-21 10:15:46 +09:00
AUTOMATIC c81d440d87 moved deepdanbooru to pure pytorch implementation 2022-11-20 16:39:20 +03:00
flamelaw 2d22d72cda fix random sampling with pin_memory 2022-11-20 16:14:27 +09:00
flamelaw a4a5735d0a remove unnecessary comment 2022-11-20 12:38:18 +09:00
flamelaw bd68e35de3 Gradient accumulation, autocast fix, new latent sampling method, etc 2022-11-20 12:35:26 +09:00
AUTOMATIC1111 89daf778fb Merge pull request #4812 from space-nuko/feature/interrupt-preprocessing
Add interrupt button to preprocessing
2022-11-19 13:26:33 +03:00
AUTOMATIC cdc8020d13 change StableDiffusionProcessing to internally use sampler name instead of sampler index 2022-11-19 12:01:51 +03:00
space-nuko c8c40c8a64 Add interrupt button to preprocessing 2022-11-17 18:05:29 -08:00
parasi 9a1aff645a resolve [name] after resolving [filewords] in training 2022-11-13 13:49:28 -06:00
AUTOMATIC1111 73776907ec Merge pull request #4117 from TinkTheBoush/master
Adding optional tag shuffling for training
2022-11-11 15:46:20 +03:00
KyuSeok Jung a1e271207d Update dataset.py 2022-11-11 10:56:53 +09:00
KyuSeok Jung b19af67d29 Update dataset.py 2022-11-11 10:54:19 +09:00
KyuSeok Jung 13a2f1dca3 adding tag dropout option 2022-11-11 10:29:55 +09:00
Muhammad Rizqi Nur d85c2cb2d5 Merge branch 'master' into gradient-clipping 2022-11-09 16:29:37 +07:00
AUTOMATIC 8011be33c3 move functions out of main body for image preprocessing for easier hijacking 2022-11-08 08:37:05 +03:00
Muhammad Rizqi Nur bb832d7725 Simplify grad clip 2022-11-05 11:48:38 +07:00
TinkTheBoush 821e2b883d change option position to Training setting 2022-11-04 19:39:03 +09:00
Fampai 39541d7725 Fixes race condition in training when VAE is unloaded
set_current_image can attempt to use the VAE while it is unloaded to the CPU during training
2022-11-04 04:50:22 -04:00
Muhammad Rizqi Nur 237e79c77d Merge branch 'master' into gradient-clipping 2022-11-02 20:48:58 +07:00
KyuSeok Jung af6fba2475 Merge branch 'master' into master 2022-11-02 17:10:56 +09:00
Nerogar cffc240a73 fixed textual inversion training with inpainting models 2022-11-01 21:02:07 +01:00
TinkTheBoush 467cae167a append_tag_shuffle 2022-11-01 23:29:12 +09:00
Fampai 890e68aaf7 Fixed minor bug
When unloading the VAE during TI training, generating images after training would error out
2022-10-31 10:07:12 -04:00
Fampai 3b0127e698 Merge branch 'master' of https://github.com/AUTOMATIC1111/stable-diffusion-webui into TI_optimizations 2022-10-31 09:54:51 -04:00
Fampai 006756f9cd Added TI training optimizations
option to use xattention optimizations when training
option to unload VAE when training
2022-10-31 07:26:08 -04:00
Muhammad Rizqi Nur cd4d59c0de Merge master 2022-10-30 18:57:51 +07:00
AUTOMATIC1111 17a2076f72 Merge pull request #3928 from R-N/validate-before-load
Optimize training a little
2022-10-30 09:51:36 +03:00
Muhammad Rizqi Nur 3d58510f21 Fix dataset still being loaded even when training will be skipped 2022-10-30 00:54:59 +07:00
Muhammad Rizqi Nur a07f054c86 Add missing info on hypernetwork/embedding model log
Mentioned here: https://github.com/AUTOMATIC1111/stable-diffusion-webui/discussions/1528#discussioncomment-3991513

Also group the saving into one
2022-10-30 00:49:29 +07:00
Muhammad Rizqi Nur ab05a74ead Revert "Add cleanup after training"
This reverts commit 3ce2bfdf95.
2022-10-30 00:32:02 +07:00
Muhammad Rizqi Nur a27d19de2e Additional assert on dataset 2022-10-29 19:44:05 +07:00
Muhammad Rizqi Nur 3ce2bfdf95 Add cleanup after training 2022-10-29 19:43:21 +07:00
Muhammad Rizqi Nur ab27c111d0 Add input validations before loading dataset for training 2022-10-29 18:09:17 +07:00
Muhammad Rizqi Nur ef4c94e1cf Improve lr schedule error message 2022-10-29 15:42:51 +07:00
Muhammad Rizqi Nur a5f3adbdd7 Allow trailing comma in learning rate 2022-10-29 15:37:24 +07:00
Muhammad Rizqi Nur 05e2e40537 Merge branch 'master' into gradient-clipping 2022-10-29 15:04:21 +07:00
AUTOMATIC1111 810e6a407d Merge pull request #3858 from R-N/log-csv
Fix log off by 1 #3847
2022-10-29 07:55:20 +03:00
Muhammad Rizqi Nur 9ceef81f77 Fix log off by 1 2022-10-28 20:48:08 +07:00
Muhammad Rizqi Nur 16451ca573 Learning rate sched syntax support for grad clipping 2022-10-28 17:16:23 +07:00
Muhammad Rizqi Nur 1618df41ba Gradient clipping for textual embedding 2022-10-28 10:31:27 +07:00
FlameLaw a0a7024c67 Fix random dataset shuffle on TI 2022-10-28 02:13:48 +09:00
DepFA 737eb28fac typo: cmd_opts.embedding_dir to cmd_opts.embeddings_dir 2022-10-26 17:38:08 +03:00
timntorres f4e1464217 Implement PR #3625 but for embeddings. 2022-10-26 10:14:35 +03:00
timntorres 4875a6c217 Implement PR #3309 but for embeddings. 2022-10-26 10:14:35 +03:00
timntorres c2dc9bfa89 Implement PR #3189 but for embeddings. 2022-10-26 10:14:35 +03:00
AUTOMATIC cbb857b675 enable creating embedding with --medvram 2022-10-26 09:44:02 +03:00