Commit Graph

281 Commits

Author SHA1 Message Date
Vladimir Mandic 192ddc04d6 add job info to modules 2023-01-03 10:34:51 -05:00
Shondoit bddebe09ed Save Optimizer next to TI embedding
Also add check to load only .PT and .BIN files as embeddings. (since we add .optim files in the same directory)
2023-01-03 13:30:24 +01:00
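The commit above saves the optimizer state next to the embedding and then restricts embedding loading to .pt/.bin files so the new .optim files in the same directory are skipped. A minimal, hypothetical sketch of that idea follows; the file layout and helper names are illustrative, not the repository's actual code.

```python
import os
import torch

# Illustrative sketch: treat only .pt/.bin files as embeddings so the .optim
# files saved alongside them are ignored when scanning the directory.
EMBEDDING_EXTENSIONS = {".pt", ".bin"}

def iter_embedding_files(embeddings_dir):
    for name in sorted(os.listdir(embeddings_dir)):
        if os.path.splitext(name)[1].lower() in EMBEDDING_EXTENSIONS:
            yield os.path.join(embeddings_dir, name)

def save_embedding_with_optimizer(embedding_state, optimizer, path):
    # Save the optimizer state next to the embedding, as described above,
    # e.g. my-style.pt plus my-style.optim.
    torch.save(embedding_state, path)
    torch.save(optimizer.state_dict(), os.path.splitext(path)[0] + ".optim")
```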
Philpax c65909ad16 feat(api): return more data for embeddings 2023-01-02 12:21:48 +11:00
AUTOMATIC 311354c0bb fix the issue with training on SD2.0 2023-01-02 00:38:09 +03:00
AUTOMATIC bdbe09827b changed embedding accepted shape detection to use existing code and support the new alt-diffusion model, and reformatted messages a bit #6149 2022-12-31 22:49:09 +03:00
Vladimir Mandic f55ac33d44 validate textual inversion embeddings 2022-12-31 11:27:02 -05:00
Yuval Aboulafia 3bf5591efe fix F541 f-string without any placeholders 2022-12-24 21:35:29 +02:00
Jim Hays c0355caefe Fix various typos 2022-12-14 21:01:32 -05:00
AUTOMATIC1111 c9a2cfdf2a Merge branch 'master' into racecond_fix 2022-12-03 10:19:51 +03:00
AUTOMATIC1111 a2feaa95fc Merge pull request #5194 from brkirch/autocast-and-mps-randn-fixes
Use devices.autocast() and fix MPS randn issues
2022-12-03 09:58:08 +03:00
PhytoEpidemic 119a945ef7 Fix divide by 0 error
Fix of the edge case 0 weight that occasionally will pop up in some specific situations. This was crashing the script.
2022-12-02 12:16:29 -06:00
brkirch 4d5f1691dd Use devices.autocast instead of torch.autocast 2022-11-30 10:33:42 -05:00
AUTOMATIC1111 39827a3998 Merge pull request #4688 from parasi22/resolve-embedding-name-in-filewords
resolve [name] after resolving [filewords] in training
2022-11-27 22:46:49 +03:00
AUTOMATIC b48b7999c8 Merge remote-tracking branch 'flamelaw/master' 2022-11-27 12:19:59 +03:00
flamelaw 755df94b2a set TI AdamW default weight decay to 0 2022-11-27 00:35:44 +09:00
AUTOMATIC ce6911158b Add support for Stable Diffusion 2.0 2022-11-26 16:10:46 +03:00
flamelaw 89d8ecff09 small fixes 2022-11-23 02:49:01 +09:00
flamelaw 5b57f61ba4 fix pin_memory with different latent sampling method 2022-11-21 10:15:46 +09:00
AUTOMATIC c81d440d87 moved deepdanbooru to pure pytorch implementation 2022-11-20 16:39:20 +03:00
flamelaw 2d22d72cda fix random sampling with pin_memory 2022-11-20 16:14:27 +09:00
flamelaw a4a5735d0a remove unnecessary comment 2022-11-20 12:38:18 +09:00
flamelaw bd68e35de3 Gradient accumulation, autocast fix, new latent sampling method, etc 2022-11-20 12:35:26 +09:00
AUTOMATIC1111 89daf778fb Merge pull request #4812 from space-nuko/feature/interrupt-preprocessing
Add interrupt button to preprocessing
2022-11-19 13:26:33 +03:00
AUTOMATIC cdc8020d13 change StableDiffusionProcessing to internally use sampler name instead of sampler index 2022-11-19 12:01:51 +03:00
space-nuko c8c40c8a64 Add interrupt button to preprocessing 2022-11-17 18:05:29 -08:00
parasi 9a1aff645a resolve [name] after resolving [filewords] in training 2022-11-13 13:49:28 -06:00
AUTOMATIC1111 73776907ec Merge pull request #4117 from TinkTheBoush/master
Adding optional tag shuffling for training
2022-11-11 15:46:20 +03:00
KyuSeok Jung a1e271207d Update dataset.py 2022-11-11 10:56:53 +09:00
KyuSeok Jung b19af67d29 Update dataset.py 2022-11-11 10:54:19 +09:00
KyuSeok Jung 13a2f1dca3 adding tag drop out option 2022-11-11 10:29:55 +09:00
Muhammad Rizqi Nur d85c2cb2d5 Merge branch 'master' into gradient-clipping 2022-11-09 16:29:37 +07:00
AUTOMATIC 8011be33c3 move functions out of main body for image preprocessing for easier hijacking 2022-11-08 08:37:05 +03:00
Muhammad Rizqi Nur bb832d7725 Simplify grad clip 2022-11-05 11:48:38 +07:00
TinkTheBoush 821e2b883d change option position to Training setting 2022-11-04 19:39:03 +09:00
Fampai 39541d7725 Fixes race condition in training when VAE is unloaded
set_current_image can attempt to use the VAE when it is unloaded to
the CPU while training
2022-11-04 04:50:22 -04:00
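The commit above describes the race: a preview callback can try to decode with the VAE while training has already offloaded it to the CPU. Below is a generic, hedged sketch of one way to serialize the two operations; it is an illustration of the pattern, not the repository's actual fix, and all names are hypothetical.

```python
import threading

# Hypothetical guard: take a lock around VAE device moves and around preview
# decoding so a preview never runs against a half-moved model.
vae_lock = threading.Lock()

def unload_vae_to_cpu(vae):
    with vae_lock:
        vae.to("cpu")

def decode_preview(vae, latents, device="cuda"):
    with vae_lock:
        vae.to(device)      # bring the VAE back before decoding
        images = vae.decode(latents)
        vae.to("cpu")       # return it to the CPU to keep VRAM free for training
    return images
```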
Muhammad Rizqi Nur 237e79c77d Merge branch 'master' into gradient-clipping 2022-11-02 20:48:58 +07:00
KyuSeok Jung af6fba2475 Merge branch 'master' into master 2022-11-02 17:10:56 +09:00
Nerogar cffc240a73 fixed textual inversion training with inpainting models 2022-11-01 21:02:07 +01:00
TinkTheBoush 467cae167a append_tag_shuffle 2022-11-01 23:29:12 +09:00
Fampai 890e68aaf7 Fixed minor bug
when unloading vae during TI training, generating images after
training will error out
2022-10-31 10:07:12 -04:00
Fampai 3b0127e698 Merge branch 'master' of https://github.com/AUTOMATIC1111/stable-diffusion-webui into TI_optimizations 2022-10-31 09:54:51 -04:00
Fampai 006756f9cd Added TI training optimizations
option to use xattention optimizations when training
option to unload vae when training
2022-10-31 07:26:08 -04:00
Muhammad Rizqi Nur cd4d59c0de Merge master 2022-10-30 18:57:51 +07:00
AUTOMATIC1111 17a2076f72 Merge pull request #3928 from R-N/validate-before-load
Optimize training a little
2022-10-30 09:51:36 +03:00
Muhammad Rizqi Nur 3d58510f21 Fix dataset still being loaded even when training will be skipped 2022-10-30 00:54:59 +07:00
Muhammad Rizqi Nur a07f054c86 Add missing info on hypernetwork/embedding model log
Mentioned here: https://github.com/AUTOMATIC1111/stable-diffusion-webui/discussions/1528#discussioncomment-3991513

Also group the saving into one
2022-10-30 00:49:29 +07:00
Muhammad Rizqi Nur ab05a74ead Revert "Add cleanup after training"
This reverts commit 3ce2bfdf95.
2022-10-30 00:32:02 +07:00
Muhammad Rizqi Nur a27d19de2e Additional assert on dataset 2022-10-29 19:44:05 +07:00
Muhammad Rizqi Nur 3ce2bfdf95 Add cleanup after training 2022-10-29 19:43:21 +07:00
Muhammad Rizqi Nur ab27c111d0 Add input validations before loading dataset for training 2022-10-29 18:09:17 +07:00
Muhammad Rizqi Nur ef4c94e1cf Improve lr schedule error message 2022-10-29 15:42:51 +07:00
Muhammad Rizqi Nur a5f3adbdd7 Allow trailing comma in learning rate 2022-10-29 15:37:24 +07:00
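These two commits concern the learning-rate schedule string: a clearer error message and tolerance for a trailing comma. The sketch below illustrates that kind of parser under the assumption of a "rate:step, rate:step, ..." syntax; the exact syntax and names used by the web UI may differ.

```python
# Hedged sketch of an LR-schedule parser that tolerates a trailing comma and
# reports a readable error for malformed entries (assumed "rate:step" syntax).
def parse_lr_schedule(text):
    pairs = []
    chunks = [c.strip() for c in text.split(",")]
    if chunks and chunks[-1] == "":          # tolerate "0.005:100, 1e-3:1000,"
        chunks.pop()
    for chunk in chunks:
        try:
            if ":" in chunk:
                rate, step = chunk.split(":", 1)
                pairs.append((float(rate), int(step)))
            else:
                pairs.append((float(chunk), None))   # final rate runs to the end
        except ValueError:
            raise ValueError(f'Invalid learning rate schedule entry: "{chunk}" '
                             f'(expected "rate" or "rate:step")')
    return pairs

# parse_lr_schedule("0.005:100, 1e-3:1000, 1e-5,")
# -> [(0.005, 100), (0.001, 1000), (1e-05, None)]
```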
Muhammad Rizqi Nur 05e2e40537 Merge branch 'master' into gradient-clipping 2022-10-29 15:04:21 +07:00
AUTOMATIC1111 810e6a407d Merge pull request #3858 from R-N/log-csv
Fix log off by 1 #3847
2022-10-29 07:55:20 +03:00
Muhammad Rizqi Nur 9ceef81f77 Fix log off by 1 2022-10-28 20:48:08 +07:00
Muhammad Rizqi Nur 16451ca573 Learning rate sched syntax support for grad clipping 2022-10-28 17:16:23 +07:00
Muhammad Rizqi Nur 1618df41ba Gradient clipping for textual embedding 2022-10-28 10:31:27 +07:00
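These commits add gradient clipping to textual-inversion training, with the clip value driven by the same schedule syntax as the learning rate. A minimal sketch of the two standard clipping modes applied to a trainable embedding vector is shown below; variable names are illustrative, and the stand-in loss exists only to make the example runnable.

```python
import torch

# Minimal sketch: norm- or value-based gradient clipping on a single trainable
# embedding (illustrative shapes and hyperparameters).
embedding = torch.nn.Parameter(torch.randn(8, 768))
optimizer = torch.optim.AdamW([embedding], lr=5e-3, weight_decay=0.0)

def training_step(loss, clip_mode="norm", clip_value=1.0):
    optimizer.zero_grad()
    loss.backward()
    if clip_mode == "norm":
        torch.nn.utils.clip_grad_norm_([embedding], max_norm=clip_value)
    elif clip_mode == "value":
        torch.nn.utils.clip_grad_value_([embedding], clip_value=clip_value)
    optimizer.step()

# Stand-in loss so the example runs end to end.
training_step((embedding ** 2).mean(), clip_mode="value", clip_value=0.5)
```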
FlameLaw a0a7024c67 Fix random dataset shuffle on TI 2022-10-28 02:13:48 +09:00
DepFA 737eb28fac typo: cmd_opts.embedding_dir to cmd_opts.embeddings_dir 2022-10-26 17:38:08 +03:00
timntorres f4e1464217 Implement PR #3625 but for embeddings. 2022-10-26 10:14:35 +03:00
timntorres 4875a6c217 Implement PR #3309 but for embeddings. 2022-10-26 10:14:35 +03:00
timntorres c2dc9bfa89 Implement PR #3189 but for embeddings. 2022-10-26 10:14:35 +03:00
AUTOMATIC cbb857b675 enable creating embedding with --medvram 2022-10-26 09:44:02 +03:00
captin411 df0c5ea29d update default weights 2022-10-25 17:06:59 -07:00
captin411 54f0c14824 download better face detection module dynamically 2022-10-25 16:14:13 -07:00
captin411 db8ed5fe5c Focal crop UI elements 2022-10-25 15:22:29 -07:00
captin411 6629446a2f Merge branch 'master' into focal-point-cropping 2022-10-25 13:22:27 -07:00
captin411 3e6c2420c1 improve debug markers, fix algo weighting 2022-10-25 13:13:12 -07:00
Melan 18f86e41f6 Removed two unused imports 2022-10-24 17:21:18 +02:00
captin411 1be5933ba2 auto cropping now works with non square crops 2022-10-23 04:11:07 -07:00
AUTOMATIC f49c08ea56 prevent error spam when processing images without txt files for captions 2022-10-21 18:46:02 +03:00
AUTOMATIC1111 5e9afa5c8a Merge branch 'master' into fix/train-preprocess-keep-ratio 2022-10-21 18:36:29 +03:00
DepFA 306e2ff6ab Update image_embedding.py 2022-10-21 16:47:37 +03:00
DepFA d0ea471b0c Use opts in textual_inversion image_embedding.py for dynamic fonts 2022-10-21 16:47:37 +03:00
AUTOMATIC 7d6b388d71 Merge branch 'ae' 2022-10-21 13:35:01 +03:00
AUTOMATIC1111 0c5522ea21 Merge branch 'master' into training-help-text 2022-10-21 09:57:55 +03:00
guaneec b69c37d25e Allow datasets with only 1 image in TI 2022-10-21 09:54:09 +03:00
Melan 8f59129847 Some changes to the tensorboard code and hypernetwork support 2022-10-20 22:37:16 +02:00
Melan a6d593a6b5 Fixed a typo in a variable 2022-10-20 19:43:21 +02:00
Milly 85dd62c4c7 train: ui: added `Split image threshold` and `Split image overlap ratio` to preprocess 2022-10-20 23:35:01 +09:00
Milly 9681419e42 train: fixed preprocess image ratio 2022-10-20 23:32:41 +09:00
Melan 29e74d6e71 Add support for Tensorboard for training embeddings 2022-10-20 16:26:16 +02:00
captin411 0ddaf8d202 improve face detection a lot 2022-10-20 00:34:55 -07:00
DepFA 858462f719 do caption copy for both flips 2022-10-20 02:57:18 +01:00
captin411 59ed744383 face detection algo, configurability, reusability
Try to move the crop in the direction of a face if it is present

More internal configuration options for choosing weights of each of the algorithm's findings

Move logic into its module
2022-10-19 17:19:02 -07:00
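The commit above pulls the crop toward a detected face. A small, hypothetical sketch of that idea using OpenCV's bundled Haar cascade is shown below; it only illustrates the re-centering step and is not the preprocessing script's implementation (it also assumes the image is at least `crop_size` in each dimension).

```python
import cv2

# Illustrative "move the crop toward a face" step: detect faces and re-center
# a square crop on the first detection, falling back to a plain center crop.
def face_centered_crop(image_bgr, crop_size):
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

    h, w = image_bgr.shape[:2]
    cx, cy = w // 2, h // 2                      # default: center crop
    if len(faces) > 0:
        x, y, fw, fh = faces[0]
        cx, cy = x + fw // 2, y + fh // 2        # pull the crop toward the face

    left = min(max(cx - crop_size // 2, 0), w - crop_size)
    top = min(max(cy - crop_size // 2, 0), h - crop_size)
    return image_bgr[top:top + crop_size, left:left + crop_size]
```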
DepFA 9b65c4ecf4 pass preprocess_txt_action param 2022-10-20 00:49:23 +01:00
DepFA fbcce66601 add existing caption file handling 2022-10-20 00:46:54 +01:00
DepFA c3835ec85c pass overwrite old flag 2022-10-20 00:24:24 +01:00
DepFA 0087079c2d allow overwrite old embedding 2022-10-20 00:10:59 +01:00
captin411 41e3877be2 fix entropy point calculation 2022-10-19 13:44:59 -07:00
captin411 abeec4b630 Add auto focal point cropping to Preprocess images
This algorithm plots a bunch of points of interest on the source
image and averages their locations to find a center.

Most points come from OpenCV.  One point comes from an
entropy model. OpenCV points account for 50% of the weight and the
entropy based point is the other 50%.

The center of all weighted points is calculated and a bounding box
is drawn as close to centered over that point as possible.
2022-10-19 03:18:26 -07:00
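The commit message above describes the focal-point algorithm: OpenCV interest points carry half the weight, a single entropy-based point carries the other half, and the crop is centered on the weighted mean. The sketch below illustrates that weighting scheme; the specific detectors, tile grid, and weights are assumptions for illustration, not the script's actual values.

```python
import cv2
import numpy as np

# Illustrative focal point: corner detections (50% weight) averaged with one
# entropy-based point (50% weight); the crop box would be centered on the result.
def focal_point(image_bgr):
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)

    corners = cv2.goodFeaturesToTrack(gray, maxCorners=20, qualityLevel=0.04, minDistance=7)
    corner_pts = corners.reshape(-1, 2) if corners is not None else np.empty((0, 2))

    # One "entropy point": the center of the 4x4 grid tile with the highest
    # histogram entropy (a stand-in for the entropy model mentioned above).
    h, w = gray.shape
    th, tw = h // 4, w // 4
    best, entropy_pt = -1.0, np.array([w / 2, h / 2])
    for ty in range(0, 4 * th, th):
        for tx in range(0, 4 * tw, tw):
            tile = gray[ty:ty + th, tx:tx + tw]
            hist = np.bincount(tile.ravel(), minlength=256) / tile.size
            ent = -np.sum(hist[hist > 0] * np.log2(hist[hist > 0]))
            if ent > best:
                best, entropy_pt = ent, np.array([tx + tw / 2, ty + th / 2])

    if len(corner_pts) == 0:
        return entropy_pt
    return 0.5 * corner_pts.mean(axis=0) + 0.5 * entropy_pt
```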
MalumaDev 1997ccff13 Merge branch 'master' into test_resolve_conflicts 2022-10-18 08:55:08 +02:00
DepFA 62edfae257 print list of embeddings on reload 2022-10-17 08:42:17 +03:00
MalumaDev ae0fdad64a Merge branch 'master' into test_resolve_conflicts 2022-10-16 17:55:58 +02:00
MalumaDev 9324cdaa31 ui fix, reorganization of the code 2022-10-16 17:53:56 +02:00
AUTOMATIC 0c5fa9a681 do not reload embeddings from disk when doing textual inversion 2022-10-16 09:09:04 +03:00
MalumaDev 97ceaa23d0 Merge branch 'master' into test_resolve_conflicts 2022-10-16 00:06:36 +02:00
DepFA b6e3b96dab Change vector size footer label 2022-10-15 17:23:39 +03:00
DepFA ddf6899df0 generalise to popular lossless formats 2022-10-15 17:23:39 +03:00
DepFA 9a1dcd78ed add webp for embed load 2022-10-15 17:23:39 +03:00
DepFA 939f16529a only save 1 image per embedding 2022-10-15 17:23:39 +03:00
DepFA 9e846083b7 add vector size to embed text 2022-10-15 17:23:39 +03:00
MalumaDev 7b7561f6e4 Merge branch 'master' into test_resolve_conflicts 2022-10-15 16:20:17 +02:00
AUTOMATIC1111 ea8aa1701a Merge branch 'master' into master 2022-10-15 10:13:16 +03:00
AUTOMATIC c7a86f7fe9 add option to use batch size for training 2022-10-15 09:24:59 +03:00
Melan 4d19f3b7d4 Raise an assertion error if no training images have been found. 2022-10-14 22:45:26 +02:00
AUTOMATIC 03d62538ae remove duplicate code for log loss, add step, make it read from options rather than gradio input 2022-10-14 22:43:55 +03:00
AUTOMATIC 326fe7d44b Merge remote-tracking branch 'Melanpan/master' 2022-10-14 22:14:50 +03:00
AUTOMATIC c344ba3b32 add option to read generation params for learning previews from txt2img 2022-10-14 20:31:49 +03:00
MalumaDev bb57f30c2d init 2022-10-14 10:56:41 +02:00
Melan 8636b50aea Add learn_rate to csv and removed a left-over debug statement 2022-10-13 12:37:58 +02:00
Melan 1cfc2a1898 Save a csv containing the loss while training 2022-10-12 23:36:29 +02:00
Greg Fuller f776254b12 [2/?] [wip] ignore OPT_INCLUDE_RANKS for training filenames 2022-10-12 13:12:18 -07:00
AUTOMATIC 698d303b04 deepbooru: added option to use spaces or underscores
deepbooru: added option to quote (\) in tags
deepbooru/BLIP: write caption to file instead of image filename
deepbooru/BLIP: now possible to use both for captions
deepbooru: process is stopped even if an exception occurs
2022-10-12 21:55:43 +03:00
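The commit above adds caption-formatting options for deepbooru/BLIP: spaces versus underscores in tags, escaping of brackets, and writing the caption to a sidecar file instead of the image filename. A small hypothetical sketch of that post-processing follows; function names and defaults are illustrative.

```python
import re

# Illustrative tag post-processing: swap underscores for spaces, escape
# parentheses (which carry attention weight in webui prompts), and write the
# caption to a .txt file next to the image instead of into its filename.
def format_tags(tags, use_spaces=True, escape_brackets=True):
    out = []
    for tag in tags:
        if use_spaces:
            tag = tag.replace("_", " ")
        if escape_brackets:
            tag = re.sub(r"([()])", r"\\\1", tag)
        out.append(tag)
    return ", ".join(out)

def write_caption(image_path, caption):
    with open(image_path.rsplit(".", 1)[0] + ".txt", "w", encoding="utf8") as f:
        f.write(caption)

# format_tags(["long_hair", "smile_(happy)"]) -> 'long hair, smile \\(happy\\)'
```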
AUTOMATIC c3c8eef9fd train: change filename processing to be more simple and configurable
train: make it possible to make text files with prompts
train: rework scheduler so that there's less repeating code in textual inversion and hypernets
train: move epochs setting to options
2022-10-12 20:49:47 +03:00
AUTOMATIC1111 cc5803603b Merge pull request #2037 from AUTOMATIC1111/embed-embeddings-in-images
Add option to store TI embeddings in png chunks, and load from same.
2022-10-12 15:59:24 +03:00
DepFA 10a2de644f formatting 2022-10-12 13:15:35 +01:00
DepFA 50be33e953 formatting 2022-10-12 13:13:25 +01:00
JC_Array f53f703aeb resolved conflicts, moved settings under interrogate section, settings only show if deepbooru flag is enabled 2022-10-11 18:12:12 -05:00
JC-Array 963d986396 Merge branch 'AUTOMATIC1111:master' into deepdanbooru_pre_process 2022-10-11 17:33:15 -05:00
AUTOMATIC 6be32b31d1 reports that training with medvram is possible. 2022-10-11 23:07:09 +03:00
DepFA 66ec505975 add file based test 2022-10-11 20:21:30 +01:00
DepFA 7e6a6e00ad Add files via upload 2022-10-11 20:20:46 +01:00
DepFA 5f3317376b spacing 2022-10-11 20:09:49 +01:00
DepFA 91d7ee0d09 update imports 2022-10-11 20:09:10 +01:00
DepFA aa75d5cfe8 correct conflict resolution typo 2022-10-11 20:06:13 +01:00
AUTOMATIC d6fcc6b87b apply lr schedule to hypernets 2022-10-11 22:03:05 +03:00
DepFA db71290d26 remove old caption method 2022-10-11 19:55:54 +01:00
DepFA 61788c0538 shift embedding logic out of textual_inversion 2022-10-11 19:50:50 +01:00
AUTOMATIC1111 419e539fe3 Merge branch 'learning_rate-scheduling' into learnschedule 2022-10-11 21:50:19 +03:00
DepFA c080f52cea move embedding logic to separate file 2022-10-11 19:37:58 +01:00
AUTOMATIC d4ea5f4d86 add an option to unload models during hypernetwork training to save VRAM 2022-10-11 19:03:08 +03:00
AUTOMATIC 6d09b8d1df produce error when training with medvram/lowvram enabled 2022-10-11 18:33:57 +03:00
AUTOMATIC1111 4f96ffd0b5 Merge pull request #2201 from alg-wiki/textual__inversion
Textual Inversion: Preprocess and Training will only pick-up image files instead
2022-10-11 17:25:36 +03:00
DepFA 1eaad95533 Merge branch 'master' into embed-embeddings-in-images 2022-10-11 15:15:09 +01:00
AUTOMATIC 530103b586 fixes related to merge 2022-10-11 14:53:02 +03:00
alg-wiki 8bacbca0a1 Removed my local edits to checkpoint image generation 2022-10-11 17:35:09 +09:00
alg-wiki b2368a3bce Switched to exception handling 2022-10-11 17:32:46 +09:00
AUTOMATIC 5de806184f Merge branch 'master' into hypernetwork-training 2022-10-11 11:14:36 +03:00
DepFA 7aa8fcac1e use simple lcg in xor 2022-10-11 04:17:36 +01:00
JC_Array bb932dbf9f added alpha sort and threshold variables to create process method in preprocessing 2022-10-10 18:37:52 -05:00
JC-Array 47f5e216da Merge branch 'deepdanbooru_pre_process' into master 2022-10-10 18:10:49 -05:00
DepFA e0fbe6d27e colour depth conversion fix 2022-10-10 23:26:24 +01:00
DepFA 767202a4c3 add dependency 2022-10-10 23:20:52 +01:00
DepFA 315d5a8ed9 update data display style 2022-10-10 23:14:44 +01:00
alg-wiki 907a88b2d0 Added .webp .bmp 2022-10-11 06:35:07 +09:00
Fampai 2536ecbb17 Refactored learning rate code 2022-10-10 17:10:29 -04:00
alg-wiki f0ab972f85 Merge branch 'master' into textual__inversion 2022-10-11 03:35:28 +08:00
alg-wiki bc3e183b73 Textual Inversion: Preprocess and Training will only pick-up image files 2022-10-11 04:30:13 +09:00
DepFA df6d0d9286 convert back to rgb as some hosts add alpha 2022-10-10 15:43:09 +01:00