AUTOMATIC
79e39fae61
CLIP hijack rework
2023-01-07 01:46:13 +03:00
AUTOMATIC
683287d87f
rework saving training params to file #6372
2023-01-06 08:52:06 +03:00
AUTOMATIC1111
88e01b237e
Merge pull request #6372 from timntorres/save-ti-hypernet-settings-to-txt-revised
...
Save hypernet and textual inversion settings to text file, revised.
2023-01-06 07:59:44 +03:00
Faber
81133d4168
allow loading embeddings from subdirectories
2023-01-06 03:38:37 +07:00
Kuma
fda04e620d
typo in TI
2023-01-05 18:44:19 +01:00
timntorres
b6bab2f052
Include model in log file. Exclude directory.
2023-01-05 09:14:56 -08:00
timntorres
b85c2b5cf4
Clean up ti, add same behavior to hypernetwork.
2023-01-05 08:14:38 -08:00
timntorres
eea8fc40e1
Add option to save ti settings to file.
2023-01-05 07:24:22 -08:00
AUTOMATIC1111
eeb1de4388
Merge branch 'master' into gradient-clipping
2023-01-04 19:56:35 +03:00
AUTOMATIC
525cea9245
use shared function from processing for creating dummy mask when training inpainting model
2023-01-04 17:58:07 +03:00
AUTOMATIC
184e670126
fix the merge
2023-01-04 17:45:01 +03:00
AUTOMATIC1111
da5c1e8a73
Merge branch 'master' into inpaint_textual_inversion
2023-01-04 17:40:19 +03:00
AUTOMATIC1111
7bbd984dda
Merge pull request #6253 from Shondoit/ti-optim
...
Save Optimizer next to TI embedding
2023-01-04 14:09:13 +03:00
Vladimir Mandic
192ddc04d6
add job info to modules
2023-01-03 10:34:51 -05:00
Shondoit
bddebe09ed
Save Optimizer next to TI embedding
...
Also add check to load only .PT and .BIN files as embeddings. (since we add .optim files in the same directory)
2023-01-03 13:30:24 +01:00
Philpax
c65909ad16
feat(api): return more data for embeddings
2023-01-02 12:21:48 +11:00
AUTOMATIC
311354c0bb
fix the issue with training on SD2.0
2023-01-02 00:38:09 +03:00
AUTOMATIC
bdbe09827b
changed embedding accepted shape detection to use existing code and support the new alt-diffusion model, and reformatted messages a bit #6149
2022-12-31 22:49:09 +03:00
Vladimir Mandic
f55ac33d44
validate textual inversion embeddings
2022-12-31 11:27:02 -05:00
Yuval Aboulafia
3bf5591efe
fix F541 f-string without any placeholders
2022-12-24 21:35:29 +02:00
Jim Hays
c0355caefe
Fix various typos
2022-12-14 21:01:32 -05:00
AUTOMATIC1111
c9a2cfdf2a
Merge branch 'master' into racecond_fix
2022-12-03 10:19:51 +03:00
brkirch
4d5f1691dd
Use devices.autocast instead of torch.autocast
2022-11-30 10:33:42 -05:00
AUTOMATIC
b48b7999c8
Merge remote-tracking branch 'flamelaw/master'
2022-11-27 12:19:59 +03:00
flamelaw
755df94b2a
set TI AdamW default weight decay to 0
2022-11-27 00:35:44 +09:00
AUTOMATIC
ce6911158b
Add support for Stable Diffusion 2.0
2022-11-26 16:10:46 +03:00
flamelaw
89d8ecff09
small fixes
2022-11-23 02:49:01 +09:00
flamelaw
5b57f61ba4
fix pin_memory with different latent sampling method
2022-11-21 10:15:46 +09:00
flamelaw
bd68e35de3
Gradient accumulation, autocast fix, new latent sampling method, etc
2022-11-20 12:35:26 +09:00
AUTOMATIC
cdc8020d13
change StableDiffusionProcessing to internally use sampler name instead of sampler index
2022-11-19 12:01:51 +03:00
Muhammad Rizqi Nur
bb832d7725
Simplify grad clip
2022-11-05 11:48:38 +07:00
Fampai
39541d7725
Fixes race condition in training when VAE is unloaded
...
set_current_image can attempt to use the VAE when it is unloaded to
the CPU while training
2022-11-04 04:50:22 -04:00
Muhammad Rizqi Nur
237e79c77d
Merge branch 'master' into gradient-clipping
2022-11-02 20:48:58 +07:00
Nerogar
cffc240a73
fixed textual inversion training with inpainting models
2022-11-01 21:02:07 +01:00
Fampai
890e68aaf7
Fixed minor bug
...
when unloading vae during TI training, generating images after
training will error out
2022-10-31 10:07:12 -04:00
Fampai
3b0127e698
Merge branch 'master' of https://github.com/AUTOMATIC1111/stable-diffusion-webui into TI_optimizations
2022-10-31 09:54:51 -04:00
Fampai
006756f9cd
Added TI training optimizations
...
option to use xattention optimizations when training
option to unload vae when training
2022-10-31 07:26:08 -04:00
Muhammad Rizqi Nur
cd4d59c0de
Merge master
2022-10-30 18:57:51 +07:00
Muhammad Rizqi Nur
3d58510f21
Fix dataset still being loaded even when training will be skipped
2022-10-30 00:54:59 +07:00
Muhammad Rizqi Nur
a07f054c86
Add missing info on hypernetwork/embedding model log
...
Mentioned here: https://github.com/AUTOMATIC1111/stable-diffusion-webui/discussions/1528#discussioncomment-3991513
Also group the saving into one
2022-10-30 00:49:29 +07:00
Muhammad Rizqi Nur
ab05a74ead
Revert "Add cleanup after training"
...
This reverts commit 3ce2bfdf95.
2022-10-30 00:32:02 +07:00
Muhammad Rizqi Nur
3ce2bfdf95
Add cleanup after training
2022-10-29 19:43:21 +07:00
Muhammad Rizqi Nur
ab27c111d0
Add input validations before loading dataset for training
2022-10-29 18:09:17 +07:00
Muhammad Rizqi Nur
05e2e40537
Merge branch 'master' into gradient-clipping
2022-10-29 15:04:21 +07:00
Muhammad Rizqi Nur
9ceef81f77
Fix log off by 1
2022-10-28 20:48:08 +07:00
Muhammad Rizqi Nur
16451ca573
Learning rate sched syntax support for grad clipping
2022-10-28 17:16:23 +07:00
Muhammad Rizqi Nur
1618df41ba
Gradient clipping for textual embedding
2022-10-28 10:31:27 +07:00
DepFA
737eb28fac
typo: cmd_opts.embedding_dir to cmd_opts.embeddings_dir
2022-10-26 17:38:08 +03:00
timntorres
f4e1464217
Implement PR #3625 but for embeddings.
2022-10-26 10:14:35 +03:00
timntorres
4875a6c217
Implement PR #3309 but for embeddings.
2022-10-26 10:14:35 +03:00
timntorres
c2dc9bfa89
Implement PR #3189 but for embeddings.
2022-10-26 10:14:35 +03:00
AUTOMATIC
cbb857b675
enable creating embedding with --medvram
2022-10-26 09:44:02 +03:00
Melan
18f86e41f6
Removed two unused imports
2022-10-24 17:21:18 +02:00
AUTOMATIC
7d6b388d71
Merge branch 'ae'
2022-10-21 13:35:01 +03:00
Melan
8f59129847
Some changes to the tensorboard code and hypernetwork support
2022-10-20 22:37:16 +02:00
Melan
a6d593a6b5
Fixed a typo in a variable
2022-10-20 19:43:21 +02:00
Melan
29e74d6e71
Add support for Tensorboard for training embeddings
2022-10-20 16:26:16 +02:00
DepFA
0087079c2d
allow overwrite old embedding
2022-10-20 00:10:59 +01:00
MalumaDev
1997ccff13
Merge branch 'master' into test_resolve_conflicts
2022-10-18 08:55:08 +02:00
DepFA
62edfae257
print list of embeddings on reload
2022-10-17 08:42:17 +03:00
MalumaDev
ae0fdad64a
Merge branch 'master' into test_resolve_conflicts
2022-10-16 17:55:58 +02:00
AUTOMATIC
0c5fa9a681
do not reload embeddings from disk when doing textual inversion
2022-10-16 09:09:04 +03:00
MalumaDev
97ceaa23d0
Merge branch 'master' into test_resolve_conflicts
2022-10-16 00:06:36 +02:00
DepFA
b6e3b96dab
Change vector size footer label
2022-10-15 17:23:39 +03:00
DepFA
ddf6899df0
generalise to popular lossless formats
2022-10-15 17:23:39 +03:00
DepFA
9a1dcd78ed
add webp for embed load
2022-10-15 17:23:39 +03:00
DepFA
939f16529a
only save 1 image per embedding
2022-10-15 17:23:39 +03:00
DepFA
9e846083b7
add vector size to embed text
2022-10-15 17:23:39 +03:00
MalumaDev
7b7561f6e4
Merge branch 'master' into test_resolve_conflicts
2022-10-15 16:20:17 +02:00
AUTOMATIC
c7a86f7fe9
add option to use batch size for training
2022-10-15 09:24:59 +03:00
AUTOMATIC
03d62538ae
remove duplicate code for log loss, add step, make it read from options rather than gradio input
2022-10-14 22:43:55 +03:00
AUTOMATIC
326fe7d44b
Merge remote-tracking branch 'Melanpan/master'
2022-10-14 22:14:50 +03:00
AUTOMATIC
c344ba3b32
add option to read generation params for learning previews from txt2img
2022-10-14 20:31:49 +03:00
MalumaDev
bb57f30c2d
init
2022-10-14 10:56:41 +02:00
Melan
8636b50aea
Add learn_rate to csv and removed a left-over debug statement
2022-10-13 12:37:58 +02:00
Melan
1cfc2a1898
Save a csv containing the loss while training
2022-10-12 23:36:29 +02:00
AUTOMATIC
c3c8eef9fd
train: change filename processing to be more simple and configurable
...
train: make it possible to make text files with prompts
train: rework scheduler so that there's less repeating code in textual inversion and hypernets
train: move epochs setting to options
2022-10-12 20:49:47 +03:00
AUTOMATIC1111
cc5803603b
Merge pull request #2037 from AUTOMATIC1111/embed-embeddings-in-images
...
Add option to store TI embeddings in png chunks, and load from same.
2022-10-12 15:59:24 +03:00
DepFA
10a2de644f
formatting
2022-10-12 13:15:35 +01:00
DepFA
5f3317376b
spacing
2022-10-11 20:09:49 +01:00
DepFA
91d7ee0d09
update imports
2022-10-11 20:09:10 +01:00
DepFA
aa75d5cfe8
correct conflict resolution typo
2022-10-11 20:06:13 +01:00
AUTOMATIC
d6fcc6b87b
apply lr schedule to hypernets
2022-10-11 22:03:05 +03:00
DepFA
61788c0538
shift embedding logic out of textual_inversion
2022-10-11 19:50:50 +01:00
AUTOMATIC1111
419e539fe3
Merge branch 'learning_rate-scheduling' into learnschedule
2022-10-11 21:50:19 +03:00
AUTOMATIC
d4ea5f4d86
add an option to unload models during hypernetwork training to save VRAM
2022-10-11 19:03:08 +03:00
AUTOMATIC1111
4f96ffd0b5
Merge pull request #2201 from alg-wiki/textual__inversion
...
Textual Inversion: Preprocess and Training will only pick up image files instead
2022-10-11 17:25:36 +03:00
DepFA
1eaad95533
Merge branch 'master' into embed-embeddings-in-images
2022-10-11 15:15:09 +01:00
AUTOMATIC
530103b586
fixes related to merge
2022-10-11 14:53:02 +03:00
alg-wiki
8bacbca0a1
Removed my local edits to checkpoint image generation
2022-10-11 17:35:09 +09:00
alg-wiki
b2368a3bce
Switched to exception handling
2022-10-11 17:32:46 +09:00
DepFA
7aa8fcac1e
use simple lcg in xor
2022-10-11 04:17:36 +01:00
DepFA
e0fbe6d27e
colour depth conversion fix
2022-10-10 23:26:24 +01:00
DepFA
767202a4c3
add dependency
2022-10-10 23:20:52 +01:00
DepFA
315d5a8ed9
update data display style
2022-10-10 23:14:44 +01:00
alg-wiki
907a88b2d0
Added .webp .bmp
2022-10-11 06:35:07 +09:00
Fampai
2536ecbb17
Refactored learning rate code
2022-10-10 17:10:29 -04:00
alg-wiki
f0ab972f85
Merge branch 'master' into textual__inversion
2022-10-11 03:35:28 +08:00
alg-wiki
bc3e183b73
Textual Inversion: Preprocess and Training will only pick up image files
2022-10-11 04:30:13 +09:00
DepFA
df6d0d9286
convert back to rgb as some hosts add alpha
2022-10-10 15:43:09 +01:00