Commit Graph

815 Commits

Author SHA1 Message Date
C43H66N12O12S2 623251ce2b allow pascal onwards 2022-10-10 19:54:07 +03:00
Vladimir Repin 9d33baba58 Always show previous mask and fix extras_send dest 2022-10-10 19:39:24 +03:00
hentailord85ez d5c14365fd Add back in output hidden states parameter 2022-10-10 18:54:48 +03:00
hentailord85ez 460bbae587 Pad beginning of textual inversion embedding 2022-10-10 18:54:48 +03:00
hentailord85ez b340439586 Unlimited Token Works
Unlimited tokens actually work now. Works with textual inversion too. Replaces the previous not-so-much-working implementation.
2022-10-10 18:54:48 +03:00
RW21 f347ddfd80 Remove max_batch_count from ui.py 2022-10-10 18:53:40 +03:00
DepFA df6d0d9286 convert back to rgb as some hosts add alpha 2022-10-10 15:43:09 +01:00
DepFA 707a431100 add pixel data footer 2022-10-10 15:34:49 +01:00
DepFA ce2d7f7eac Merge branch 'master' into embed-embeddings-in-images 2022-10-10 15:13:48 +01:00
alg-wiki 7a20f914ed Custom Width and Height 2022-10-10 17:05:12 +03:00
alg-wiki 6ad3a53e36 Fixed progress bar output for epoch 2022-10-10 17:05:12 +03:00
alg-wiki ea00c1624b Textual Inversion: Added custom training image size and number of repeats per input image in a single epoch 2022-10-10 17:05:12 +03:00
AUTOMATIC 8f1efdc130 --no-half-vae pt2 2022-10-10 17:03:45 +03:00
alg-wiki 04c745ea4f Custom Width and Height 2022-10-10 22:35:35 +09:00
AUTOMATIC 7349088d32 --no-half-vae 2022-10-10 16:16:29 +03:00
JC_Array 2f94331df2 removed change in last commit; simplified to adding the visible argument to process_caption_deepbooru, set to False if the deepdanbooru argument is not set 2022-10-10 03:34:00 -05:00
alg-wiki 4ee7519fc2 Fixed progress bar output for epoch 2022-10-10 17:31:33 +09:00
JC_Array 8ec069e64d removed duplicate run_preprocess.click by creating run_preprocess_inputs list and appending deepbooru variable to input list if in scope 2022-10-10 03:23:24 -05:00
alg-wiki 3110f895b2 Textual Inversion: Added custom training image size and number of repeats per input image in a single epoch 2022-10-10 17:07:46 +09:00
brkirch 8acc901ba3 Newer versions of PyTorch use TypedStorage instead
Pytorch 1.13 and later will rename _TypedStorage to TypedStorage, so check for TypedStorage and use _TypedStorage if it is not available. Currently this is needed so that nightly builds of PyTorch work correctly.
2022-10-10 08:04:52 +03:00
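The commit body above describes a rename-compatibility check: prefer the public TypedStorage name that PyTorch 1.13 introduces, and fall back to the older private _TypedStorage when it is absent. A minimal sketch of that pattern, not the repository's actual code (`resolve_typed_storage` and the stand-in for `torch.storage` are illustrative):

```python
def resolve_typed_storage(storage_module):
    # Prefer the new public name (PyTorch 1.13+); fall back to the
    # pre-1.13 private name when the public one is not available.
    typed = getattr(storage_module, "TypedStorage", None)
    if typed is None:
        typed = storage_module._TypedStorage
    return typed
```

In the real code `storage_module` would be `torch.storage`; resolving the class once this way keeps the rest of the pickle-loading code indifferent to the PyTorch version.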
JC_Array 1f92336be7 refactored the deepbooru module to improve speed when running multiple interrogations in a row. Added the option to generate deepbooru tags for textual inversion preprocessing. 2022-10-09 23:58:18 -05:00
ssysm 6fdad291bd Merge branch 'master' of https://github.com/AUTOMATIC1111/stable-diffusion-webui into upstream-master 2022-10-09 23:20:39 -04:00
ssysm cc92dc1f8d add vae path args 2022-10-09 23:17:29 -04:00
Justin Maier 6435691bb1 Add "Scale to" option to Extras 2022-10-09 19:26:52 -06:00
DepFA 4117afff11 Merge branch 'master' into embed-embeddings-in-images 2022-10-10 00:38:54 +01:00
DepFA e2c2925eb4 remove braces from steps 2022-10-10 00:12:53 +01:00
DepFA d6a599ef9b change caption method 2022-10-10 00:07:52 +01:00
DepFA 0ac3a07eec add caption image with overlay 2022-10-10 00:05:36 +01:00
DepFA 01fd9cf0d2 change source of step count 2022-10-09 22:17:02 +01:00
DepFA 96f1e6be59 source checkpoint hash from current checkpoint 2022-10-09 22:14:50 +01:00
DepFA 6684610510 correct case on embeddingFromB64 2022-10-09 22:06:42 +01:00
DepFA d0184b8f76 change json tensor key name 2022-10-09 22:06:12 +01:00
DepFA 5d12ec82d3 add encoder and decoder classes 2022-10-09 22:05:09 +01:00
DepFA 969bd8256e add alternate checkpoint hash source 2022-10-09 22:02:28 +01:00
DepFA 03694e1f99 add embedding load and save from b64 json 2022-10-09 21:58:14 +01:00
AUTOMATIC a65476718f add DoubleStorage to list of allowed classes for pickle 2022-10-09 23:38:49 +03:00
DepFA fa0c5eb81b Add pretty image captioning functions 2022-10-09 20:41:22 +01:00
AUTOMATIC 8d340cfb88 do not add clip skip to parameters if it's 1 or 0 2022-10-09 22:31:35 +03:00
Fampai 1824e9ee3a Removed unnecessary tmp variable 2022-10-09 22:31:23 +03:00
Fampai ad3ae44108 Updated code for legibility 2022-10-09 22:31:23 +03:00
Fampai ec2bd9be75 Fix issues with CLIP ignore option name change 2022-10-09 22:31:23 +03:00
Fampai a14f7bf113 Corrected CLIP Layer Ignore description and updated its range to the max possible 2022-10-09 22:31:23 +03:00
Fampai e59c66c008 Optimized code for Ignoring last CLIP layers 2022-10-09 22:31:23 +03:00
AUTOMATIC 6c383d2e82 show model selection setting on top of page 2022-10-09 22:24:07 +03:00
Artem Zagidulin 9ecea0a8d6 fix missing png info when Extras Batch Process 2022-10-09 18:35:25 +03:00
AUTOMATIC 875ddfeecf added guard for torch.load to prevent loading pickles with unknown content 2022-10-09 17:58:43 +03:00
AUTOMATIC 9d1138e294 fix typo in filename for ESRGAN arch 2022-10-09 15:08:27 +03:00
AUTOMATIC e6e8cabe0c change up #2056 to make it work how i want it to plus make xy plot write correct values to images 2022-10-09 14:57:48 +03:00
William Moorehouse 594cbfd8fb Sanitize infotext output (for now) 2022-10-09 14:49:15 +03:00
William Moorehouse 006791c13d Fix grabbing the model name for infotext 2022-10-09 14:49:15 +03:00
William Moorehouse d6d10a37bf Added extended model details to infotext 2022-10-09 14:49:15 +03:00
AUTOMATIC 542a3d3a4a fix btoken hypernetworks in XY plot 2022-10-09 14:33:22 +03:00
AUTOMATIC 77a719648d fix logic error in #1832 2022-10-09 13:48:04 +03:00
AUTOMATIC f4578b343d fix model switching not working properly if there is a different yaml config 2022-10-09 13:23:30 +03:00
AUTOMATIC bd833409ac additional changes for saving pnginfo for #1803 2022-10-09 13:10:15 +03:00
Milly 0609ce06c0 Removed duplicate definition model_path 2022-10-09 12:46:07 +03:00
AUTOMATIC 6f6798ddab prevent a possible code execution error (thanks, RyotaK) 2022-10-09 12:33:37 +03:00
AUTOMATIC 0241d811d2 Revert "Fix for Prompts_from_file showing extra textbox."
This reverts commit e2930f9821.
2022-10-09 12:04:44 +03:00
AUTOMATIC ab4fe4f44c hide filenames for save button by default 2022-10-09 11:59:41 +03:00
Tony Beeman cbf6dad02d Handle case where on_show returns the wrong number of arguments 2022-10-09 11:16:38 +03:00
Tony Beeman 86cb16886f Pull Request Code Review Fixes 2022-10-09 11:16:38 +03:00
Tony Beeman e2930f9821 Fix for Prompts_from_file showing extra textbox. 2022-10-09 11:16:38 +03:00
Nicolas Noullet 1ffeb42d38 Fix typo 2022-10-09 11:10:13 +03:00
frostydad ef93acdc73 remove line break 2022-10-09 11:09:17 +03:00
frostydad 03e570886f Fix incorrect sampler name in output 2022-10-09 11:09:17 +03:00
Fampai 122d42687b Fix VRAM Issue by only loading in hypernetwork when selected in settings 2022-10-09 11:08:11 +03:00
AUTOMATIC1111 e00b4df7c6 Merge pull request #1752 from Greendayle/dev/deepdanbooru
Added DeepDanbooru interrogator
2022-10-09 10:52:21 +03:00
aoirusann 14192c5b20 Support `Download` for txt files. 2022-10-09 10:49:11 +03:00
aoirusann 5ab7e88d9b Add `Download` & `Download as zip` 2022-10-09 10:49:11 +03:00
AUTOMATIC 4e569fd888 fixed incorrect message about loading config; thanks anon! 2022-10-09 10:31:47 +03:00
AUTOMATIC c77c89cc83 make main model loading and model merger use the same code 2022-10-09 10:23:31 +03:00
DepFA cd8673bd9b add embed embedding to ui 2022-10-09 05:40:57 +01:00
DepFA 5841990b0d Update textual_inversion.py 2022-10-09 05:38:38 +01:00
AUTOMATIC 050a6a798c support loading .yaml config with same name as model
support EMA weights in processing (????)
2022-10-08 23:26:48 +03:00
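The commit above adds support for loading a .yaml config that shares its name with the model checkpoint. A hedged sketch of that lookup rule (the function name and the default config path are illustrative, not the repository's actual API):

```python
import os

def config_for_checkpoint(ckpt_path, default_config="configs/v1-inference.yaml"):
    # If foo.ckpt has a sibling foo.yaml, prefer it over the default config.
    candidate = os.path.splitext(ckpt_path)[0] + ".yaml"
    return candidate if os.path.isfile(candidate) else default_config
```

This lets a user drop a custom config next to a checkpoint without any extra UI or command-line plumbing.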
Aidan Holland 432782163a chore: Fix typos 2022-10-08 22:42:30 +03:00
Edouard Leurent 610a7f4e14 Break after finding the local directory of stable diffusion
Otherwise, we may override it with one of the next two paths (. or ..) if it is present there, and then the local paths of other modules (taming transformers, codeformers, etc.) won't be found in sd_path/../.

Fix https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/1085
2022-10-08 22:35:04 +03:00
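The fix described above is a first-match-wins search: stop at the first candidate directory that looks like the stable-diffusion repo so later candidates such as . or .. cannot override an earlier match. A minimal sketch under assumed names (the function and the marker directory are illustrative, not the repository's actual code):

```python
import os

def find_sd_path(candidates, marker="ldm"):
    # Return the first candidate containing the marker directory and stop
    # searching, so '.' or '..' later in the list cannot override it.
    for parent in candidates:
        if os.path.exists(os.path.join(parent, marker)):
            return parent  # break on first hit
    return None
```

Without the early return, a loop that keeps assigning on every match would end up with the last matching candidate instead of the intended one.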
AUTOMATIC 3b2141c5fb add 'Ignore last layers of CLIP model' option as a parameter to the infotext 2022-10-08 22:21:15 +03:00
AUTOMATIC e6e42f98df make --force-enable-xformers work without needing --xformers 2022-10-08 22:12:23 +03:00
Fampai 1371d7608b Added ability to ignore last n layers in FrozenCLIPEmbedder 2022-10-08 22:10:37 +03:00
DepFA b458fa48fe Update ui.py 2022-10-08 20:38:35 +03:00
DepFA 15c4278f1a TI preprocess wording
I had to check the code to work out what splitting was 🤷🏿
2022-10-08 20:38:35 +03:00
Greendayle 0ec80f0125 Merge branch 'master' into dev/deepdanbooru 2022-10-08 18:28:22 +02:00
AUTOMATIC 3061cdb7b6 add --force-enable-xformers option and also add messages to console regarding cross attention optimizations 2022-10-08 19:22:15 +03:00
AUTOMATIC f9c5da1592 add fallback for xformers_attnblock_forward 2022-10-08 19:05:19 +03:00
Greendayle 01f8cb4447 made deepdanbooru optional, added to readme, automatic download of deepbooru model 2022-10-08 18:02:56 +02:00
Artem Zagidulin a5550f0213 alternate prompt 2022-10-08 18:12:19 +03:00
C43H66N12O12S2 cc0258aea7 check for ampere without destroying the optimizations. again. 2022-10-08 17:54:16 +03:00
C43H66N12O12S2 017b6b8744 check for ampere 2022-10-08 17:54:16 +03:00
Greendayle 5329d0aba0 Merge branch 'master' into dev/deepdanbooru 2022-10-08 16:30:28 +02:00
AUTOMATIC cfc33f99d4 why did you do this 2022-10-08 17:29:06 +03:00
Greendayle 2e8ba0fa47 fix conflicts 2022-10-08 16:27:48 +02:00
Milly 4f33289d0f Fixed typo 2022-10-08 17:15:30 +03:00
AUTOMATIC 27032c47df restore old opt_split_attention/disable_opt_split_attention logic 2022-10-08 17:10:05 +03:00
AUTOMATIC dc1117233e simplify xformers options: --xformers to enable and that's it 2022-10-08 17:02:18 +03:00
AUTOMATIC 7ff1170a2e emergency fix for xformers (continue + shared) 2022-10-08 16:33:39 +03:00
AUTOMATIC1111 48feae37ff Merge pull request #1851 from C43H66N12O12S2/flash
xformers attention
2022-10-08 16:29:59 +03:00
C43H66N12O12S2 970de9ee68 Update sd_hijack.py 2022-10-08 16:29:43 +03:00
C43H66N12O12S2 69d0053583 update sd_hijack_opt to respect new env variables 2022-10-08 16:21:40 +03:00
C43H66N12O12S2 ddfa9a9786 add xformers_available shared variable 2022-10-08 16:20:41 +03:00
C43H66N12O12S2 26b459a379 default to split attention if cuda is available and xformers is not 2022-10-08 16:20:04 +03:00
MrCheeze 5f85a74b00 fix bug where when using prompt composition, hijack_comments generated before the final AND will be dropped 2022-10-08 15:48:04 +03:00
ddPn08 772db721a5 fix glob path in hypernetwork.py 2022-10-08 15:46:54 +03:00
AUTOMATIC 7001bffe02 fix AND broken for long prompts 2022-10-08 15:43:25 +03:00
AUTOMATIC 77f4237d1c fix bugs related to variable prompt lengths 2022-10-08 15:25:59 +03:00
AUTOMATIC 4999eb2ef9 do not let user choose his own prompt token count limit 2022-10-08 14:25:47 +03:00
Trung Ngo 00117a07ef check specifically for skipped 2022-10-08 13:40:39 +03:00
Trung Ngo 786d9f63aa Add button to skip the current iteration 2022-10-08 13:40:39 +03:00
AUTOMATIC 45cc0ce3c4 Merge remote-tracking branch 'origin/master' 2022-10-08 13:39:08 +03:00
AUTOMATIC 706d5944a0 let user choose his own prompt token count limit 2022-10-08 13:38:57 +03:00
leko 616b7218f7 fix: handles when state_dict does not exist 2022-10-08 12:38:50 +03:00
C43H66N12O12S2 91d66f5520 use new attnblock for xformers path 2022-10-08 11:56:01 +03:00
C43H66N12O12S2 76a616fa6b Update sd_hijack_optimizations.py 2022-10-08 11:55:38 +03:00
C43H66N12O12S2 5d54f35c58 add xformers attnblock and hypernetwork support 2022-10-08 11:55:02 +03:00
brkirch f2055cb1d4 Add hypernetwork support to split cross attention v1
* Add hypernetwork support to split_cross_attention_forward_v1
* Fix device check in esrgan_model.py to use devices.device_esrgan instead of shared.device
2022-10-08 09:39:17 +03:00
C43H66N12O12S2 b70eaeb200 delete broken and unnecessary aliases 2022-10-08 04:10:35 +03:00
C43H66N12O12S2 c9cc65b201 switch to the proper way of calling xformers 2022-10-08 04:09:18 +03:00
AUTOMATIC 12c4d5c6b5 hypernetwork training mk1 2022-10-07 23:22:22 +03:00
Greendayle 5f12e7efd9 linux test 2022-10-07 20:58:30 +02:00
Greendayle fa2ea648db even more powerful fix 2022-10-07 20:46:38 +02:00
Greendayle 54fa613c83 loading tf only in interrogation process 2022-10-07 20:37:43 +02:00
Greendayle 537da7a304 Merge branch 'master' into dev/deepdanbooru 2022-10-07 18:31:49 +02:00
AUTOMATIC f7c787eb7c make it possible to use hypernetworks without opt split attention 2022-10-07 16:39:51 +03:00
AUTOMATIC 97bc0b9504 do not stop working on failed hypernetwork load 2022-10-07 13:22:50 +03:00
AUTOMATIC d15b3ec001 support loading VAE 2022-10-07 10:40:22 +03:00
AUTOMATIC bad7cb29ce added support for hypernetworks (???) 2022-10-07 10:17:52 +03:00
C43H66N12O12S2 5e3ff846c5 Update sd_hijack.py 2022-10-07 06:38:01 +03:00
C43H66N12O12S2 5303df2428 Update sd_hijack.py 2022-10-07 06:01:14 +03:00
C43H66N12O12S2 35d6b23162 Update sd_hijack.py 2022-10-07 05:31:53 +03:00
C43H66N12O12S2 da4ab2707b Update shared.py 2022-10-07 05:23:06 +03:00
C43H66N12O12S2 2eb911b056 Update sd_hijack.py 2022-10-07 05:22:28 +03:00
C43H66N12O12S2 f174fb2922 add xformers attention 2022-10-07 05:21:49 +03:00
AUTOMATIC b34b25b4c9 karras samplers for img2img? 2022-10-06 23:27:01 +03:00
Milly 405c8171d1 Prefer using `Processed.sd_model_hash` attribute in the filename pattern 2022-10-06 20:41:23 +03:00
Milly 1cc36d170a Added job_timestamp to Processed
So the `[job_timestamp]` pattern can be used in the image-saving UI.
2022-10-06 20:41:23 +03:00
Milly 070b7d60cf Added styles to Processed
So the `[styles]` pattern can be used in the image-saving UI.
2022-10-06 20:41:23 +03:00
Milly cf7c784fcc Removed duplicate defined models_path
Use `modules.paths.models_path` instead of `modules.shared.model_path`.
2022-10-06 20:29:12 +03:00
AUTOMATIC dbc8a4d351 add generation parameters to images shown in web ui 2022-10-06 20:27:50 +03:00
Milly 0bb458f0ca Removed duplicate image saving codes
Use `modules.images.save_image()` instead.
2022-10-06 20:15:39 +03:00
Jairo Correa b66aa334a9 Merge branch 'master' into fix-vram 2022-10-06 13:41:37 -03:00
DepFA fec71e4de2 Default window title progress updates on 2022-10-06 17:58:52 +03:00
DepFA be71115b1a Update shared.py 2022-10-06 17:58:52 +03:00
AUTOMATIC 5993df24a1 integrate the new samplers PR 2022-10-06 14:12:52 +03:00
C43H66N12O12S2 3ddf80a9db add variant setting 2022-10-06 13:42:21 +03:00
C43H66N12O12S2 71901b3d3b add karras scheduling variants 2022-10-06 13:42:21 +03:00
AUTOMATIC 2d3ea42a2d workaround for a mysterious bug where prompt weights can't be matched 2022-10-06 13:21:12 +03:00
AUTOMATIC 5f24b7bcf4 option to let users select which samplers they want to hide 2022-10-06 12:08:59 +03:00
Raphael Stoeckli 4288e53fc2 removed unused import, fixed typo 2022-10-06 08:52:29 +03:00
Raphael Stoeckli 2499fb4e19 Add sanitizer for captions in Textual inversion 2022-10-06 08:52:29 +03:00
AUTOMATIC1111 0e92c36707 Merge pull request #1755 from AUTOMATIC1111/use-typing-list
use typing.list in prompt_parser.py for wider python version support
2022-10-06 08:50:06 +03:00
DepFA 55400c981b Set gradio-img2img-tool default to 'editor' 2022-10-06 08:46:32 +03:00