Commit Graph

922 Commits

Author SHA1 Message Date
AUTOMATIC d682444ecc add option to select hypernetwork modules when creating 2022-10-11 18:04:47 +03:00
AUTOMATIC1111 4f96ffd0b5 Merge pull request #2201 from alg-wiki/textual__inversion
Textual Inversion: Preprocess and Training will only pick-up image files instead
2022-10-11 17:25:36 +03:00
brkirch 861db783c7 Use apply_hypernetwork function 2022-10-11 17:24:00 +03:00
brkirch 574c8e554a Add InvokeAI and lstein to credits, add back CUDA support 2022-10-11 17:24:00 +03:00
brkirch 98fd5cde72 Add check for psutil 2022-10-11 17:24:00 +03:00
brkirch c0484f1b98 Add cross-attention optimization from InvokeAI
* Add cross-attention optimization from InvokeAI (~30% speed improvement on MPS)
* Add command line option for it
* Make it default when CUDA is unavailable
2022-10-11 17:24:00 +03:00
AUTOMATIC1111 f7e86aa420 Merge pull request #2227 from papuSpartan/master
Refresh list of models/ckpts upon hitting restart gradio in the setti…
2022-10-11 17:15:19 +03:00
DepFA 1eaad95533 Merge branch 'master' into embed-embeddings-in-images 2022-10-11 15:15:09 +01:00
AUTOMATIC 66b7d7584f become even stricter with pickles
no pickle shall pass
thank you again, RyotaK
2022-10-11 17:03:16 +03:00
papuSpartan d01a2d0156 move list refresh to webui.py and add stdout indicating it's doing so 2022-10-11 08:31:28 -05:00
不会画画的中医不是好程序员 a36dea9596 Merge branch 'master' into master 2022-10-11 21:03:41 +08:00
AUTOMATIC b0583be088 more renames 2022-10-11 15:54:34 +03:00
AUTOMATIC 873efeed49 rename hypernetwork dir to hypernetworks to prevent clash with an old filename that people who use zip instead of git clone will have 2022-10-11 15:51:30 +03:00
JamnedZ a004d1a855 Added new line at the end of ngrok.py 2022-10-11 15:38:53 +03:00
JamnedZ 5992564448 Cleaned ngrok integration 2022-10-11 15:38:53 +03:00
Ben 861297cefe add a space holder 2022-10-11 15:37:04 +03:00
Ben 87b77cad5f Layout fix 2022-10-11 15:37:04 +03:00
yfszzx 87d63bbab5 images history improvement 2022-10-11 20:37:03 +08:00
Martin Cairns eacc03b167 Fix typo in comments 2022-10-11 15:36:29 +03:00
Martin Cairns 1eae307607 Remove debug code for checking that first sigma value is same after code cleanup 2022-10-11 15:36:29 +03:00
Martin Cairns 92d7a13885 Handle different parameters for DPM fast & adaptive 2022-10-11 15:36:29 +03:00
yfszzx 594ab4ba53 images history improvement 2022-10-11 20:23:41 +08:00
yfszzx 7b1db45e1f images history improvement 2022-10-11 20:17:27 +08:00
AUTOMATIC 530103b586 fixes related to merge 2022-10-11 14:53:02 +03:00
alg-wiki 8bacbca0a1 Removed my local edits to checkpoint image generation 2022-10-11 17:35:09 +09:00
alg-wiki b2368a3bce Switched to exception handling 2022-10-11 17:32:46 +09:00
AUTOMATIC 5de806184f Merge branch 'master' into hypernetwork-training 2022-10-11 11:14:36 +03:00
AUTOMATIC 948533950c replace duplicate code with a function 2022-10-11 11:10:17 +03:00
hentailord85ez 5e2627a1a6 Comma backtrack padding (#2192)
Comma backtrack padding
2022-10-11 09:55:28 +03:00
Kenneth 8617396c6d Added slider for deepbooru score threshold in settings 2022-10-11 09:43:16 +03:00
Jairo Correa 8b7d3f1bef Make the ctrl+enter shortcut use the generate button on the current tab 2022-10-11 09:32:03 +03:00
DepFA 7aa8fcac1e use simple lcg in xor 2022-10-11 04:17:36 +01:00
papuSpartan 1add3cff84 Refresh list of models/ckpts upon hitting restart gradio in the settings pane 2022-10-10 19:57:43 -05:00
JC_Array bb932dbf9f added alpha sort and threshold variables to create process method in preprocessing 2022-10-10 18:37:52 -05:00
JC-Array 47f5e216da Merge branch 'deepdanbooru_pre_process' into master 2022-10-10 18:10:49 -05:00
JC_Array 76ef3d75f6 added deepbooru settings (threshold and sort by alpha or likelihood) 2022-10-10 18:01:49 -05:00
DepFA e0fbe6d27e colour depth conversion fix 2022-10-10 23:26:24 +01:00
DepFA 767202a4c3 add dependency 2022-10-10 23:20:52 +01:00
DepFA 315d5a8ed9 update data display style 2022-10-10 23:14:44 +01:00
JC_Array b980e7188c corrected tag return in get_deepbooru_tags 2022-10-10 16:52:54 -05:00
JC_Array a1a05ad2d1 import time missing, added to deepbooru, fixing error on get_deepbooru_tags 2022-10-10 16:47:58 -05:00
alg-wiki 907a88b2d0 Added .webp .bmp 2022-10-11 06:35:07 +09:00
Fampai 2536ecbb17 Refactored learning rate code 2022-10-10 17:10:29 -04:00
AUTOMATIC f98338faa8 add an option to not add watermark to created images 2022-10-10 23:15:48 +03:00
alg-wiki f0ab972f85 Merge branch 'master' into textual__inversion 2022-10-11 03:35:28 +08:00
alg-wiki bc3e183b73 Textual Inversion: Preprocess and Training will only pick-up image files 2022-10-11 04:30:13 +09:00
Justin Maier 1d64976dbc Simplify crop logic 2022-10-10 12:04:21 -06:00
AUTOMATIC 727e4d1086 no to different messages plus fix using != to compare to None 2022-10-10 20:46:55 +03:00
AUTOMATIC1111 b3d3b335cf Merge pull request #2131 from ssysm/upstream-master
Add VAE Path Arguments
2022-10-10 20:45:14 +03:00
AUTOMATIC 39919c40dd add eta noise seed delta option 2022-10-10 20:32:44 +03:00
ssysm af62ad4d25 change vae loading method 2022-10-10 13:25:28 -04:00
C43H66N12O12S2 ed769977f0 add swinir v2 support 2022-10-10 19:54:57 +03:00
C43H66N12O12S2 ece27fe989 Add files via upload 2022-10-10 19:54:57 +03:00
C43H66N12O12S2 3e7a981194 remove functorch 2022-10-10 19:54:07 +03:00
C43H66N12O12S2 623251ce2b allow pascal onwards 2022-10-10 19:54:07 +03:00
Vladimir Repin 9d33baba58 Always show previous mask and fix extras_send dest 2022-10-10 19:39:24 +03:00
hentailord85ez d5c14365fd Add back in output hidden states parameter 2022-10-10 18:54:48 +03:00
hentailord85ez 460bbae587 Pad beginning of textual inversion embedding 2022-10-10 18:54:48 +03:00
hentailord85ez b340439586 Unlimited Token Works
Unlimited tokens actually work now. Works with textual inversion too. Replaces the previous not-so-much-working implementation.
2022-10-10 18:54:48 +03:00
RW21 f347ddfd80 Remove max_batch_count from ui.py 2022-10-10 18:53:40 +03:00
DepFA df6d0d9286 convert back to rgb as some hosts add alpha 2022-10-10 15:43:09 +01:00
DepFA 707a431100 add pixel data footer 2022-10-10 15:34:49 +01:00
DepFA ce2d7f7eac Merge branch 'master' into embed-embeddings-in-images 2022-10-10 15:13:48 +01:00
alg-wiki 7a20f914ed Custom Width and Height 2022-10-10 17:05:12 +03:00
alg-wiki 6ad3a53e36 Fixed progress bar output for epoch 2022-10-10 17:05:12 +03:00
alg-wiki ea00c1624b Textual Inversion: Added custom training image size and number of repeats per input image in a single epoch 2022-10-10 17:05:12 +03:00
AUTOMATIC 8f1efdc130 --no-half-vae pt2 2022-10-10 17:03:45 +03:00
alg-wiki 04c745ea4f Custom Width and Height 2022-10-10 22:35:35 +09:00
AUTOMATIC 7349088d32 --no-half-vae 2022-10-10 16:16:29 +03:00
不会画画的中医不是好程序员 1e18a5ffcc Merge branch 'AUTOMATIC1111:master' into master 2022-10-10 20:21:25 +08:00
yfszzx 23f2989799 images history over 2022-10-10 18:33:49 +08:00
JC_Array 2f94331df2 removed change in last commit, simplified to adding the visible argument to process_caption_deepbooru and it set to False if deepdanbooru argument is not set 2022-10-10 03:34:00 -05:00
alg-wiki 4ee7519fc2 Fixed progress bar output for epoch 2022-10-10 17:31:33 +09:00
JC_Array 8ec069e64d removed duplicate run_preprocess.click by creating run_preprocess_inputs list and appending deepbooru variable to input list if in scope 2022-10-10 03:23:24 -05:00
alg-wiki 3110f895b2 Textual Inversion: Added custom training image size and number of repeats per input image in a single epoch 2022-10-10 17:07:46 +09:00
yfszzx 8a7c07a214 show image history 2022-10-10 15:39:39 +08:00
brkirch 8acc901ba3 Newer versions of PyTorch use TypedStorage instead
PyTorch 1.13 and later will rename _TypedStorage to TypedStorage, so check for TypedStorage and use _TypedStorage if it is not available. Currently this is needed so that nightly builds of PyTorch work correctly.
2022-10-10 08:04:52 +03:00
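The TypedStorage rename described above is usually handled with a simple attribute fallback; a minimal sketch of that pattern (illustrative only, not the repository's exact code):

```python
import torch

try:
    # PyTorch 1.13+ exposes the class publicly as TypedStorage
    TypedStorage = torch.storage.TypedStorage
except AttributeError:
    # older releases only provide the private _TypedStorage name
    TypedStorage = torch.storage._TypedStorage

def is_typed_storage(obj):
    """Check storage objects uniformly across PyTorch versions."""
    return isinstance(obj, TypedStorage)
```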
JC_Array 1f92336be7 refactored the deepbooru module to improve speed on running multiple interrogations in a row. Added the option to generate deepbooru tags for textual inversion preprocessing. 2022-10-09 23:58:18 -05:00
ssysm 6fdad291bd Merge branch 'master' of https://github.com/AUTOMATIC1111/stable-diffusion-webui into upstream-master 2022-10-09 23:20:39 -04:00
ssysm cc92dc1f8d add vae path args 2022-10-09 23:17:29 -04:00
Justin Maier 6435691bb1 Add "Scale to" option to Extras 2022-10-09 19:26:52 -06:00
DepFA 4117afff11 Merge branch 'master' into embed-embeddings-in-images 2022-10-10 00:38:54 +01:00
DepFA e2c2925eb4 remove braces from steps 2022-10-10 00:12:53 +01:00
DepFA d6a599ef9b change caption method 2022-10-10 00:07:52 +01:00
DepFA 0ac3a07eec add caption image with overlay 2022-10-10 00:05:36 +01:00
DepFA 01fd9cf0d2 change source of step count 2022-10-09 22:17:02 +01:00
DepFA 96f1e6be59 source checkpoint hash from current checkpoint 2022-10-09 22:14:50 +01:00
DepFA 6684610510 correct case on embeddingFromB64 2022-10-09 22:06:42 +01:00
DepFA d0184b8f76 change json tensor key name 2022-10-09 22:06:12 +01:00
DepFA 5d12ec82d3 add encoder and decoder classes 2022-10-09 22:05:09 +01:00
DepFA 969bd8256e add alternate checkpoint hash source 2022-10-09 22:02:28 +01:00
DepFA 03694e1f99 add embedding load and save from b64 json 2022-10-09 21:58:14 +01:00
AUTOMATIC a65476718f add DoubleStorage to list of allowed classes for pickle 2022-10-09 23:38:49 +03:00
DepFA fa0c5eb81b Add pretty image captioning functions 2022-10-09 20:41:22 +01:00
AUTOMATIC 8d340cfb88 do not add clip skip to parameters if it's 1 or 0 2022-10-09 22:31:35 +03:00
Fampai 1824e9ee3a Removed unnecessary tmp variable 2022-10-09 22:31:23 +03:00
Fampai ad3ae44108 Updated code for legibility 2022-10-09 22:31:23 +03:00
Fampai ec2bd9be75 Fix issues with CLIP ignore option name change 2022-10-09 22:31:23 +03:00
Fampai a14f7bf113 Corrected CLIP Layer Ignore description and updated its range to the max possible 2022-10-09 22:31:23 +03:00
Fampai e59c66c008 Optimized code for Ignoring last CLIP layers 2022-10-09 22:31:23 +03:00
AUTOMATIC 6c383d2e82 show model selection setting on top of page 2022-10-09 22:24:07 +03:00
Artem Zagidulin 9ecea0a8d6 fix missing png info when Extras Batch Process 2022-10-09 18:35:25 +03:00
AUTOMATIC 875ddfeecf added guard for torch.load to prevent loading pickles with unknown content 2022-10-09 17:58:43 +03:00
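Guards like the one above are commonly built on a restricted unpickler that only admits an allow-list of classes instead of executing arbitrary globals; a hedged sketch of the general technique (the allow-list entries and names here are illustrative, not the project's actual configuration):

```python
import pickle

# illustrative allow-list; a real guard enumerates every class a checkpoint may reference
ALLOWED_GLOBALS = {
    ("collections", "OrderedDict"),
    ("torch._utils", "_rebuild_tensor_v2"),
}

class RestrictedUnpickler(pickle.Unpickler):
    def find_class(self, module, name):
        if (module, name) in ALLOWED_GLOBALS:
            return super().find_class(module, name)
        # refuse anything outside the allow-list rather than importing it
        raise pickle.UnpicklingError(f"global '{module}.{name}' is forbidden")

def restricted_load(file_obj):
    """Load a pickle stream while rejecting unknown classes."""
    return RestrictedUnpickler(file_obj).load()
```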
AUTOMATIC 9d1138e294 fix typo in filename for ESRGAN arch 2022-10-09 15:08:27 +03:00
AUTOMATIC e6e8cabe0c change up #2056 to make it work how i want it to plus make xy plot write correct values to images 2022-10-09 14:57:48 +03:00
William Moorehouse 594cbfd8fb Sanitize infotext output (for now) 2022-10-09 14:49:15 +03:00
William Moorehouse 006791c13d Fix grabbing the model name for infotext 2022-10-09 14:49:15 +03:00
William Moorehouse d6d10a37bf Added extended model details to infotext 2022-10-09 14:49:15 +03:00
AUTOMATIC 542a3d3a4a fix broken hypernetworks in XY plot 2022-10-09 14:33:22 +03:00
AUTOMATIC 77a719648d fix logic error in #1832 2022-10-09 13:48:04 +03:00
AUTOMATIC f4578b343d fix model switching not working properly if there is a different yaml config 2022-10-09 13:23:30 +03:00
AUTOMATIC bd833409ac additional changes for saving pnginfo for #1803 2022-10-09 13:10:15 +03:00
Milly 0609ce06c0 Removed duplicate definition model_path 2022-10-09 12:46:07 +03:00
AUTOMATIC 6f6798ddab prevent a possible code execution error (thanks, RyotaK) 2022-10-09 12:33:37 +03:00
AUTOMATIC 0241d811d2 Revert "Fix for Prompts_from_file showing extra textbox."
This reverts commit e2930f9821.
2022-10-09 12:04:44 +03:00
AUTOMATIC ab4fe4f44c hide filenames for save button by default 2022-10-09 11:59:41 +03:00
Tony Beeman cbf6dad02d Handle case where on_show returns the wrong number of arguments 2022-10-09 11:16:38 +03:00
Tony Beeman 86cb16886f Pull Request Code Review Fixes 2022-10-09 11:16:38 +03:00
Tony Beeman e2930f9821 Fix for Prompts_from_file showing extra textbox. 2022-10-09 11:16:38 +03:00
Nicolas Noullet 1ffeb42d38 Fix typo 2022-10-09 11:10:13 +03:00
frostydad ef93acdc73 remove line break 2022-10-09 11:09:17 +03:00
frostydad 03e570886f Fix incorrect sampler name in output 2022-10-09 11:09:17 +03:00
Fampai 122d42687b Fix VRAM Issue by only loading in hypernetwork when selected in settings 2022-10-09 11:08:11 +03:00
AUTOMATIC1111 e00b4df7c6 Merge pull request #1752 from Greendayle/dev/deepdanbooru
Added DeepDanbooru interrogator
2022-10-09 10:52:21 +03:00
aoirusann 14192c5b20 Support `Download` for txt files. 2022-10-09 10:49:11 +03:00
aoirusann 5ab7e88d9b Add `Download` & `Download as zip` 2022-10-09 10:49:11 +03:00
AUTOMATIC 4e569fd888 fixed incorrect message about loading config; thanks anon! 2022-10-09 10:31:47 +03:00
AUTOMATIC c77c89cc83 make main model loading and model merger use the same code 2022-10-09 10:23:31 +03:00
DepFA cd8673bd9b add embed embedding to ui 2022-10-09 05:40:57 +01:00
DepFA 5841990b0d Update textual_inversion.py 2022-10-09 05:38:38 +01:00
AUTOMATIC 050a6a798c support loading .yaml config with same name as model
support EMA weights in processing (????)
2022-10-08 23:26:48 +03:00
Aidan Holland 432782163a chore: Fix typos 2022-10-08 22:42:30 +03:00
Edouard Leurent 610a7f4e14 Break after finding the local directory of stable diffusion
Otherwise, we may override it with one of the next two paths (. or ..) if it is present there, and then the local paths of other modules (taming transformers, codeformers, etc.) won't be found in sd_path/../.

Fix https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/1085
2022-10-08 22:35:04 +03:00
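The fix described in that message is the usual search-with-break pattern: stop probing candidate directories as soon as the first match is found, so the later fallbacks ('.' and '..') cannot overwrite it. A rough sketch under those assumptions (paths and the marker file are illustrative, not the repository's actual values):

```python
import os

# candidate locations, most specific first; '.' and '..' must not override an earlier hit
possible_sd_paths = ["repositories/stable-diffusion", ".", ".."]

sd_path = None
for candidate in possible_sd_paths:
    if os.path.exists(os.path.join(candidate, "ldm")):
        sd_path = os.path.abspath(candidate)
        break  # keep the first match; without this break, '.' or '..' could win

if sd_path is None:
    raise FileNotFoundError("could not locate the stable diffusion repository")
```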
AUTOMATIC 3b2141c5fb add 'Ignore last layers of CLIP model' option as a parameter to the infotext 2022-10-08 22:21:15 +03:00
AUTOMATIC e6e42f98df make --force-enable-xformers work without needing --xformers 2022-10-08 22:12:23 +03:00
Fampai 1371d7608b Added ability to ignore last n layers in FrozenCLIPEmbedder 2022-10-08 22:10:37 +03:00
DepFA b458fa48fe Update ui.py 2022-10-08 20:38:35 +03:00
DepFA 15c4278f1a TI preprocess wording
I had to check the code to work out what splitting was 🤷🏿
2022-10-08 20:38:35 +03:00
Greendayle 0ec80f0125 Merge branch 'master' into dev/deepdanbooru 2022-10-08 18:28:22 +02:00
AUTOMATIC 3061cdb7b6 add --force-enable-xformers option and also add messages to console regarding cross attention optimizations 2022-10-08 19:22:15 +03:00
AUTOMATIC f9c5da1592 add fallback for xformers_attnblock_forward 2022-10-08 19:05:19 +03:00
Greendayle 01f8cb4447 made deepdanbooru optional, added to readme, automatic download of deepbooru model 2022-10-08 18:02:56 +02:00
Artem Zagidulin a5550f0213 alternate prompt 2022-10-08 18:12:19 +03:00
C43H66N12O12S2 cc0258aea7 check for ampere without destroying the optimizations. again. 2022-10-08 17:54:16 +03:00
C43H66N12O12S2 017b6b8744 check for ampere 2022-10-08 17:54:16 +03:00
Greendayle 5329d0aba0 Merge branch 'master' into dev/deepdanbooru 2022-10-08 16:30:28 +02:00
AUTOMATIC cfc33f99d4 why did you do this 2022-10-08 17:29:06 +03:00
Greendayle 2e8ba0fa47 fix conflicts 2022-10-08 16:27:48 +02:00
Milly 4f33289d0f Fixed typo 2022-10-08 17:15:30 +03:00
AUTOMATIC 27032c47df restore old opt_split_attention/disable_opt_split_attention logic 2022-10-08 17:10:05 +03:00
AUTOMATIC dc1117233e simplify xformers options: --xformers to enable and that's it 2022-10-08 17:02:18 +03:00
AUTOMATIC 7ff1170a2e emergency fix for xformers (continue + shared) 2022-10-08 16:33:39 +03:00
AUTOMATIC1111 48feae37ff Merge pull request #1851 from C43H66N12O12S2/flash
xformers attention
2022-10-08 16:29:59 +03:00
C43H66N12O12S2 970de9ee68 Update sd_hijack.py 2022-10-08 16:29:43 +03:00
C43H66N12O12S2 69d0053583 update sd_hijack_opt to respect new env variables 2022-10-08 16:21:40 +03:00
C43H66N12O12S2 ddfa9a9786 add xformers_available shared variable 2022-10-08 16:20:41 +03:00
C43H66N12O12S2 26b459a379 default to split attention if cuda is available and xformers is not 2022-10-08 16:20:04 +03:00
MrCheeze 5f85a74b00 fix bug where when using prompt composition, hijack_comments generated before the final AND will be dropped 2022-10-08 15:48:04 +03:00
ddPn08 772db721a5 fix glob path in hypernetwork.py 2022-10-08 15:46:54 +03:00
AUTOMATIC 7001bffe02 fix AND broken for long prompts 2022-10-08 15:43:25 +03:00
AUTOMATIC 77f4237d1c fix bugs related to variable prompt lengths 2022-10-08 15:25:59 +03:00
AUTOMATIC 4999eb2ef9 do not let user choose his own prompt token count limit 2022-10-08 14:25:47 +03:00
Trung Ngo 00117a07ef check specifically for skipped 2022-10-08 13:40:39 +03:00
Trung Ngo 786d9f63aa Add button to skip the current iteration 2022-10-08 13:40:39 +03:00
AUTOMATIC 45cc0ce3c4 Merge remote-tracking branch 'origin/master' 2022-10-08 13:39:08 +03:00
AUTOMATIC 706d5944a0 let user choose his own prompt token count limit 2022-10-08 13:38:57 +03:00
leko 616b7218f7 fix: handles when state_dict does not exist 2022-10-08 12:38:50 +03:00
C43H66N12O12S2 91d66f5520 use new attnblock for xformers path 2022-10-08 11:56:01 +03:00
C43H66N12O12S2 76a616fa6b Update sd_hijack_optimizations.py 2022-10-08 11:55:38 +03:00
C43H66N12O12S2 5d54f35c58 add xformers attnblock and hypernetwork support 2022-10-08 11:55:02 +03:00
brkirch f2055cb1d4 Add hypernetwork support to split cross attention v1
* Add hypernetwork support to split_cross_attention_forward_v1
* Fix device check in esrgan_model.py to use devices.device_esrgan instead of shared.device
2022-10-08 09:39:17 +03:00
C43H66N12O12S2 b70eaeb200 delete broken and unnecessary aliases 2022-10-08 04:10:35 +03:00
C43H66N12O12S2 c9cc65b201 switch to the proper way of calling xformers 2022-10-08 04:09:18 +03:00
AUTOMATIC 12c4d5c6b5 hypernetwork training mk1 2022-10-07 23:22:22 +03:00
Greendayle 5f12e7efd9 linux test 2022-10-07 20:58:30 +02:00
Greendayle fa2ea648db even more powerful fix 2022-10-07 20:46:38 +02:00
Greendayle 54fa613c83 loading tf only in interrogation process 2022-10-07 20:37:43 +02:00
Greendayle 537da7a304 Merge branch 'master' into dev/deepdanbooru 2022-10-07 18:31:49 +02:00
AUTOMATIC f7c787eb7c make it possible to use hypernetworks without opt split attention 2022-10-07 16:39:51 +03:00
AUTOMATIC 97bc0b9504 do not stop working on failed hypernetwork load 2022-10-07 13:22:50 +03:00
AUTOMATIC d15b3ec001 support loading VAE 2022-10-07 10:40:22 +03:00
AUTOMATIC bad7cb29ce added support for hypernetworks (???) 2022-10-07 10:17:52 +03:00
C43H66N12O12S2 5e3ff846c5 Update sd_hijack.py 2022-10-07 06:38:01 +03:00
C43H66N12O12S2 5303df2428 Update sd_hijack.py 2022-10-07 06:01:14 +03:00
C43H66N12O12S2 35d6b23162 Update sd_hijack.py 2022-10-07 05:31:53 +03:00
C43H66N12O12S2 da4ab2707b Update shared.py 2022-10-07 05:23:06 +03:00
C43H66N12O12S2 2eb911b056 Update sd_hijack.py 2022-10-07 05:22:28 +03:00
C43H66N12O12S2 f174fb2922 add xformers attention 2022-10-07 05:21:49 +03:00
AUTOMATIC b34b25b4c9 karras samplers for img2img? 2022-10-06 23:27:01 +03:00
Milly 405c8171d1 Prefer using `Processed.sd_model_hash` attribute when filename pattern 2022-10-06 20:41:23 +03:00
Milly 1cc36d170a Added job_timestamp to Processed
So the `[job_timestamp]` pattern can be used in the image saving UI.
2022-10-06 20:41:23 +03:00
Milly 070b7d60cf Added styles to Processed
So the `[styles]` pattern can be used in the image saving UI.
2022-10-06 20:41:23 +03:00
Milly cf7c784fcc Removed duplicate defined models_path
Use `modules.paths.models_path` instead of `modules.shared.model_path`.
2022-10-06 20:29:12 +03:00
AUTOMATIC dbc8a4d351 add generation parameters to images shown in web ui 2022-10-06 20:27:50 +03:00
Milly 0bb458f0ca Removed duplicate image saving code
Use `modules.images.save_image()` instead.
2022-10-06 20:15:39 +03:00
Jairo Correa b66aa334a9 Merge branch 'master' into fix-vram 2022-10-06 13:41:37 -03:00
DepFA fec71e4de2 Default window title progress updates on 2022-10-06 17:58:52 +03:00
DepFA be71115b1a Update shared.py 2022-10-06 17:58:52 +03:00
AUTOMATIC 5993df24a1 integrate the new samplers PR 2022-10-06 14:12:52 +03:00
C43H66N12O12S2 3ddf80a9db add variant setting 2022-10-06 13:42:21 +03:00