Commit Graph

64 Commits

Author SHA1 Message Date
AUTOMATIC 873efeed49 rename hypernetwork dir to hypernetworks to prevent a clash with an old filename that people who use zip instead of git clone will have 2022-10-11 15:51:30 +03:00
AUTOMATIC 5de806184f Merge branch 'master' into hypernetwork-training 2022-10-11 11:14:36 +03:00
hentailord85ez 5e2627a1a6 Comma backtrack padding (#2192) 2022-10-11 09:55:28 +03:00
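The comma backtrack padding change addresses prompts whose 75-token chunk boundary would otherwise fall mid-phrase: if a comma occurred shortly before the boundary, the chunk is cut at the comma and the remainder is carried into the next chunk. A minimal sketch of that idea; `COMMA_TOKEN`, `PAD_TOKEN`, `CHUNK_SIZE` and `BACKTRACK` are illustrative placeholders, not the webui's actual identifiers:

```python
# Sketch of comma-backtrack chunking: split a token sequence into fixed-size
# chunks, but when a chunk boundary falls shortly after a comma, cut at the
# comma and carry the tail into the next chunk so phrases stay together.
CHUNK_SIZE = 75     # prompt tokens per CLIP chunk (77 minus BOS/EOS)
BACKTRACK = 20      # how close a comma must be to the boundary to trigger a cut
COMMA_TOKEN = 267   # assumed id of ',' in the tokenizer vocabulary
PAD_TOKEN = 49407   # assumed id used to pad a short chunk

def chunk_with_comma_backtrack(tokens: list[int]) -> list[list[int]]:
    chunks, current, last_comma = [], [], -1
    for token in tokens:
        current.append(token)
        if token == COMMA_TOKEN:
            last_comma = len(current) - 1
        if len(current) == CHUNK_SIZE:
            if last_comma != -1 and len(current) - 1 - last_comma <= BACKTRACK:
                # Cut at the comma; the tail starts the next chunk.
                current, carry = current[:last_comma + 1], current[last_comma + 1:]
            else:
                carry = []
            chunks.append(current + [PAD_TOKEN] * (CHUNK_SIZE - len(current)))
            # (commas inside the carried tail are ignored here for simplicity)
            current, last_comma = carry, -1
    if current:
        chunks.append(current + [PAD_TOKEN] * (CHUNK_SIZE - len(current)))
    return chunks
```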
C43H66N12O12S2 623251ce2b allow Pascal onwards 2022-10-10 19:54:07 +03:00
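The "check for ampere" and "allow Pascal onwards" commits gate the xformers path on GPU architecture. A sketch of how such a gate can look using PyTorch's device-capability API; the exact predicate in sd_hijack may differ:

```python
import torch

def xformers_supported_gpu() -> bool:
    """Heuristic gate: require CUDA and a Pascal-or-newer GPU (compute capability >= 6.0)."""
    if not torch.cuda.is_available():
        return False
    return torch.cuda.get_device_capability() >= (6, 0)
```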
hentailord85ez d5c14365fd Add back in output hidden states parameter 2022-10-10 18:54:48 +03:00
hentailord85ez 460bbae587 Pad beginning of textual inversion embedding 2022-10-10 18:54:48 +03:00
hentailord85ez b340439586 Unlimited Token Works (unlimited tokens now actually work, including with textual inversion; replaces the previous, only partially working implementation) 2022-10-10 18:54:48 +03:00
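"Unlimited Token Works" lifts CLIP's 77-token limit by encoding the prompt in 75-token chunks, each wrapped in BOS/EOS, and concatenating the results along the sequence axis. A minimal sketch of the technique, assuming a Hugging Face CLIP tokenizer and text model; this illustrates the idea, not the webui's actual hijack code:

```python
import torch
from transformers import CLIPTokenizer, CLIPTextModel

tokenizer = CLIPTokenizer.from_pretrained("openai/clip-vit-large-patch14")
text_model = CLIPTextModel.from_pretrained("openai/clip-vit-large-patch14")

def encode_long_prompt(prompt: str) -> torch.Tensor:
    # Tokenize without truncation, then split into 75-token chunks; each chunk
    # is wrapped in BOS/EOS so the model sees a well-formed 77-token sequence.
    ids = tokenizer(prompt, truncation=False, add_special_tokens=False).input_ids
    bos, eos = tokenizer.bos_token_id, tokenizer.eos_token_id
    chunks = [ids[i:i + 75] for i in range(0, len(ids), 75)] or [[]]
    outputs = []
    for chunk in chunks:
        chunk = [bos] + chunk + [eos] * (76 - len(chunk))  # pad short chunks with EOS
        with torch.no_grad():
            outputs.append(text_model(torch.tensor([chunk])).last_hidden_state)
    # Concatenated conditioning: shape (1, 77 * n_chunks, hidden_dim)
    return torch.cat(outputs, dim=1)
```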
Fampai 1824e9ee3a Removed unnecessary tmp variable 2022-10-09 22:31:23 +03:00
Fampai ad3ae44108 Updated code for legibility 2022-10-09 22:31:23 +03:00
Fampai e59c66c008 Optimized code for Ignoring last CLIP layers 2022-10-09 22:31:23 +03:00
Fampai 1371d7608b Added ability to ignore last n layers in FrozenCLIPEmbedder 2022-10-08 22:10:37 +03:00
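Fampai's "ignore last n layers" commits implement what is now commonly called CLIP skip: take hidden states from an earlier transformer layer instead of the final one, then re-apply the final layer norm. A sketch assuming a Hugging Face CLIPTextModel; `stop_at_last_layers` is an illustrative parameter name (the "output hidden states parameter" commit above corroborates the mechanism):

```python
import torch
from transformers import CLIPTokenizer, CLIPTextModel

tokenizer = CLIPTokenizer.from_pretrained("openai/clip-vit-large-patch14")
model = CLIPTextModel.from_pretrained("openai/clip-vit-large-patch14")

def encode_with_clip_skip(prompt: str, stop_at_last_layers: int = 1) -> torch.Tensor:
    batch = tokenizer(prompt, padding="max_length", max_length=77,
                      truncation=True, return_tensors="pt")
    with torch.no_grad():
        out = model.text_model(input_ids=batch.input_ids, output_hidden_states=True)
    # hidden_states[-1] is the last layer's output; -2 skips one layer, and so on.
    h = out.hidden_states[-stop_at_last_layers]
    # Re-apply the final layer norm so earlier-layer outputs stay in the
    # distribution the diffusion model expects.
    return model.text_model.final_layer_norm(h)
```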
AUTOMATIC 3061cdb7b6 add --force-enable-xformers option and also add messages to console regarding cross attention optimizations 2022-10-08 19:22:15 +03:00
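After the "--xformers to enable and that's it" simplification, picking a cross-attention implementation reduces to a short cascade of flag and hardware checks, with a console message for each choice. A sketch of such a selection; the flag names appear in the commits above, but the wiring here is illustrative:

```python
import torch

def xformers_available() -> bool:
    try:
        import xformers.ops  # noqa: F401
        return True
    except ImportError:
        return False

def choose_attention_optimization(cmd_opts) -> str:
    """Pick a cross-attention implementation from command-line flags and hardware."""
    if cmd_opts.force_enable_xformers or (cmd_opts.xformers and xformers_available()):
        print("Applying xformers cross attention optimization.")
        return "xformers"
    if cmd_opts.opt_split_attention or (
            torch.cuda.is_available() and not cmd_opts.disable_opt_split_attention):
        print("Applying cross attention optimization.")
        return "split_attention"
    return "none"
```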
C43H66N12O12S2 cc0258aea7 check for Ampere without destroying the optimizations, again 2022-10-08 17:54:16 +03:00
C43H66N12O12S2 017b6b8744 check for Ampere 2022-10-08 17:54:16 +03:00
AUTOMATIC cfc33f99d4 why did you do this 2022-10-08 17:29:06 +03:00
AUTOMATIC 27032c47df restore old opt_split_attention/disable_opt_split_attention logic 2022-10-08 17:10:05 +03:00
AUTOMATIC dc1117233e simplify xformers options: --xformers to enable, and that's it 2022-10-08 17:02:18 +03:00
AUTOMATIC1111 48feae37ff Merge pull request #1851 from C43H66N12O12S2/flash (xformers attention) 2022-10-08 16:29:59 +03:00
C43H66N12O12S2 970de9ee68 Update sd_hijack.py 2022-10-08 16:29:43 +03:00
C43H66N12O12S2 26b459a379 default to split attention if cuda is available and xformers is not 2022-10-08 16:20:04 +03:00
MrCheeze 5f85a74b00 fix bug where, when using prompt composition, hijack_comments generated before the final AND would be dropped 2022-10-08 15:48:04 +03:00
AUTOMATIC 77f4237d1c fix bugs related to variable prompt lengths 2022-10-08 15:25:59 +03:00
AUTOMATIC 4999eb2ef9 do not let user choose his own prompt token count limit 2022-10-08 14:25:47 +03:00
AUTOMATIC 706d5944a0 let user choose his own prompt token count limit 2022-10-08 13:38:57 +03:00
C43H66N12O12S2 91d66f5520 use new attnblock for xformers path 2022-10-08 11:56:01 +03:00
C43H66N12O12S2 b70eaeb200 delete broken and unnecessary aliases 2022-10-08 04:10:35 +03:00
AUTOMATIC 12c4d5c6b5 hypernetwork training mk1 2022-10-07 23:22:22 +03:00
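"hypernetwork training mk1" introduces hypernetworks: small networks applied to the cross-attention context before the key/value projections, trained while the base model stays frozen. A minimal sketch of the module shape, assuming the common two-linear-layer residual form; layer sizes, activation, and wiring are illustrative:

```python
import torch
import torch.nn as nn

class HypernetworkModule(nn.Module):
    """Small residual MLP applied to the context before the k/v projections."""
    def __init__(self, dim: int, hidden_mult: int = 2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim, dim * hidden_mult),
            nn.ReLU(),  # assumption: the activation choice varied across versions
            nn.Linear(dim * hidden_mult, dim),
        )

    def forward(self, context: torch.Tensor) -> torch.Tensor:
        return context + self.net(context)  # residual, so an untrained net is a no-op

# One module pair (one for k, one for v) per context dimension, e.g. 768:
hypernetwork = {768: (HypernetworkModule(768), HypernetworkModule(768))}

def apply_hypernetwork(context: torch.Tensor):
    """Transform the context separately for keys and values if a pair exists."""
    pair = hypernetwork.get(context.shape[-1])
    if pair is None:
        return context, context
    return pair[0](context), pair[1](context)
```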
AUTOMATIC f7c787eb7c make it possible to use hypernetworks without opt split attention 2022-10-07 16:39:51 +03:00
C43H66N12O12S2 5e3ff846c5 Update sd_hijack.py 2022-10-07 06:38:01 +03:00
C43H66N12O12S2 5303df2428 Update sd_hijack.py 2022-10-07 06:01:14 +03:00
C43H66N12O12S2 35d6b23162 Update sd_hijack.py 2022-10-07 05:31:53 +03:00
C43H66N12O12S2 2eb911b056 Update sd_hijack.py 2022-10-07 05:22:28 +03:00
Jairo Correa ad0cc85d1f Merge branch 'master' into stable 2022-10-02 18:31:19 -03:00
AUTOMATIC 88ec0cf557 fix for incorrect embedding token length calculation (will break seeds that use embeddings, you're welcome!); add option to input initialization text for embeddings 2022-10-02 19:40:51 +03:00
AUTOMATIC 820f1dc96b initial support for training textual inversion 2022-10-02 15:03:39 +03:00
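"initial support for training textual inversion" follows the standard textual-inversion recipe: freeze the entire model and optimize only a new embedding vector against the usual diffusion noise-prediction loss. A skeletal sketch of that loop; every `model.*` method below is a hypothetical stand-in for the corresponding step in the real trainer:

```python
import torch
import torch.nn.functional as F

# Assumed setup: `embedding` is the only trainable tensor; the diffusion
# model, text encoder and VAE all stay frozen.
embedding = torch.randn(1, 768, requires_grad=True)
optimizer = torch.optim.AdamW([embedding], lr=5e-3)

def training_step(model, batch):
    # Encode the prompt with the learned vector spliced in at the placeholder
    # token's position, then apply the standard noise-prediction objective.
    cond = model.encode_text_with_embedding(batch["prompt"], embedding)
    latents = model.encode_image(batch["image"])
    noise = torch.randn_like(latents)
    t = torch.randint(0, 1000, (latents.shape[0],))
    noisy = model.add_noise(latents, noise, t)
    loss = F.mse_loss(model.predict_noise(noisy, t, cond), noise)
    loss.backward()
    optimizer.step()        # updates only the embedding vector
    optimizer.zero_grad()
    return loss
```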
Jairo Correa ad1fbbae93 Merge branch 'master' into fix-vram 2022-09-30 18:58:51 -03:00
AUTOMATIC 98cc6c6e74 add embeddings dir 2022-09-30 14:16:26 +03:00
AUTOMATIC c715ef04d1 fix for incorrect model weight loading for #814 2022-09-29 15:40:28 +03:00
AUTOMATIC c1c27dad3b new implementation for attention/emphasis 2022-09-29 11:31:48 +03:00
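The "new implementation for attention/emphasis" is the `(text:1.2)`-style weighting: per-token multipliers are applied to the text-encoder output, and the tensor's original mean is then restored so overall magnitude stays stable. A sketch of that normalization step; the mean-restoration trick matches the webui's approach, while the surrounding names are illustrative:

```python
import torch

def apply_emphasis(z: torch.Tensor, multipliers: torch.Tensor) -> torch.Tensor:
    """z: (batch, tokens, dim) text-encoder output; multipliers: (batch, tokens)."""
    original_mean = z.mean()
    z = z * multipliers.unsqueeze(-1)   # scale emphasized/de-emphasized tokens
    z = z * (original_mean / z.mean())  # restore the original mean magnitude
    return z
```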
Jairo Correa c2d5b29040 Move silu to sd_hijack 2022-09-29 01:16:25 -03:00
Liam e5707b66d6 switched the token counter to use hidden buttons instead of an API call 2022-09-27 19:29:53 -04:00
Liam 5034f7d759 added token counter next to txt2img and img2img prompts 2022-09-27 15:56:18 -04:00
AUTOMATIC 073f6eac22 potential fix for embeddings not loading on AMD cards 2022-09-25 15:04:39 +03:00
guaneec 615b2fc9ce Fix token max length 2022-09-25 09:30:02 +03:00
AUTOMATIC 254da5d127 --opt-split-attention now on by default for torch.cuda, off for others (CPU and MPS, because the option reportedly does not work there) 2022-09-21 09:49:02 +03:00
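The split-attention optimization these commits toggle computes attention over slices of the query sequence so the full attention matrix never materializes at once, trading some speed for lower peak VRAM. A compact sketch of the slicing; the real code sizes slices adaptively, while here the slice size is a fixed illustrative constant:

```python
import torch

def split_cross_attention(q, k, v, slice_size: int = 4096):
    """q: (b, nq, d), k/v: (b, nk, d). Process queries in slices to bound peak memory."""
    scale = q.shape[-1] ** -0.5
    out = torch.empty_like(q)
    for i in range(0, q.shape[1], slice_size):
        s = slice(i, i + slice_size)
        attn = torch.softmax(q[:, s] @ k.transpose(1, 2) * scale, dim=-1)
        out[:, s] = attn @ v
    return out
```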
AUTOMATIC 1578859305 fix for too large embeddings causing an error 2022-09-21 00:20:11 +03:00
AUTOMATIC 90401d96a6 fix an off-by-one error with an embedding at the start of the sentence 2022-09-20 12:12:31 +03:00
AUTOMATIC ab38392119 add the part that was missing for word textual inversion checksums 2022-09-20 09:53:29 +03:00
AUTOMATIC cae5c5fa8d Making opt split attention the default. Are you upset about this? Sorry. 2022-09-18 20:55:46 +03:00
C43H66N12O12S2 18d6fe4346 ..... 2022-09-18 01:21:50 +03:00