Commit Graph

56 Commits

Author SHA1 Message Date
Fampai 1824e9ee3a Removed unnecessary tmp variable 2022-10-09 22:31:23 +03:00
Fampai ad3ae44108 Updated code for legibility 2022-10-09 22:31:23 +03:00
Fampai e59c66c008 Optimized code for Ignoring last CLIP layers 2022-10-09 22:31:23 +03:00
Fampai 1371d7608b Added ability to ignore last n layers in FrozenCLIPEmbedder 2022-10-08 22:10:37 +03:00
AUTOMATIC 3061cdb7b6 add --force-enable-xformers option and also add messages to console regarding cross attention optimizations 2022-10-08 19:22:15 +03:00
C43H66N12O12S2 cc0258aea7 check for ampere without destroying the optimizations. again. 2022-10-08 17:54:16 +03:00
C43H66N12O12S2 017b6b8744 check for ampere 2022-10-08 17:54:16 +03:00
AUTOMATIC cfc33f99d4 why did you do this 2022-10-08 17:29:06 +03:00
AUTOMATIC 27032c47df restore old opt_split_attention/disable_opt_split_attention logic 2022-10-08 17:10:05 +03:00
AUTOMATIC dc1117233e simplify xformers options: --xformers to enable and that's it 2022-10-08 17:02:18 +03:00
AUTOMATIC1111 48feae37ff Merge pull request #1851 from C43H66N12O12S2/flash: xformers attention 2022-10-08 16:29:59 +03:00
C43H66N12O12S2 970de9ee68 Update sd_hijack.py 2022-10-08 16:29:43 +03:00
C43H66N12O12S2 26b459a379 default to split attention if cuda is available and xformers is not 2022-10-08 16:20:04 +03:00
MrCheeze 5f85a74b00 fix bug where when using prompt composition, hijack_comments generated before the final AND will be dropped 2022-10-08 15:48:04 +03:00
AUTOMATIC 77f4237d1c fix bugs related to variable prompt lengths 2022-10-08 15:25:59 +03:00
AUTOMATIC 4999eb2ef9 do not let user choose his own prompt token count limit 2022-10-08 14:25:47 +03:00
AUTOMATIC 706d5944a0 let user choose his own prompt token count limit 2022-10-08 13:38:57 +03:00
C43H66N12O12S2 91d66f5520 use new attnblock for xformers path 2022-10-08 11:56:01 +03:00
C43H66N12O12S2 b70eaeb200 delete broken and unnecessary aliases 2022-10-08 04:10:35 +03:00
AUTOMATIC f7c787eb7c make it possible to use hypernetworks without opt split attention 2022-10-07 16:39:51 +03:00
C43H66N12O12S2 5e3ff846c5 Update sd_hijack.py 2022-10-07 06:38:01 +03:00
C43H66N12O12S2 5303df2428 Update sd_hijack.py 2022-10-07 06:01:14 +03:00
C43H66N12O12S2 35d6b23162 Update sd_hijack.py 2022-10-07 05:31:53 +03:00
C43H66N12O12S2 2eb911b056 Update sd_hijack.py 2022-10-07 05:22:28 +03:00
Jairo Correa ad0cc85d1f Merge branch 'master' into stable 2022-10-02 18:31:19 -03:00
AUTOMATIC 88ec0cf557 fix for incorrect embedding token length calculation (will break seeds that use embeddings, you're welcome!); add option to input initialization text for embeddings 2022-10-02 19:40:51 +03:00
AUTOMATIC 820f1dc96b initial support for training textual inversion 2022-10-02 15:03:39 +03:00
Jairo Correa ad1fbbae93 Merge branch 'master' into fix-vram 2022-09-30 18:58:51 -03:00
AUTOMATIC 98cc6c6e74 add embeddings dir 2022-09-30 14:16:26 +03:00
AUTOMATIC c715ef04d1 fix for incorrect model weight loading for #814 2022-09-29 15:40:28 +03:00
AUTOMATIC c1c27dad3b new implementation for attention/emphasis 2022-09-29 11:31:48 +03:00
Jairo Correa c2d5b29040 Move silu to sd_hijack 2022-09-29 01:16:25 -03:00
Liam e5707b66d6 switched the token counter to use hidden buttons instead of api call 2022-09-27 19:29:53 -04:00
Liam 5034f7d759 added token counter next to txt2img and img2img prompts 2022-09-27 15:56:18 -04:00
AUTOMATIC 073f6eac22 potential fix for embeddings not loading on AMD cards 2022-09-25 15:04:39 +03:00
guaneec 615b2fc9ce Fix token max length 2022-09-25 09:30:02 +03:00
AUTOMATIC 254da5d127 --opt-split-attention now on by default for torch.cuda, off for others (cpu and MPS; because the option does not work there according to reports) 2022-09-21 09:49:02 +03:00
AUTOMATIC 1578859305 fix for too large embeddings causing an error 2022-09-21 00:20:11 +03:00
AUTOMATIC 90401d96a6 fix an off-by-one error with embedding at the start of the sentence 2022-09-20 12:12:31 +03:00
AUTOMATIC ab38392119 add the part that was missing for word textual inversion checksums 2022-09-20 09:53:29 +03:00
AUTOMATIC cae5c5fa8d Making opt split attention the default. Are you upset about this? Sorry. 2022-09-18 20:55:46 +03:00
C43H66N12O12S2 18d6fe4346 ..... 2022-09-18 01:21:50 +03:00
C43H66N12O12S2 d63dbb3acc Move scale multiplication to the front 2022-09-18 01:05:31 +03:00
C43H66N12O12S2 72d7f8c761 fix typo 2022-09-15 14:14:27 +03:00
C43H66N12O12S2 7ec6282ec2 pass dtype to torch.zeros as well 2022-09-15 14:14:27 +03:00
C43H66N12O12S2 3b1b1444d4 Complete cross attention update 2022-09-13 14:29:56 +03:00
C43H66N12O12S2 aaea8b4494 Update cross attention to the newest version 2022-09-12 16:48:21 +03:00
AUTOMATIC 06fadd2dc5 added --opt-split-attention-v1 2022-09-11 00:29:10 +03:00
AUTOMATIC c92f2ff196 Update to cross attention from https://github.com/Doggettx/stable-diffusion #219 2022-09-10 12:06:19 +03:00
AUTOMATIC 62ce77e245 support for sd-concepts as alternatives for textual inversion #151 2022-09-08 15:36:50 +03:00