Commit Graph

86 Commits

Author SHA1 Message Date
C43H66N12O12S2 5e3ff846c5 Update sd_hijack.py 2022-10-07 06:38:01 +03:00
C43H66N12O12S2 5303df2428 Update sd_hijack.py 2022-10-07 06:01:14 +03:00
C43H66N12O12S2 35d6b23162 Update sd_hijack.py 2022-10-07 05:31:53 +03:00
C43H66N12O12S2 2eb911b056 Update sd_hijack.py 2022-10-07 05:22:28 +03:00
Jairo Correa ad0cc85d1f Merge branch 'master' into stable 2022-10-02 18:31:19 -03:00
AUTOMATIC 88ec0cf557 fix for incorrect embedding token length calculation (will break seeds that use embeddings, you're welcome!); add option to input initialization text for embeddings 2022-10-02 19:40:51 +03:00
AUTOMATIC 820f1dc96b initial support for training textual inversion 2022-10-02 15:03:39 +03:00
Jairo Correa ad1fbbae93 Merge branch 'master' into fix-vram 2022-09-30 18:58:51 -03:00
AUTOMATIC 98cc6c6e74 add embeddings dir 2022-09-30 14:16:26 +03:00
AUTOMATIC c715ef04d1 fix for incorrect model weight loading for #814 2022-09-29 15:40:28 +03:00
AUTOMATIC c1c27dad3b new implementation for attention/emphasis 2022-09-29 11:31:48 +03:00
Jairo Correa c2d5b29040 Move silu to sd_hijack 2022-09-29 01:16:25 -03:00
Liam e5707b66d6 switched the token counter to use hidden buttons instead of api call 2022-09-27 19:29:53 -04:00
Liam 5034f7d759 added token counter next to txt2img and img2img prompts 2022-09-27 15:56:18 -04:00
AUTOMATIC 073f6eac22 potential fix for embeddings not loading on AMD cards 2022-09-25 15:04:39 +03:00
guaneec 615b2fc9ce Fix token max length 2022-09-25 09:30:02 +03:00
AUTOMATIC 254da5d127 --opt-split-attention now on by default for torch.cuda, off for others (CPU and MPS; because the option does not work there according to reports) 2022-09-21 09:49:02 +03:00
AUTOMATIC 1578859305 fix for too large embeddings causing an error 2022-09-21 00:20:11 +03:00
AUTOMATIC 90401d96a6 fix an off-by-one error with embedding at the start of the sentence 2022-09-20 12:12:31 +03:00
AUTOMATIC ab38392119 add the part that was missing for word textual inversion checksums 2022-09-20 09:53:29 +03:00
AUTOMATIC cae5c5fa8d Making opt split attention the default. Are you upset about this? Sorry. 2022-09-18 20:55:46 +03:00
C43H66N12O12S2 18d6fe4346 ..... 2022-09-18 01:21:50 +03:00
C43H66N12O12S2 d63dbb3acc Move scale multiplication to the front 2022-09-18 01:05:31 +03:00
C43H66N12O12S2 72d7f8c761 fix typo 2022-09-15 14:14:27 +03:00
C43H66N12O12S2 7ec6282ec2 pass dtype to torch.zeros as well 2022-09-15 14:14:27 +03:00
C43H66N12O12S2 3b1b1444d4 Complete cross attention update 2022-09-13 14:29:56 +03:00
C43H66N12O12S2 aaea8b4494 Update cross attention to the newest version 2022-09-12 16:48:21 +03:00
AUTOMATIC 06fadd2dc5 added --opt-split-attention-v1 2022-09-11 00:29:10 +03:00
AUTOMATIC c92f2ff196 Update to cross attention from https://github.com/Doggettx/stable-diffusion #219 2022-09-10 12:06:19 +03:00
AUTOMATIC 62ce77e245 support for sd-concepts as alternatives for textual inversion #151 2022-09-08 15:36:50 +03:00
xeonvs ba1124b326 directly convert list to tensor 2022-09-07 20:40:32 +02:00
xeonvs 65fbefd033 Added support for launching on Apple Silicon 2022-09-07 15:58:25 +02:00
AUTOMATIC a8a58dbac7 re-integrated tiling option as a UI element 2022-09-05 03:25:37 +03:00
AUTOMATIC f91d0c3d19 add an option to enable tiling image generation 2022-09-05 02:16:36 +03:00
AUTOMATIC 5bb126bd89 add split attention layer optimization from https://github.com/basujindal/stable-diffusion/pull/117 2022-09-05 01:41:20 +03:00
AUTOMATIC 345028099d split codebase into multiple files; to anyone this affects negatively: sorry 2022-09-03 12:08:45 +03:00
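
Several commits above (5bb126bd89, 06fadd2dc5, 254da5d127, cae5c5fa8d) revolve around the split-attention VRAM optimization taken from basujindal/stable-diffusion PR #117. As a minimal sketch, assuming PyTorch — the function name, slicing strategy, and slice_size parameter here are illustrative, not the repository's actual API — the idea is to compute the query–key similarity matrix in slices so the largest intermediate never fully materializes. Note the scale being multiplied into q before the matmul, matching commit d63dbb3acc ("Move scale multiplication to the front"):

```python
import torch

def sliced_attention(q, k, v, slice_size=2):
    # Hypothetical sketch of split attention: q, k, v are shaped
    # (batch*heads, tokens, dim_head); only `slice_size` rows of the
    # batch dimension are attended to at a time to cap peak memory.
    scale = q.shape[-1] ** -0.5
    out = torch.empty_like(q)
    for i in range(0, q.shape[0], slice_size):
        s = slice(i, i + slice_size)
        # similarity for this slice only: (slice, tokens, tokens);
        # scale is applied to q up front rather than to the product
        sim = torch.einsum('bid,bjd->bij', q[s] * scale, k[s])
        attn = sim.softmax(dim=-1)
        del sim  # free the largest intermediate before the next matmul
        out[s] = torch.einsum('bij,bjd->bid', attn, v[s])
    return out
```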