Commit Graph

38 Commits

Author SHA1 Message Date
brkirch e3b53fd295 Add UI setting for upcasting attention to float32
Adds "Upcast cross attention layer to float32" option in Stable Diffusion settings. This allows for generating images using SD 2.1 models without --no-half or xFormers.

In order to make upcasting cross attention layer optimizations possible it is necessary to indent several sections of code in sd_hijack_optimizations.py so that a context manager can be used to disable autocast. Also, even though Stable Diffusion (and Diffusers) only upcast q and k, unfortunately my findings were that most of the cross attention layer optimizations could not function unless v is upcast also.
2023-01-25 01:13:04 -05:00
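The change above boils down to running the attention math in float32 with autocast disabled, so the upcast is not silently undone. A minimal sketch of the idea, assuming a plain batched-matmul attention path (the function name and structure are illustrative, not the commit's exact code):

```python
import torch

def attention_forward_upcast(q, k, v):
    # Disable autocast so the upcast sticks (adjust device_type as needed),
    # then compute attention entirely in float32. Note that v is upcast too:
    # per the commit, upcasting only q and k was not enough for most of the
    # optimized code paths.
    with torch.autocast(device_type="cuda", enabled=False):
        q, k, v = q.float(), k.float(), v.float()
        scores = torch.bmm(q, k.transpose(-1, -2)) * (q.shape[-1] ** -0.5)
        return torch.bmm(scores.softmax(dim=-1), v)
```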
AUTOMATIC 59146621e2 better support for xformers flash attention on older versions of torch 2023-01-23 16:40:20 +03:00
Takuma Mori 3262e825cc add --xformers-flash-attention option & impl 2023-01-21 17:42:04 +09:00
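xformers normally picks its own kernel; the option above pins the Flash Attention kernel explicitly. A rough sketch of honoring the flag, assuming the op constant name from xformers releases of that period:

```python
import xformers.ops

def xformers_attention(q, k, v, flash_attention=False):
    # With --xformers-flash-attention set, request the Flash Attention op
    # explicitly; otherwise let xformers dispatch to the fastest kernel.
    op = xformers.ops.MemoryEfficientAttentionFlashAttentionOp if flash_attention else None
    return xformers.ops.memory_efficient_attention(q, k, v, attn_bias=None, op=op)
```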
AUTOMATIC 40ff6db532 extra networks UI
rework of hypernetworks: rather than being enabled via settings, hypernetworks are added directly to the prompt as <hypernet:name:weight>
2023-01-21 08:36:07 +03:00
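Since the <hypernet:name:weight> syntax is plain text in the prompt, it can be recognized with a simple pattern. A toy parser covering just this one network type (the webui's actual extra-networks parser is more general):

```python
import re

HYPERNET_RE = re.compile(r"<hypernet:([^:>]+)(?::([0-9.]+))?>")

def extract_hypernets(prompt):
    """Return the prompt with tags stripped, plus (name, weight) pairs."""
    nets = [(m.group(1), float(m.group(2) or 1.0)) for m in HYPERNET_RE.finditer(prompt)]
    return HYPERNET_RE.sub("", prompt).strip(), nets

# extract_hypernets("a castle <hypernet:anime:0.6>")
# -> ("a castle", [("anime", 0.6)])
```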
brkirch c18add68ef Added license 2023-01-06 16:42:47 -05:00
brkirch b95a4c0ce5 Change sub-quad chunk threshold to use percentage 2023-01-06 01:01:51 -05:00
brkirch d782a95967 Add Birch-san's sub-quadratic attention implementation 2023-01-06 00:14:13 -05:00
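Sub-quadratic attention keeps memory low by never materializing the full attention matrix: queries are processed in chunks, and key/value chunks are folded in with a numerically stable running softmax (the Rabe & Staats scheme that Birch-san's implementation builds on). A compact, self-contained sketch of the technique, not the commit's exact code:

```python
import torch

def subquadratic_attention(q, k, v, q_chunk=1024, kv_chunk=4096):
    # q, k, v: (batch, tokens, dim). Only one (q_chunk, kv_chunk)-sized score
    # block exists at a time, instead of the full (tokens, tokens) matrix.
    scale = q.shape[-1] ** -0.5
    out = []
    for i in range((q.shape[1] + q_chunk - 1) // q_chunk):
        q_i = q[:, i * q_chunk:(i + 1) * q_chunk] * scale
        m = torch.full((q_i.shape[0], q_i.shape[1], 1), float("-inf"),
                       device=q.device, dtype=q.dtype)
        num = torch.zeros(q_i.shape[0], q_i.shape[1], v.shape[-1],
                          device=q.device, dtype=q.dtype)
        den = torch.zeros_like(m)
        for j in range(0, k.shape[1], kv_chunk):
            s = q_i @ k[:, j:j + kv_chunk].transpose(-1, -2)
            # Online softmax: rescale previous accumulators when the running
            # row maximum changes, then fold in this key/value chunk.
            m_new = torch.maximum(m, s.amax(dim=-1, keepdim=True))
            rescale = (m - m_new).exp()
            p = (s - m_new).exp()
            num = num * rescale + p @ v[:, j:j + kv_chunk]
            den = den * rescale + p.sum(dim=-1, keepdim=True)
            m = m_new
        out.append(num / den)
    return torch.cat(out, dim=1)
```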
brkirch 35b1775b32 Use other MPS optimization for large q.shape[0] * q.shape[1]
Check whether q.shape[0] * q.shape[1] is 2**18 or larger and, if so, use the lower-memory MPS optimization. This should prevent most crashes that were occurring at certain resolutions (e.g. 1024x1024, 2048x512, 512x2048).

Also included is a check that keeps slice_size from being divisible by 4096, which likewise causes a crash; without it, a crash can occur at 1024x512 or 512x1024 resolution.
2022-12-20 21:30:00 -05:00
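A sketch of the two guards the message describes; the function name and the exact slice_size workaround are illustrative:

```python
def pick_mps_attention(q, slice_size):
    # Large attention shapes crash the default MPS path, so switch to the
    # lower-memory optimization past the threshold the commit identifies.
    use_low_memory = q.shape[0] * q.shape[1] >= 2**18  # e.g. 1024x1024, 2048x512
    # slice_size divisible by 4096 also crashes (e.g. at 1024x512), so nudge it.
    if slice_size % 4096 == 0:
        slice_size -= 1
    return use_low_memory, slice_size
```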
AUTOMATIC 505ec7e4d9 cleanup some unneeded imports for hijack files 2022-12-10 09:17:39 +03:00
AUTOMATIC 7dbfd8a7d8 do not replace entire unet for the resolution hack 2022-12-10 09:14:45 +03:00
Billy Cao adb6cb7619 Patch UNet Forward to support resolutions that are not multiples of 64
Also modified the UI so the width/height sliders no longer step in increments of 64
2022-11-23 18:11:24 +08:00
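One way to support arbitrary resolutions is to pad the latent up to the next size the UNet's downsampling stages divide evenly, then crop afterwards. The actual patch modifies the forward pass itself, but this padding sketch conveys the constraint (one latent unit is 8 image pixels, so 8 latent units correspond to the 64-pixel step):

```python
import torch.nn.functional as F

def unet_forward_padded(unet, x, *args, **kwargs):
    # Pad the latent so height/width divide evenly through the UNet's
    # downsampling stages, run the model, then crop back to the input size.
    h, w = x.shape[-2:]
    pad_h, pad_w = (-h) % 8, (-w) % 8
    if pad_h or pad_w:
        x = F.pad(x, (0, pad_w, 0, pad_h), mode="replicate")
    out = unet(x, *args, **kwargs)
    return out[..., :h, :w]
```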
Cheka 2fd7935ef4 Remove wrong self reference in CUDA support for invokeai 2022-10-19 09:35:53 +03:00
C43H66N12O12S2 c71008c741 Update sd_hijack_optimizations.py 2022-10-18 11:53:04 +03:00
C43H66N12O12S2 84823275e8 readd xformers attnblock 2022-10-18 11:53:04 +03:00
C43H66N12O12S2 2043c4a231 delete xformers attnblock 2022-10-18 11:53:04 +03:00
brkirch 861db783c7 Use apply_hypernetwork function 2022-10-11 17:24:00 +03:00
brkirch 574c8e554a Add InvokeAI and lstein to credits, add back CUDA support 2022-10-11 17:24:00 +03:00
brkirch 98fd5cde72 Add check for psutil 2022-10-11 17:24:00 +03:00
brkirch c0484f1b98 Add cross-attention optimization from InvokeAI
* Add cross-attention optimization from InvokeAI (~30% speed improvement on MPS)
* Add command line option for it
* Make it default when CUDA is unavailable
2022-10-11 17:24:00 +03:00
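The InvokeAI optimization is an einsum-formulated, memory-aware sliced attention; the psutil check added two entries up exists because on CPU and MPS it is free system RAM, not CUDA VRAM, that bounds the slice size. A simplified sketch of the unsliced core and the memory probe (the real code splits the einsum into slices sized from the probe):

```python
import psutil
import torch

def einsum_attention(q, k, v):
    # Core of the InvokeAI-style attention, written with einsum.
    s = torch.einsum("b i d, b j d -> b i j", q, k) * (q.shape[-1] ** -0.5)
    return torch.einsum("b i j, b j d -> b i d", s.softmax(dim=-1), v)

def available_ram_bytes():
    # On CPU/MPS there is no torch.cuda memory bookkeeping, hence psutil.
    return psutil.virtual_memory().available
```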
AUTOMATIC 873efeed49 rename hypernetwork dir to hypernetworks to prevent a clash with an old filename that people who used zip instead of git clone will have 2022-10-11 15:51:30 +03:00
AUTOMATIC 530103b586 fixes related to merge 2022-10-11 14:53:02 +03:00
AUTOMATIC 948533950c replace duplicate code with a function 2022-10-11 11:10:17 +03:00
C43H66N12O12S2 3e7a981194 remove functorch 2022-10-10 19:54:07 +03:00
Fampai 122d42687b Fix VRAM Issue by only loading in hypernetwork when selected in settings 2022-10-09 11:08:11 +03:00
AUTOMATIC e6e42f98df make --force-enable-xformers work without needing --xformers 2022-10-08 22:12:23 +03:00
AUTOMATIC f9c5da1592 add fallback for xformers_attnblock_forward 2022-10-08 19:05:19 +03:00
AUTOMATIC dc1117233e simplify xformers options: --xformers to enable and that's it 2022-10-08 17:02:18 +03:00
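After this change the xformers option surface is intentionally tiny. A sketch of the resulting flags in argparse form (the argparse framing and help strings are assumptions for illustration; --force-enable-xformers is the escape hatch from the commit two entries up):

```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument("--xformers", action="store_true",
                    help="enable xformers for cross attention layers")
parser.add_argument("--force-enable-xformers", action="store_true",
                    help="enable xformers even when the compatibility check would reject it")
```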
AUTOMATIC 7ff1170a2e emergency fix for xformers (continue + shared) 2022-10-08 16:33:39 +03:00
AUTOMATIC1111 48feae37ff Merge pull request #1851 from C43H66N12O12S2/flash
xformers attention
2022-10-08 16:29:59 +03:00
C43H66N12O12S2 69d0053583 update sd_hijack_opt to respect new env variables 2022-10-08 16:21:40 +03:00
C43H66N12O12S2 76a616fa6b Update sd_hijack_optimizations.py 2022-10-08 11:55:38 +03:00
C43H66N12O12S2 5d54f35c58 add xformers attnblock and hypernetwork support 2022-10-08 11:55:02 +03:00
brkirch f2055cb1d4 Add hypernetwork support to split cross attention v1
* Add hypernetwork support to split_cross_attention_forward_v1
* Fix device check in esrgan_model.py to use devices.device_esrgan instead of shared.device
2022-10-08 09:39:17 +03:00
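Hypernetworks hook into cross attention by transforming the context before the k and v projections, which is what "hypernetwork support" means in these attention commits; the apply_hypernetwork helper a few entries up centralizes the pattern. A simplified sketch (the layer-lookup structure is an approximation of the webui's):

```python
def apply_hypernetwork(hypernetwork, context):
    # Route the attention context through the hypernetwork's module pair for
    # this embedding width (if one is loaded), yielding separate k/v inputs.
    layers = hypernetwork.layers.get(context.shape[-1]) if hypernetwork else None
    if layers is None:
        return context, context
    return layers[0](context), layers[1](context)

# Inside a cross attention forward (webui names assumed):
#   context_k, context_v = apply_hypernetwork(shared.loaded_hypernetwork, context)
#   k = self.to_k(context_k)
#   v = self.to_v(context_v)
```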
C43H66N12O12S2 c9cc65b201 switch to the proper way of calling xformers 2022-10-08 04:09:18 +03:00
AUTOMATIC bad7cb29ce added support for hypernetworks (???) 2022-10-07 10:17:52 +03:00
C43H66N12O12S2 f174fb2922 add xformers attention 2022-10-07 05:21:49 +03:00
Jairo Correa ad0cc85d1f Merge branch 'master' into stable 2022-10-02 18:31:19 -03:00
AUTOMATIC 820f1dc96b initial support for training textual inversion 2022-10-02 15:03:39 +03:00