Commit Graph

290 Commits

Author SHA1 Message Date
AUTOMATIC1111 6ad666e479 more changes for #13865: fix formatting, rename the function, add comment and add a readme entry 2023-11-05 19:46:20 +03:00
AUTOMATIC1111 80d639a440 linter 2023-11-05 19:32:21 +03:00
AUTOMATIC1111 ff805d8d0e Merge branch 'dev' into master 2023-11-05 19:30:57 +03:00
Ritesh Gangnani 44c5097375 Use devices.torch_gc() instead of empty_cache() 2023-11-05 20:31:57 +05:30
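The commit above is a pure call-site change: memory cleanup goes through the repo's devices.torch_gc() helper instead of a raw empty_cache(). A minimal sketch of that pattern, with an assumed function name for the call site (only importable when running inside the webui tree):

```python
from modules import devices  # webui's own module; torch_gc() also runs Python GC
                             # and handles non-CUDA backends

def reload_model_weights_example(sd_model, state_dict):
    # Hypothetical call site: after swapping weights, reclaim memory via the shared helper.
    sd_model.load_state_dict(state_dict, strict=False)
    del state_dict
    devices.torch_gc()  # previously a bare torch.cuda.empty_cache() call
```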
Ritesh Gangnani ff1609f91e Add SSD-1B as a supported model 2023-11-05 19:13:49 +05:30
Kohaku-Blueleaf d4d3134f6d ManualCast for 10/16 series gpu 2023-10-28 15:24:26 +08:00
Kohaku-Blueleaf dda067f64d ignore mps for fp8 2023-10-25 19:53:22 +08:00
Kohaku-Blueleaf bf5067f50c Fix alphas cumprod 2023-10-25 12:54:28 +08:00
Kohaku-Blueleaf 4830b25136 Fix alphas_cumprod dtype 2023-10-25 11:53:37 +08:00
Kohaku-Blueleaf 1df6c8bfec fp8 for TE 2023-10-25 11:36:43 +08:00
Kohaku-Blueleaf 9c1eba2af3 Fix lint 2023-10-24 02:11:27 +08:00
Kohaku-Blueleaf eaa9f5162f Add CPU fp8 support
Since norm layers need fp32, I only convert the linear operation layers (Conv2d/Linear).

And the TE has some PyTorch functions that don't support bf16 amp on CPU, so I add a condition to indicate whether the autocast is for the unet.
2023-10-24 01:49:05 +08:00
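A rough, self-contained sketch of the selective conversion described in this commit: only the compute-heavy Linear/Conv2d weights are stored in fp8, while norm layers stay in fp32. It assumes a torch build that exposes torch.float8_e4m3fn and is not the actual webui code:

```python
import torch
import torch.nn as nn

def convert_compute_layers_to_fp8(module: nn.Module) -> nn.Module:
    """Store Linear/Conv2d weights in fp8; skip norm layers, which need fp32."""
    for sub in module.modules():
        if isinstance(sub, (nn.Linear, nn.Conv2d)):
            # At forward time these weights still have to be cast up to a compute
            # dtype (that is what the ManualCast commit above takes care of).
            sub.weight.data = sub.weight.data.to(torch.float8_e4m3fn)
    return module
```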
Kohaku-Blueleaf 5f9ddfa46f Add sdxl only arg 2023-10-19 23:57:22 +08:00
Kohaku-Blueleaf 7c128bbdac Add fp8 for sd unet 2023-10-19 13:56:17 +08:00
AUTOMATIC1111 282903bb67 repair unload sd checkpoint button 2023-10-15 09:41:02 +03:00
AUTOMATIC1111 0619df9835 use shallow copy for #13535 2023-10-14 08:01:04 +03:00
AUTOMATIC1111 7cc96429f2 Merge pull request #13535 from chu8129/dev
fix: checkpoints_loaded:{checkpoint:state_dict}, model.load_state_dict issue in dict value empty
2023-10-14 08:00:04 +03:00
wangqiuwen 770ee23f18 reverst 2023-10-07 15:38:50 +08:00
wangqiuwen 76010a51ef up 2023-10-07 15:36:01 +08:00
AUTOMATIC1111 951842d785 Merge pull request #13139 from AUTOMATIC1111/ckpt-dir-path-separator
fix `--ckpt-dir` path separator and option use `short name` for checkpoint dropdown
2023-09-30 10:02:28 +03:00
AUTOMATIC1111 87b50397a6 add missing import, simplify code, use patches module for #13276 2023-09-30 09:11:31 +03:00
AUTOMATIC1111 e309583f29 Merge pull request #13276 from woweenie/patch-1
patch DDPM.register_betas so that users can put given_betas in model yaml
2023-09-30 09:01:12 +03:00
王秋文/qwwang 8e355fbd75 fix 2023-09-18 16:45:42 +08:00
woweenie d9d94141dc patch DDPM.register_betas so that users can put given_betas in model yaml 2023-09-15 18:59:44 +02:00
qiuwen.wang 813535d38b use dict[key]=model; did not update orderdict order, should use move to end 2023-09-15 18:23:23 +08:00
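The bug described here is the classic OrderedDict-as-LRU pitfall: assigning to an existing key does not move it to the end, so the eviction order goes stale. A minimal sketch (CACHE_LIMIT and the helper name are illustrative; checkpoints_loaded mirrors the dict named in the merge commit above):

```python
from collections import OrderedDict

checkpoints_loaded = OrderedDict()  # checkpoint key -> state_dict
CACHE_LIMIT = 1                     # assumed cache size, for illustration

def cache_state_dict(checkpoint_key, state_dict):
    checkpoints_loaded[checkpoint_key] = state_dict
    # Re-assigning an existing key keeps its old position; mark it as most
    # recently used explicitly, otherwise the wrong entry is evicted below.
    checkpoints_loaded.move_to_end(checkpoint_key)
    while len(checkpoints_loaded) > CACHE_LIMIT:
        checkpoints_loaded.popitem(last=False)  # drop the least recently used entry
```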
w-e-w e4726cccf9 parsing string to path 2023-09-08 09:46:34 +09:00
AUTOMATIC1111 503bd3fc0f keep order in list of checkpoints when loading model that doesn't have a checksum 2023-08-30 08:54:41 +03:00
AUTOMATIC1111 f874b1bcad keep order in list of checkpoints when loading model that doesn't have a checksum 2023-08-30 08:54:31 +03:00
AUTOMATIC1111 0232a987bb set devices.dtype_unet correctly 2023-08-23 07:10:43 +03:00
AUTOMATIC1111 016554e437 add --medvram-sdxl 2023-08-22 18:49:08 +03:00
Uminosachi be301f224d Fix for consistency with shared.opts.sd_vae of UI 2023-08-21 11:28:53 +09:00
Uminosachi 549b0fc526 Change where VAE state are stored in model 2023-08-20 23:06:51 +09:00
Uminosachi af5d2e8e5f Change to access sd_model attribute with dot 2023-08-20 20:08:22 +09:00
Uminosachi 5159edbf0e Store base_vae and loaded_vae_file in sd_model 2023-08-20 19:44:37 +09:00
Uminosachi 042e1d5d0b Fix SD VAE switch error after model reuse 2023-08-20 15:00:14 +09:00
AUTOMATIC1111 0dc74545c0 resolve the issue with loading fp16 checkpoints while using --no-half 2023-08-17 07:54:07 +03:00
AUTOMATIC1111 eaba3d7349 send weights to target device instead of CPU memory 2023-08-16 12:11:01 +03:00
AUTOMATIC1111 57e59c14c8 Revert "send weights to target device instead of CPU memory"
This reverts commit 0815c45bcd.
2023-08-16 11:28:00 +03:00
AUTOMATIC1111 0815c45bcd send weights to target device instead of CPU memory 2023-08-16 10:44:17 +03:00
AUTOMATIC1111 64311faa68 put refiner into main UI, into the new accordions section
add VAE from main model into infotext, not from refiner model
option to make scripts UI without gr.Group
fix inconsistencies with refiner when using samplers that do more denoising than steps
2023-08-12 12:39:59 +03:00
AUTOMATIC1111 ac8a5d18d3 resolve merge issues 2023-08-10 17:04:59 +03:00
AUTOMATIC1111 70a01cd444 Merge branch 'dev' into refiner 2023-08-10 17:04:38 +03:00
AUTOMATIC1111 aa10faa591 fix checkpoint name jumping around in the list of checkpoints for no good reason 2023-08-09 14:47:44 +03:00
AUTOMATIC1111 c8c48640e6 Merge pull request #12426 from AUTOMATIC1111/split_shared
Split shared.py into multiple files
2023-08-09 14:40:06 +03:00
AUTOMATIC1111 386245a264 split shared.py into multiple files; should resolve all circular reference import errors related to shared.py 2023-08-09 10:25:35 +03:00
AUTOMATIC1111 54c3e5c913 Merge branch 'dev' into refiner 2023-08-08 21:49:47 +03:00
Uminosachi 8c200c2156 Fix mismatch between shared.sd_model & shared.opts 2023-08-08 10:48:03 +09:00
AUTOMATIC1111 6e7828e1d2 apply unet overrides after switching model 2023-08-07 08:16:20 +03:00
AUTOMATIC1111 c96e4750d8 SD VAE rework 2
- the setting for preferring opts.sd_vae has been inverted and reworded
- resolve_vae function made easier to read and now returns an object rather than a tuple
- if the checkbox for overriding per-model preferences is checked, opts.sd_vae overrides checkpoint user metadata
- changing VAE in user metadata for currently loaded model immediately applies the selection
2023-08-07 08:07:20 +03:00
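A hedged sketch of the "returns an object rather than a tuple" part of the rework above; the dataclass and its fields are assumptions for illustration, not the actual sd_vae code:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class VaeResolution:
    vae: Optional[str] = None   # path to the chosen VAE file, if any
    source: str = "unknown"     # where the choice came from, for the infotext/log
    resolved: bool = True

def resolve_vae_example(opts_sd_vae, metadata_vae, override_per_model):
    """Pick a VAE, letting opts.sd_vae override per-checkpoint metadata when asked to."""
    if override_per_model and opts_sd_vae:
        return VaeResolution(opts_sd_vae, source="opts.sd_vae (override)")
    if metadata_vae:
        return VaeResolution(metadata_vae, source="checkpoint user metadata")
    if opts_sd_vae:
        return VaeResolution(opts_sd_vae, source="opts.sd_vae")
    return VaeResolution(source="none", resolved=False)
```

Returning a named object instead of a tuple lets callers read result.vae and result.source rather than unpacking by position.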
AUTOMATIC1111 f1975b0213 initial refiner support 2023-08-06 17:01:07 +03:00
AUTOMATIC1111 22ecb78b51 Merge branch 'dev' into multiple_loaded_models 2023-08-05 07:52:29 +03:00
AUTOMATIC1111 0ae2767ae6 Merge pull request #12181 from AUTOMATIC1111/hires_checkpoint
Hires fix change checkpoint
2023-08-05 07:47:34 +03:00
AnyISalIn 24f21583cd fix: prevent cache model.state_dict() after model hijack
Signed-off-by: AnyISalIn <anyisalin@gmail.com>
2023-08-04 11:43:27 +08:00
AUTOMATIC1111 20549a50cb add style editor dialog
rework toprow for img2img and txt2img to use a class with fields
fix the console error when editing checkpoint user metadata
2023-08-03 23:31:13 +03:00
AUTOMATIC1111 390bffa81b repair merge error 2023-08-01 17:13:15 +03:00
AUTOMATIC1111 0c9b1e7969 Merge branch 'dev' into multiple_loaded_models 2023-08-01 16:55:55 +03:00
AUTOMATIC1111 07be13caa3 add metadata to checkpoint merger 2023-08-01 08:27:54 +03:00
AUTOMATIC1111 4b43480fe8 show metadata for SD checkpoints in the extra networks UI 2023-08-01 07:08:11 +03:00
AUTOMATIC1111 b235022c61 option to keep multiple models in memory 2023-08-01 00:24:48 +03:00
AUTOMATIC1111 4d9b096663 additional memory improvements when switching between models of different types 2023-07-31 10:43:31 +03:00
AUTOMATIC1111 3bca90b249 hires fix checkpoint selection 2023-07-30 13:48:27 +03:00
AUTOMATIC1111 0a89cd1a58 Use less RAM when creating models 2023-07-24 22:08:08 +03:00
AUTOMATIC1111 b270ded268 fix the issue with /sdapi/v1/options failing (this time for sure!)
fix automated tests downloading CLIP model
2023-07-18 18:10:04 +03:00
brkirch f0e2098f1a Add support for `--upcast-sampling` with SD XL 2023-07-18 00:39:50 -04:00
AUTOMATIC1111 699108bfbb hide cards for networks of incompatible stable diffusion version in Lora extra networks interface 2023-07-17 18:56:22 +03:00
AUTOMATIC1111 b7dbeda0d9 linter 2023-07-14 09:19:08 +03:00
AUTOMATIC1111 6d8dcdefa0 initial SDXL refiner support 2023-07-14 09:16:01 +03:00
AUTOMATIC1111 6c5f83b19b add support for SDXL loras with te1/te2 modules 2023-07-13 21:17:50 +03:00
AUTOMATIC1111 e16ebc917d repair --no-half for SDXL 2023-07-13 17:32:35 +03:00
AUTOMATIC1111 da464a3fb3 SDXL support 2023-07-12 23:52:43 +03:00
AUTOMATIC1111 af081211ee getting SD2.1 to run on SDXL repo 2023-07-11 21:16:43 +03:00
Aarni Koskela da468a585b Fix typo: checkpoint_alisases 2023-07-08 17:28:42 +03:00
AUTOMATIC1111 da8916f926 added torch.mps.empty_cache() to torch_gc()
changed a bunch of places that use torch.cuda.empty_cache() to use torch_gc() instead
2023-07-08 17:13:18 +03:00
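A minimal, self-contained sketch of a torch_gc-style helper along the lines this commit describes, clearing the MPS cache in addition to the CUDA one (the real modules/devices.py differs in details):

```python
import gc
import torch

def torch_gc_sketch():
    """Run Python GC, then ask the active torch backend to release cached memory."""
    gc.collect()
    if torch.cuda.is_available():
        torch.cuda.empty_cache()
        torch.cuda.ipc_collect()
    if hasattr(torch, "mps") and hasattr(torch.mps, "empty_cache") and torch.backends.mps.is_available():
        torch.mps.empty_cache()
```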
AUTOMATIC 24129368f1 send tensors to the correct device when loading from safetensors file with memmap disabled for #11260 2023-06-27 09:19:04 +03:00
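A small sketch of the idea behind this commit: when memory-mapping is disabled and the .safetensors file is read into RAM, the resulting tensors sit on CPU and need to be moved to the intended device explicitly. safetensors.torch.load / load_file are real APIs; the disable_mmap flag and function name are assumptions:

```python
import safetensors.torch

def read_state_dict_example(filename, device, disable_mmap=False):
    """Load a .safetensors checkpoint and make sure its tensors land on `device`."""
    if disable_mmap:
        # Read the whole file into RAM (avoids slow mmap reads on some setups);
        # load() returns CPU tensors, so move each one to the target device.
        with open(filename, "rb") as f:
            sd = safetensors.torch.load(f.read())
        return {k: v.to(device) for k, v in sd.items()}
    # Default path: let safetensors place tensors on the requested device directly.
    return safetensors.torch.load_file(filename, device=str(device))
```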
AUTOMATIC1111 14196548c5 Merge pull request #11260 from dhwz/dev
fix very slow loading speed of .safetensors files
2023-06-27 09:11:08 +03:00
dhwz 41363e0d27 fix very slow loading speed of .safetensors files 2023-06-16 18:10:15 +02:00
Aarni Koskela 165ab44f03 Use os.makedirs(..., exist_ok=True) 2023-06-13 12:35:43 +03:00
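For reference, the idiom this commit standardizes on, which does not raise if the directory already exists:

```python
import os

model_dir = "models/Stable-diffusion"   # illustrative path
os.makedirs(model_dir, exist_ok=True)   # replaces `if not os.path.exists(...): os.makedirs(...)`
```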
AUTOMATIC f1533de982 assign devices.dtype early because it's needed before the model is loaded 2023-06-01 07:28:20 +03:00
AUTOMATIC1111 d92a6acf0e Merge pull request #10739 from linkoid/fix-ui-debug-mode-exit
Fix --ui-debug-mode exit
2023-05-27 20:02:07 +03:00
AUTOMATIC 339b531570 custom unet support 2023-05-27 15:47:33 +03:00
linkoid 1f0fdede17 Show full traceback in get_sd_model()
to reveal if an error is caused by an extension
2023-05-26 15:25:31 -04:00
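A minimal sketch of what "show full traceback" means in practice here, with assumed names (the webui routes this through its errors helpers):

```python
import traceback

def get_sd_model_example(load_model):
    try:
        return load_model()
    except Exception:
        # Print the complete traceback rather than just the message, so frames coming
        # from extensions/ are visible and it is clear whether an extension is at fault.
        print("loading stable diffusion model: failed", flush=True)
        traceback.print_exc()
        return None
```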
linkoid 3829afec36 Remove exit() from select_checkpoint()
Raising a FileNotFoundError instead.
2023-05-26 15:08:53 -04:00
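A sketch of the behaviour change described above: raise an exception the caller can handle instead of killing the process with exit(). The checkpoint-list structure and fallback are assumed for illustration:

```python
def select_checkpoint_example(checkpoint_title, checkpoints_list):
    info = checkpoints_list.get(checkpoint_title)
    if info is not None:
        return info
    if not checkpoints_list:
        # Raising lets callers (e.g. --ui-debug-mode startup) decide what to do,
        # whereas exit() terminated the whole program.
        raise FileNotFoundError("No checkpoints found; place model files in models/Stable-diffusion.")
    # Requested checkpoint missing: fall back to the first available one.
    return next(iter(checkpoints_list.values()))
```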
AUTOMATIC 3366e494a1 option to pad prompt/neg prompt to be same length 2023-05-22 00:13:53 +03:00
Aarni Koskela 71f4a4afdf Deduplicate webui.py initial-load/reload code 2023-05-19 17:38:42 +03:00
AUTOMATIC cd8a510ca9 if sd_model is None, do not always try to load it 2023-05-18 15:47:43 +03:00
AUTOMATIC 9fd6c1e343 move some settings to the new Optimization page
add slider for token merging for img2img
rework StableDiffusionProcessing to have the token_merging_ratio field
fix a bug with applying png optimizations for live previews when they shouldn't be applied
2023-05-17 20:22:54 +03:00
AUTOMATIC1111 4071fa4a12 Merge pull request #10451 from dennissheng/master
not clear checkpoints cache when config changes
2023-05-17 08:24:56 +03:00
dennissheng 54f657ffbc not clear checkpoints cache when config changes 2023-05-17 10:47:02 +08:00
AUTOMATIC 1a43524018 fix model loading twice in some situations 2023-05-14 13:27:50 +03:00
papuSpartan ac83627a31 heavily simplify 2023-05-13 10:23:42 -05:00
papuSpartan 75b3692920 Merge branch 'dev' of https://github.com/AUTOMATIC1111/stable-diffusion-webui into tomesd 2023-05-11 22:40:17 -05:00
Aarni Koskela 49a55b410b Autofix Ruff W (not W605) (mostly whitespace) 2023-05-11 20:29:11 +03:00
AUTOMATIC 4b854806d9 F401 fixes for ruff 2023-05-10 09:02:23 +03:00
AUTOMATIC f741a98bac imports cleanup for ruff 2023-05-10 08:43:42 +03:00
AUTOMATIC 762265eab5 autofixes from ruff 2023-05-10 07:52:45 +03:00
Aarni Koskela 3ba6c3c83c Fix up string formatting/concatenation to f-strings where feasible 2023-05-09 22:25:39 +03:00
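An example of the mechanical rewrite this commit applies across the codebase, converting concatenation and %-formatting to f-strings (variable names are illustrative):

```python
sha256 = "6ce0161689"
checkpoint_title = "v1-5-pruned-emaonly"

# before: message = "Loading weights [" + sha256 + "] from " + checkpoint_title
# before: message = "Loading weights [%s] from %s" % (sha256, checkpoint_title)
message = f"Loading weights [{sha256}] from {checkpoint_title}"
```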
papuSpartan f08ae96115 resolve merge conflicts and swap to dev branch for now 2023-05-03 02:21:50 -05:00
AUTOMATIC b1717c0a48 do not wait for shared.sd_model to load at startup 2023-05-02 09:08:00 +03:00
papuSpartan dff60e2e74 Update sd_models.py 2023-04-10 04:10:50 -05:00
papuSpartan 5c8e53d5e9 Allow different merge ratios to be used for each pass. Make toggle cmd flag work again. Remove ratio flag. Remove warning about controlnet being incompatible 2023-04-04 02:26:44 -05:00