Commit Graph

103 Commits

Author SHA1 Message Date
AUTOMATIC 1a23dc32ac possible fix for fallback for fast model creation from config, attempt 2 2023-01-11 10:34:36 +03:00
AUTOMATIC 4fdacd31e4 possible fix for fallback for fast model creation from config 2023-01-11 10:24:56 +03:00
AUTOMATIC 0f8603a559 add support for transformers==4.25.1
add fallback for when quick model creation fails
2023-01-10 17:46:59 +03:00
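The fallback added here is, in outline, a try/except around the fast construction path. A minimal sketch, assuming `instantiate_from_config` from ldm.util and the `DisableInitialization` helper sketched under the next commit:

```
from ldm.util import instantiate_from_config

def create_sd_model(config):
    try:
        # fast path: build the model with torch weight init disabled,
        # since loading the checkpoint overwrites the weights anyway
        with DisableInitialization():
            return instantiate_from_config(config.model)
    except Exception:
        # some model classes do not tolerate the fast path; fall back
        # to ordinary, fully initialized construction
        return instantiate_from_config(config.model)
```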
AUTOMATIC ce3f639ec8 add more stuff to ignore when creating model from config
prevent .vae.safetensors files from being listed as stable diffusion models
2023-01-10 16:51:04 +03:00
AUTOMATIC 0c3feb202c disable torch weight initialization and CLIP downloading/reading checkpoint to speedup creating sd model from config 2023-01-10 14:08:29 +03:00
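Disabling weight initialization works by temporarily swapping torch's init routines for no-ops inside a context manager. A rough sketch of the idea, not the commit's exact code (note that `torch.nn.init._no_grad_normal_` is a private torch function):

```
import torch

def do_nothing(*args, **kwargs):
    # stands in for the init routines; the checkpoint load supplies
    # the real weights afterwards
    pass

class DisableInitialization:
    def __enter__(self):
        self.init_kaiming = torch.nn.init.kaiming_uniform_
        self.init_normal = torch.nn.init._no_grad_normal_
        torch.nn.init.kaiming_uniform_ = do_nothing
        torch.nn.init._no_grad_normal_ = do_nothing

    def __exit__(self, exc_type, exc_value, tb):
        # always restore the real functions, even if creation failed
        torch.nn.init.kaiming_uniform_ = self.init_kaiming
        torch.nn.init._no_grad_normal_ = self.init_normal
```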
Vladimir Mandic 552d7b90bf allow model load if previous model failed 2023-01-09 18:34:26 -05:00
AUTOMATIC 642142556d use commandline-supplied cuda device name instead of cuda:0 for safetensors PR that doesn't fix anything 2023-01-04 15:09:53 +03:00
AUTOMATIC 68fbf4558f Merge remote-tracking branch 'Narsil/fix_safetensors_load_speed' 2023-01-04 14:53:03 +03:00
AUTOMATIC 0cd6399b8b fix broken inpainting model 2023-01-04 14:29:13 +03:00
AUTOMATIC 8d8a05a3bb find configs for models at runtime rather than when starting 2023-01-04 12:47:42 +03:00
AUTOMATIC 02d7abf514 helpful error message when trying to load 2.0 without config
failing to load model weights from settings won't break generation for currently loaded model anymore
2023-01-04 12:35:07 +03:00
AUTOMATIC 8f96f92899 call script callbacks for reloaded model after loading embeddings 2023-01-03 18:39:14 +03:00
AUTOMATIC 311354c0bb fix the issue with training on SD2.0 2023-01-02 00:38:09 +03:00
Vladimir Mandic f55ac33d44 validate textual inversion embeddings 2022-12-31 11:27:02 -05:00
Nicolas Patry 5ba04f9ec0 Attempting to solve slow loads for `safetensors`.
Fixes #5893
2022-12-27 11:27:19 +01:00
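Judging by this commit and the cuda:0 follow-up above it in this log, the attempted fix maps tensors straight onto the target device instead of loading them on CPU and moving them afterwards. A sketch of that call; the device string stands in for whatever the user configured:

```
import safetensors.torch

# loading directly onto the GPU skips a slow CPU round-trip
state_dict = safetensors.torch.load_file("model.safetensors", device="cuda:0")
```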
Yuval Aboulafia 3bf5591efe fix F541 f-string without any placeholders 2022-12-24 21:35:29 +02:00
linuxmobile ( リナックス ) 5a650055de Removed length check in sd_model at line 115
Commit eba60a4 is what causes this error; deleting the length check in sd_model starting at line 115 fixes it.

https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/5971#issuecomment-1364507379
2022-12-24 09:25:35 -03:00
AUTOMATIC1111 eba60a42eb Merge pull request #5627 from deanpress/patch-1
fix: fallback model_checkpoint if it's empty
2022-12-24 12:20:31 +03:00
MrCheeze ec0a48826f unconditionally set use_ema=False if value not specified (True never worked, and all configs except v1-inpainting-inference.yaml already correctly set it to False) 2022-12-11 11:18:34 -05:00
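A sketch of applying such a default to a loaded config. The repo's model configs are OmegaConf YAML files; the exact code here is illustrative:

```
from omegaconf import OmegaConf

config = OmegaConf.load("v1-inference.yaml")

# EMA weights never loaded correctly, so default the flag off whenever
# the config does not set it explicitly
if config.model.params.get("use_ema") is None:
    config.model.params.use_ema = False
```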
Dean van Dugteren 59c6511494 fix: fallback model_checkpoint if it's empty
This fixes the following error when SD attempts to start with a deleted checkpoint:

```
Traceback (most recent call last):
  File "D:\Web\stable-diffusion-webui\launch.py", line 295, in <module>
    start()
  File "D:\Web\stable-diffusion-webui\launch.py", line 290, in start
    webui.webui()
  File "D:\Web\stable-diffusion-webui\webui.py", line 132, in webui
    initialize()
  File "D:\Web\stable-diffusion-webui\webui.py", line 62, in initialize
    modules.sd_models.load_model()
  File "D:\Web\stable-diffusion-webui\modules\sd_models.py", line 283, in load_model
    checkpoint_info = checkpoint_info or select_checkpoint()
  File "D:\Web\stable-diffusion-webui\modules\sd_models.py", line 117, in select_checkpoint
    checkpoint_info = checkpoints_list.get(model_checkpoint, None)
TypeError: unhashable type: 'list'
```
2022-12-11 17:08:51 +01:00
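The traceback shows the saved setting arriving as a list (unhashable) where a checkpoint title string was expected. A self-contained sketch of the guard's shape, with the globals from modules/sd_models.py passed in as parameters:

```
def select_checkpoint(model_checkpoint, checkpoints_list):
    # a deleted checkpoint can leave the saved setting as an empty
    # list, which is unhashable and crashed checkpoints_list.get()
    if not isinstance(model_checkpoint, str):
        model_checkpoint = ""
    info = checkpoints_list.get(model_checkpoint)
    if info is None and checkpoints_list:
        # fall back to any known checkpoint rather than failing to start
        info = next(iter(checkpoints_list.values()))
    return info
```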
MrCheeze bd81a09eac fix support for 2.0 inpainting model while maintaining support for 1.5 inpainting model 2022-12-10 11:29:26 -05:00
AUTOMATIC1111 ec5e072124 Merge pull request #4841 from R-N/vae-fix-none
Fix None option of VAE selector
2022-12-10 09:58:20 +03:00
Jay Smith 1ed4f0e228 Depth2img model support 2022-12-08 20:50:08 -06:00
AUTOMATIC 0376da180c make it possible to save nai model using safetensors 2022-11-28 08:39:59 +03:00
AUTOMATIC dac9b6f15d add safetensors support for model merging #4869 2022-11-27 15:51:29 +03:00
AUTOMATIC 6074175faa add safetensors to requirements 2022-11-27 14:46:40 +03:00
AUTOMATIC1111 f108782e30 Merge pull request #4930 from Narsil/allow_to_load_safetensors_file
Supporting `*.safetensors` format.
2022-11-27 14:36:55 +03:00
MrCheeze 1e506657e1 no-half support for SD 2.0 2022-11-26 13:28:44 -05:00
Nicolas Patry 0efffbb407 Supporting `*.safetensors` format.
If a model file exists with extension `.safetensors` then we can load it
more safely than with PyTorch weights.
2022-11-21 14:04:25 +01:00
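Support amounts to dispatching on the file extension; safetensors files hold raw tensors with no pickled Python objects, so loading them cannot execute code from the file. A sketch of a loader in that spirit:

```
import torch
import safetensors.torch

def read_state_dict(path, device="cpu"):
    if path.endswith(".safetensors"):
        # raw tensors only; no pickle, so nothing executable inside
        return safetensors.torch.load_file(path, device=device)
    return torch.load(path, map_location=device)
```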
Muhammad Rizqi Nur 8662b5e57f Merge branch 'a1111' into vae-fix-none 2022-11-19 16:38:21 +07:00
Muhammad Rizqi Nur 2c5ca706a7 Remove no longer necessary parts and add vae_file safeguard 2022-11-19 12:01:41 +07:00
Muhammad Rizqi Nur c7be83bf02 Misc 2022-11-19 11:44:37 +07:00
Muhammad Rizqi Nur abc1e79a5d Fix base VAE caching being done after loading VAE; also add a safeguard 2022-11-19 11:41:41 +07:00
cluder eebf49592a restore #4035 behavior
- if checkpoint cache is set to 1, keep 2 models in cache (current +1 more)
2022-11-09 07:17:09 +01:00
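The webui keeps loaded state dicts in an ordered dict, so "cache size 1" means the current model plus one previous. A sketch of that eviction rule, using the (checkpoint, VAE) tuple key introduced by an earlier commit in this history:

```
import collections

checkpoints_loaded = collections.OrderedDict()

def cache_checkpoint(checkpoint_file, vae_file, state_dict, cache_size=1):
    # key by (checkpoint, vae) so the same weights paired with
    # different VAEs are cached separately
    checkpoints_loaded[(checkpoint_file, vae_file)] = state_dict
    # cache_size=1 keeps the current model plus one more (#4035)
    while len(checkpoints_loaded) > cache_size + 1:
        checkpoints_loaded.popitem(last=False)  # evict the oldest entry
```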
cluder 3b51d239ac - do not use ckpt cache if disabled
- cache model after it has been loaded from file
2022-11-09 05:43:57 +01:00
AUTOMATIC 99043f3360 fix one of previous merges breaking the program 2022-11-04 11:20:42 +03:00
AUTOMATIC1111 24fc05cf57 Merge branch 'master' into fix-ckpt-cache 2022-11-04 10:54:17 +03:00
digburn 3780ad3ad8 fix: loading models without vae from cache 2022-11-04 00:43:00 +00:00
Muhammad Rizqi Nur fb3b564801 Merge branch 'master' into fix-ckpt-cache 2022-11-02 20:53:41 +07:00
AUTOMATIC f2a5cbe6f5 fix #3986 breaking --no-half-vae 2022-11-02 14:41:29 +03:00
Muhammad Rizqi Nur 056f06d373 Reload VAE without reloading sd checkpoint 2022-11-02 12:51:46 +07:00
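Swapping the VAE only needs to touch the model's first_stage_model submodule, so the rest of the checkpoint stays loaded. A sketch, assuming the usual VAE .ckpt layout with a "state_dict" key:

```
import torch

def load_vae(model, vae_file):
    vae_ckpt = torch.load(vae_file, map_location="cpu")
    # drop training-only loss keys that the inference model lacks
    vae_sd = {k: v for k, v in vae_ckpt["state_dict"].items()
              if not k.startswith("loss")}
    model.first_stage_model.load_state_dict(vae_sd)
```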
Muhammad Rizqi Nur f8c6468d42 Merge branch 'master' into vae-picker 2022-11-02 00:25:08 +07:00
Jairo Correa af758e97fa Unload sd_model before loading the other 2022-11-01 04:01:49 -03:00
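Releasing the old model before constructing the new one keeps only a single model resident at a time. A rough sketch of the idea:

```
import gc
import torch

sd_model = None  # module-level handle, as in modules/shared.py

def swap_model(load_new_model):
    global sd_model
    sd_model = None              # drop the only reference to the old model
    gc.collect()                 # reclaim its host RAM
    if torch.cuda.is_available():
        torch.cuda.empty_cache() # return freed VRAM to the allocator
    sd_model = load_new_model()  # only now build the replacement
```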
Muhammad Rizqi Nur bf7a699845 Fix #4035 for real now 2022-10-31 16:27:27 +07:00
Muhammad Rizqi Nur 36966e3200 Fix #4035 2022-10-31 15:38:58 +07:00
Muhammad Rizqi Nur 726769da35 Checkpoint cache by combination key of checkpoint and vae 2022-10-31 15:22:03 +07:00
Muhammad Rizqi Nur cb31abcf58 Settings to select VAE 2022-10-30 21:54:31 +07:00
AUTOMATIC1111 9553a7e071 Merge pull request #3818 from jwatzman/master
Reduce peak memory usage when changing models
2022-10-29 09:16:00 +03:00
Antonio 5d5dc64064 Natural sorting for dropdown checkpoint list
Example:

Before                        After

11.ckpt                       11.ckpt
ab.ckpt                       ab.ckpt
ade_pablo_step_1000.ckpt      ade_pablo_step_500.ckpt
ade_pablo_step_500.ckpt       ade_pablo_step_1000.ckpt
ade_step_1000.ckpt            ade_step_500.ckpt
ade_step_1500.ckpt            ade_step_1000.ckpt
ade_step_2000.ckpt            ade_step_1500.ckpt
ade_step_2500.ckpt            ade_step_2000.ckpt
ade_step_3000.ckpt            ade_step_2500.ckpt
ade_step_500.ckpt             ade_step_3000.ckpt
atp_step_5500.ckpt            atp_step_5500.ckpt
model1.ckpt                   model1.ckpt
model10.ckpt                  model10.ckpt
model1000.ckpt                model33.ckpt
model33.ckpt                  model50.ckpt
model400.ckpt                 model400.ckpt
model50.ckpt                  model1000.ckpt
moo44.ckpt                    moo44.ckpt
v1-4-pruned-emaonly.ckpt      v1-4-pruned-emaonly.ckpt
v1-5-pruned-emaonly.ckpt      v1-5-pruned-emaonly.ckpt
v1-5-pruned.ckpt              v1-5-pruned.ckpt
v1-5-vae.ckpt                 v1-5-vae.ckpt
2022-10-28 05:49:39 +02:00
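Natural sorting compares digit runs as numbers rather than character by character. A minimal sketch of such a sort key (not necessarily the commit's exact implementation):

```
import re

def natural_sort_key(name):
    # split "model1000.ckpt" into ["model", 1000, ".ckpt"] so numeric
    # runs compare as integers
    return [int(part) if part.isdigit() else part.lower()
            for part in re.split(r"(\d+)", name)]

names = ["model1000.ckpt", "model33.ckpt", "model1.ckpt", "model10.ckpt"]
print(sorted(names, key=natural_sort_key))
# ['model1.ckpt', 'model10.ckpt', 'model33.ckpt', 'model1000.ckpt']
```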
Josh Watzman b50ff4f4e4 Reduce peak memory usage when changing models
A few tweaks to reduce peak memory usage, the biggest being that if we
aren't using the checkpoint cache, we shouldn't duplicate the model
state dict just to immediately throw it away.

On my machine with 16GB of RAM, this change means I can typically change
models, whereas before it would typically OOM.
2022-10-27 22:01:06 +01:00
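The biggest win described above is skipping the state-dict copy when the cache is off. A simplified sketch of that shape, not the commit verbatim:

```
import torch

def load_model_weights(model, checkpoint_file, cache=None):
    pl_sd = torch.load(checkpoint_file, map_location="cpu")
    sd = pl_sd.get("state_dict", pl_sd)
    if cache is not None:
        # only pay for a second copy of the weights when the cache
        # will actually keep it
        cache[checkpoint_file] = sd.copy()
    model.load_state_dict(sd, strict=False)
    del pl_sd, sd  # release the host-RAM copies promptly
```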