Commit Graph

113 Commits

Author SHA1 Message Date
AUTOMATIC1111 1574e96729
Merge pull request #6510 from brkirch/unet16-upcast-precision
Add upcast options: full-precision sampling from a float16 UNet and upcasting attention, for inference with SD 2.1 models without --no-half
2023-01-25 19:12:29 +03:00
Kyle ee0a0da324 Add instruct-pix2pix hijack
Allows loading instruct-pix2pix models via the same method as inpainting models, in sd_models.py and sd_hijack_ip2p.py

Adds ddpm_edit.py, which is necessary for instruct-pix2pix
2023-01-25 08:53:23 -05:00
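
These model types can be told apart from the checkpoint itself: the UNet's first convolution takes 4 latent channels for base models, 9 for inpainting models, and 8 for instruct-pix2pix models. A minimal sketch of that detection, assuming the standard LDM state-dict key layout; this is an illustration, not the repository's exact routing code.

```python
# Hypothetical sketch: infer a model "flavor" from the checkpoint itself.
# Assumes the standard LDM state-dict layout; not the repository's exact code.
def guess_model_kind(state_dict: dict) -> str:
    """Guess whether a checkpoint is a base, inpainting, or instruct-pix2pix model."""
    conv_in = state_dict.get("model.diffusion_model.input_blocks.0.0.weight")
    if conv_in is None:
        return "unknown"

    in_channels = conv_in.shape[1]
    if in_channels == 9:   # latent + mask + masked-image latent
        return "inpainting"
    if in_channels == 8:   # latent + conditioning-image latent
        return "instruct-pix2pix"
    return "base"          # plain 4-channel latent input


# Usage (illustrative):
#   state_dict = torch.load("model.ckpt", map_location="cpu")["state_dict"]
#   kind = guess_model_kind(state_dict)  # then pick the matching config/hijack
```
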
brkirch 84d9ce30cb Add option for float32 sampling with float16 UNet
This also handles type casting so that ROCm and MPS torch devices work correctly without --no-half. One cast is required for deepbooru in deepbooru_model.py, and some explicit casting is required for img2img and inpainting. depth_model can't be converted to float16 or it won't work correctly on some systems (it's known to have issues on MPS), so in sd_models.py model.depth_model is removed before model.half() is called.
2023-01-25 01:13:02 -05:00
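
One way to get float32 sampling on top of a float16 UNet is to cast at the model boundary: downcast the inputs to the UNet's own dtype and upcast its prediction back, so the sampler and everything around it can stay in float32. A minimal sketch of that idea, with the wrapper name and forward signature chosen for illustration; the actual change is wired in differently and also covers deepbooru, img2img, and inpainting, as the message above notes.

```python
# Sketch: let samplers run in float32 while the UNet itself stays float16.
# Wrapper name and forward signature are illustrative assumptions.
import torch


class Float32SamplingWrapper(torch.nn.Module):
    def __init__(self, unet: torch.nn.Module):
        super().__init__()
        self.unet = unet
        self.inner_dtype = next(unet.parameters()).dtype  # e.g. torch.float16

    def forward(self, x, timesteps, context):
        # Downcast inputs to the UNet's dtype, run it, upcast the prediction.
        out = self.unet(x.to(self.inner_dtype), timesteps, context.to(self.inner_dtype))
        return out.float()
```
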
AUTOMATIC c1928cdd61 bring back short hashes to sd checkpoint selection 2023-01-19 18:58:08 +03:00
AUTOMATIC a5bbcd2153 fix bug with "Ignore selected VAE for..." option completely disabling VAE selection
rework VAE resolving code to be simpler
2023-01-14 19:56:09 +03:00
AUTOMATIC 08c6f009a5 load hashes from cache for checkpoints that have them
add checkpoint hash to footer
2023-01-14 15:55:40 +03:00
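
A hash cache like this can be as small as a JSON file keyed by checkpoint path, with the file's mtime used to invalidate stale entries. A rough sketch under those assumptions; the file name and layout here are illustrative, not the webui's actual cache format.

```python
# Sketch: look up a checkpoint's sha256 in a small JSON cache keyed by path,
# invalidating the entry when the file's mtime changes. Layout is illustrative.
import json
import os

CACHE_PATH = "cache.json"  # assumed location, not necessarily the webui's


def cached_sha256(checkpoint_path: str) -> str | None:
    try:
        with open(CACHE_PATH, "r", encoding="utf8") as file:
            cache = json.load(file)
    except FileNotFoundError:
        return None

    entry = cache.get("hashes", {}).get(checkpoint_path)
    if entry and entry.get("mtime") == os.path.getmtime(checkpoint_path):
        return entry["sha256"]
    return None  # missing or stale: caller recomputes and rewrites the cache
```
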
AUTOMATIC febd2b722e update the key used for checkpoints' sha256 hashes in the cache 2023-01-14 13:37:55 +03:00
AUTOMATIC f9ac3352cb change hypernets to use sha256 hashes 2023-01-14 10:25:37 +03:00
AUTOMATIC a95f135308 change hash to sha256 2023-01-14 09:56:59 +03:00
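
Checkpoints are multiple gigabytes, so the sha256 is normally computed in fixed-size chunks rather than by reading the whole file into memory; a prefix of the hex digest can then serve as the short hash mentioned in the c1928cdd61 entry above. A minimal sketch; the 10-character prefix length is an assumption for illustration.

```python
# Sketch: chunked sha256 of a large checkpoint file; the 10-character
# short-hash length is an assumption for illustration.
import hashlib


def sha256_of_file(path: str, chunk_size: int = 1 << 20) -> str:
    sha = hashlib.sha256()
    with open(path, "rb") as file:
        for chunk in iter(lambda: file.read(chunk_size), b""):
            sha.update(chunk)
    return sha.hexdigest()


def short_hash(full_sha256: str) -> str:
    return full_sha256[:10]
```
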
AUTOMATIC 4bd490727e fix for an error caused by skipping initialization, for realsies this time: TypeError: expected str, bytes or os.PathLike object, not NoneType 2023-01-11 18:54:13 +03:00
AUTOMATIC 1a23dc32ac possible fix for fallback for fast model creation from config, attempt 2 2023-01-11 10:34:36 +03:00
AUTOMATIC 4fdacd31e4 possible fix for fallback for fast model creation from config 2023-01-11 10:24:56 +03:00
AUTOMATIC 0f8603a559 add support for transformers==4.25.1
add fallback for when quick model creation fails
2023-01-10 17:46:59 +03:00
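
The fallback itself can be a plain try/except around the fast path: if quick model creation blows up (for example under a transformers version it wasn't written for), build the model the ordinary, slower way instead. A hedged sketch with illustrative names; the callables stand in for whatever the fast and normal creation paths actually are.

```python
# Sketch: wrap quick model creation in a fallback. fast_create and
# normal_create are illustrative stand-ins for the two creation paths.
def create_model_with_fallback(fast_create, normal_create):
    try:
        return fast_create()
    except Exception as err:
        print(f"Quick model creation failed: {err}")
        print("Falling back to normal (slower) model creation")
        return normal_create()
```
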
AUTOMATIC ce3f639ec8 add more stuff to ignore when creating model from config
prevent .vae.safetensors files from being listed as stable diffusion models
2023-01-10 16:51:04 +03:00
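
Filtering the model list is mostly a matter of recognising that a `.vae.safetensors` file is a VAE sitting next to a checkpoint, not a checkpoint itself. A small sketch of that kind of filter; the directory layout and extensions are illustrative.

```python
# Sketch: skip VAE weight files when building the list of Stable Diffusion
# checkpoints. Directory layout and extensions are illustrative.
from pathlib import Path


def list_checkpoints(model_dir: str) -> list:
    files = []
    for pattern in ("*.ckpt", "*.safetensors"):
        files += Path(model_dir).rglob(pattern)
    # a ".vae.safetensors" file is a VAE, not a full Stable Diffusion model
    return [f for f in files if not f.name.endswith(".vae.safetensors")]
```
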
AUTOMATIC 0c3feb202c disable torch weight initialization and CLIP downloading/reading checkpoint to speedup creating sd model from config 2023-01-10 14:08:29 +03:00
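
The speed-up comes from the observation that freshly built modules get expensive random initialization that is immediately overwritten by the checkpoint's weights. Temporarily swapping torch's in-place init functions for no-ops while the model graph is constructed avoids that work; the commit above also skips CLIP downloads, which this stand-alone sketch does not cover.

```python
# Sketch: skip random weight initialization while a model is being built,
# since the weights will be replaced by the checkpoint anyway. This is a
# simplified illustration, not the webui's own implementation.
from contextlib import contextmanager

import torch


@contextmanager
def disable_weight_init():
    saved = {
        name: getattr(torch.nn.init, name)
        for name in ("kaiming_uniform_", "uniform_", "normal_")
    }

    def noop(tensor, *args, **kwargs):
        return tensor

    try:
        for name in saved:
            setattr(torch.nn.init, name, noop)
        yield
    finally:
        for name, fn in saved.items():
            setattr(torch.nn.init, name, fn)


# Usage: with disable_weight_init(): model = build_model(config)
```
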
Vladimir Mandic 552d7b90bf
allow model load if previous model failed 2023-01-09 18:34:26 -05:00
AUTOMATIC 642142556d use commandline-supplied cuda device name instead of cuda:0 for safetensors PR that doesn't fix anything 2023-01-04 15:09:53 +03:00
AUTOMATIC 68fbf4558f Merge remote-tracking branch 'Narsil/fix_safetensors_load_speed' 2023-01-04 14:53:03 +03:00
AUTOMATIC 0cd6399b8b fix broken inpainting model 2023-01-04 14:29:13 +03:00
AUTOMATIC 8d8a05a3bb find configs for models at runtime rather than when starting 2023-01-04 12:47:42 +03:00
AUTOMATIC 02d7abf514 helpful error message when trying to load 2.0 without config
failing to load model weights from settings will no longer break generation for the currently loaded model
2023-01-04 12:35:07 +03:00
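
Not breaking the currently loaded model comes down to ordering: read the new checkpoint first, and only touch the live model once that read has succeeded. A rough sketch of that ordering, with `read_state_dict` as an illustrative stand-in for however the weights are actually read.

```python
# Sketch: a failed weight load leaves the current model untouched.
# read_state_dict is an illustrative stand-in, not a specific webui function.
def reload_model_weights(model, checkpoint_path, read_state_dict):
    try:
        state_dict = read_state_dict(checkpoint_path)  # may raise on a bad file
    except Exception as err:
        print(f"Could not load weights from {checkpoint_path}: {err}")
        print("Keeping the previously loaded checkpoint")
        return model

    model.load_state_dict(state_dict, strict=False)
    return model
```
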
AUTOMATIC 8f96f92899 call script callbacks for reloaded model after loading embeddings 2023-01-03 18:39:14 +03:00
AUTOMATIC 311354c0bb fix the issue with training on SD2.0 2023-01-02 00:38:09 +03:00
Vladimir Mandic f55ac33d44
validate textual inversion embeddings 2022-12-31 11:27:02 -05:00
Nicolas Patry 5ba04f9ec0
Attempting to solve slow loads for `safetensors`.
Fixes #5893
2022-12-27 11:27:19 +01:00
Yuval Aboulafia 3bf5591efe fix F541 f-string without any placeholders 2022-12-24 21:35:29 +02:00
linuxmobile ( リナックス ) 5a650055de
Removed length check in sd_model at line 115
Commit eba60a4 is what is causing this error; deleting the length check in sd_model starting at line 115 fixes it.

https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/5971#issuecomment-1364507379
2022-12-24 09:25:35 -03:00
AUTOMATIC1111 eba60a42eb
Merge pull request #5627 from deanpress/patch-1
fix: fallback model_checkpoint if it's empty
2022-12-24 12:20:31 +03:00
MrCheeze ec0a48826f unconditionally set use_ema=False if value not specified (True never worked, and all configs except v1-inpainting-inference.yaml already correctly set it to False) 2022-12-11 11:18:34 -05:00
Dean van Dugteren 59c6511494
fix: fallback model_checkpoint if it's empty
This fixes the following error when SD attempts to start with a deleted checkpoint:

```
Traceback (most recent call last):
  File "D:\Web\stable-diffusion-webui\launch.py", line 295, in <module>
    start()
  File "D:\Web\stable-diffusion-webui\launch.py", line 290, in start
    webui.webui()
  File "D:\Web\stable-diffusion-webui\webui.py", line 132, in webui
    initialize()
  File "D:\Web\stable-diffusion-webui\webui.py", line 62, in initialize
    modules.sd_models.load_model()
  File "D:\Web\stable-diffusion-webui\modules\sd_models.py", line 283, in load_model
    checkpoint_info = checkpoint_info or select_checkpoint()
  File "D:\Web\stable-diffusion-webui\modules\sd_models.py", line 117, in select_checkpoint
    checkpoint_info = checkpoints_list.get(model_checkpoint, None)
TypeError: unhashable type: 'list'
```
2022-12-11 17:08:51 +01:00
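
The traceback comes from the selected checkpoint title being an unhashable value (here an empty list left behind after the file was deleted) when it is used as a dictionary key. A guard of roughly this shape avoids it by falling back to the first available checkpoint; the sketch assumes a `checkpoints_list` dict like the one named in the traceback, and the rest of the names are illustrative.

```python
# Sketch of the guard: if the stored checkpoint title is missing or not a
# plain string (here an empty list, left behind after the file was deleted),
# fall back to the first checkpoint that is actually available.
# checkpoints_list mirrors the dict named in the traceback; the rest is illustrative.
def select_checkpoint(model_checkpoint, checkpoints_list: dict):
    if not isinstance(model_checkpoint, str) or model_checkpoint not in checkpoints_list:
        if not checkpoints_list:
            raise RuntimeError("No checkpoints found; add a model to models/Stable-diffusion")
        model_checkpoint = next(iter(checkpoints_list))  # first available entry
    return checkpoints_list[model_checkpoint]
```
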
MrCheeze bd81a09eac fix support for 2.0 inpainting model while maintaining support for 1.5 inpainting model 2022-12-10 11:29:26 -05:00
AUTOMATIC1111 ec5e072124
Merge pull request #4841 from R-N/vae-fix-none
Fix None option of VAE selector
2022-12-10 09:58:20 +03:00
Jay Smith 1ed4f0e228 Depth2img model support 2022-12-08 20:50:08 -06:00
AUTOMATIC 0376da180c make it possible to save nai model using safetensors 2022-11-28 08:39:59 +03:00
AUTOMATIC dac9b6f15d add safetensors support for model merging #4869 2022-11-27 15:51:29 +03:00
AUTOMATIC 6074175faa add safetensors to requirements 2022-11-27 14:46:40 +03:00
AUTOMATIC1111 f108782e30
Merge pull request #4930 from Narsil/allow_to_load_safetensors_file
Supporting `*.safetensors` format.
2022-11-27 14:36:55 +03:00
MrCheeze 1e506657e1 no-half support for SD 2.0 2022-11-26 13:28:44 -05:00
Nicolas Patry 0efffbb407 Supporting `*.safetensors` format.
If a model file exists with extension `.safetensors` then we can load it
more safely than with PyTorch weights.
2022-11-21 14:04:25 +01:00
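
Dispatching on the file extension is enough to support both formats side by side; `safetensors.torch.load_file` reads the weights without unpickling arbitrary Python objects, which is where the safety gain comes from. A minimal sketch, not the repository's exact loader.

```python
# Sketch: load checkpoint weights from either format based on the extension.
import os

import safetensors.torch
import torch


def read_checkpoint(path: str, map_location="cpu") -> dict:
    _, ext = os.path.splitext(path)
    if ext.lower() == ".safetensors":
        # no pickle involved, so a malicious file can't run code at load time
        return safetensors.torch.load_file(path, device=map_location)
    checkpoint = torch.load(path, map_location=map_location)
    # classic .ckpt files usually wrap the weights in a "state_dict" key
    return checkpoint.get("state_dict", checkpoint)
```
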
Muhammad Rizqi Nur 8662b5e57f Merge branch 'a1111' into vae-fix-none 2022-11-19 16:38:21 +07:00
Muhammad Rizqi Nur 2c5ca706a7 Remove no longer necessary parts and add vae_file safeguard 2022-11-19 12:01:41 +07:00
Muhammad Rizqi Nur c7be83bf02 Misc
2022-11-19 11:44:37 +07:00
Muhammad Rizqi Nur abc1e79a5d Fix base VAE caching being done after loading VAE; also add a safeguard 2022-11-19 11:41:41 +07:00
cluder eebf49592a restore #4035 behavior
- if checkpoint cache is set to 1, keep 2 models in cache (current +1 more)
2022-11-09 07:17:09 +01:00
cluder 3b51d239ac - do not use ckpt cache if disabled
- cache model after it has been loaded from file
2022-11-09 05:43:57 +01:00
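
The cache behaviour described by these two entries (keep the current model plus up to N previously loaded ones, and skip caching entirely when the size is 0) can be expressed with an ordered dict that evicts its oldest entry. A hedged sketch, with names chosen for illustration.

```python
# Sketch: keep the most recently loaded state dicts in memory so switching
# back to a recent checkpoint avoids re-reading it from disk. With
# cache_size=1 this keeps the current model plus one more, matching the
# "#4035 behavior" described above. Names are illustrative.
import collections

checkpoints_loaded = collections.OrderedDict()


def cache_state_dict(key: str, state_dict: dict, cache_size: int) -> None:
    if cache_size <= 0:
        return  # checkpoint cache disabled
    checkpoints_loaded[key] = state_dict
    checkpoints_loaded.move_to_end(key)
    while len(checkpoints_loaded) > cache_size + 1:  # current + cache_size extras
        checkpoints_loaded.popitem(last=False)       # evict the oldest entry
```
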
AUTOMATIC 99043f3360 fix one of the previous merges breaking the program 2022-11-04 11:20:42 +03:00
AUTOMATIC1111 24fc05cf57
Merge branch 'master' into fix-ckpt-cache 2022-11-04 10:54:17 +03:00
digburn 3780ad3ad8 fix: loading models without vae from cache 2022-11-04 00:43:00 +00:00
Muhammad Rizqi Nur fb3b564801 Merge branch 'master' into fix-ckpt-cache 2022-11-02 20:53:41 +07:00
AUTOMATIC f2a5cbe6f5 fix #3986 breaking --no-half-vae 2022-11-02 14:41:29 +03:00