Commit Graph

1735 Commits

Author SHA1 Message Date
Dudu Moshe ecadcdefe1
[Bug] scheduling_ddpm: fix variance in the case of learned_range type. (#2090)
scheduling_ddpm: fix variance in the case of learned_range type.

In the case of the learned_range variance type, the log and exponent are missing
compared to the theory (see "Improved Denoising Diffusion Probabilistic Models",
section 3.1, equation 15: https://arxiv.org/pdf/2102.09672.pdf).
2023-02-03 09:42:42 +01:00
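For reference, a minimal sketch of the interpolation from equation 15 that this fix implements, assuming `beta_t` and `beta_tilde_t` hold the forward and posterior variances and `v` is the model's predicted interpolation coefficient; names are illustrative, not the scheduler's exact variables.

```python
import torch

def learned_range_variance(v, beta_t, beta_tilde_t):
    # Eq. 15: interpolate between log(beta_tilde_t) and log(beta_t) in log space,
    # then exponentiate -- the log/exp pair is what was missing.
    min_log = torch.log(beta_tilde_t)
    max_log = torch.log(beta_t)
    frac = (v + 1) / 2  # map the model output from [-1, 1] to [0, 1]
    return torch.exp(frac * max_log + (1 - frac) * min_log)
```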
Pedro Cuenca 2bbd532990
Docs: short section on changing the scheduler in Flax (#2181)
* Short doc on changing the scheduler in Flax.

* Apply fix from @patil-suraj

Co-authored-by: Suraj Patil <surajp815@gmail.com>

---------

Co-authored-by: Suraj Patil <surajp815@gmail.com>
2023-02-02 18:52:21 +01:00
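A sketch roughly along the lines of the new doc: in Flax the scheduler and its state are created together, and the state has to be placed back into `params` (checkpoint id and revision are assumptions taken from the usual examples).

```python
import jax.numpy as jnp
from diffusers import FlaxStableDiffusionPipeline, FlaxDPMSolverMultistepScheduler

model_id = "runwayml/stable-diffusion-v1-5"
# Flax schedulers return both the scheduler and its state.
scheduler, scheduler_state = FlaxDPMSolverMultistepScheduler.from_pretrained(
    model_id, subfolder="scheduler"
)
pipeline, params = FlaxStableDiffusionPipeline.from_pretrained(
    model_id, scheduler=scheduler, revision="bf16", dtype=jnp.bfloat16
)
params["scheduler"] = scheduler_state  # keep the scheduler state alongside the weights
```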
Adalberto 68ef0666e2
Create train_dreambooth_inpaint_lora.py (#2205)
* Create train_dreambooth_inpaint_lora.py

* Update train_dreambooth_inpaint_lora.py

* Update train_dreambooth_inpaint_lora.py

* Update train_dreambooth_inpaint_lora.py

* Update train_dreambooth_inpaint_lora.py
2023-02-02 13:15:15 +01:00
Kashif Rasul 7ac95703cd
add CITATION.cff (#2211)
add citation.cff
2023-02-02 12:46:44 +01:00
Pedro Cuenca 3816c9ad9f
Update xFormers docs (#2208)
Update xFormers docs.
2023-02-01 19:56:32 +01:00
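Enabling the memory-efficient attention covered by the updated doc is a one-liner on a loaded pipeline, assuming `xformers` is installed:

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.enable_xformers_memory_efficient_attention()
```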
Patrick von Platen 8267c78445
[Loading] Better error message on missing keys (#2198)
* up

* finish
2023-02-01 14:22:39 +01:00
Muyang Li 4fc7084875
Fix a dimension bug in Transform2d (#2144)
The dimensions do not match when `inner_dim` is not equal to `in_channels`.
2023-02-01 10:11:45 +01:00
Sayak Paul 9213d81bd0
add: guide on kerascv conversion tool. (#2169)
* add: guide on kerascv conversion tool.

* Apply suggestions from code review

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
Co-authored-by: Kashif Rasul <kashif.rasul@gmail.com>
Co-authored-by: Suraj Patil <surajp815@gmail.com>

* address additional suggestions from review.

* change links to documentation-images.

* add separate links for training and inference goodies from diffusers.

* address Patrick's comments.

---------

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
Co-authored-by: Kashif Rasul <kashif.rasul@gmail.com>
Co-authored-by: Suraj Patil <surajp815@gmail.com>
2023-02-01 09:41:00 +01:00
Asad Memon dd3cae3327
Pass LoRA rank to LoRALinearLayer (#2191) 2023-02-01 09:40:02 +01:00
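A minimal sketch of a rank-parameterised LoRA linear layer, illustrating why the rank needs to be threaded through to the layer; this is an illustrative re-implementation, not the library's exact class.

```python
import torch
import torch.nn as nn

class LoRALinearLayer(nn.Module):
    def __init__(self, in_features, out_features, rank=4):
        super().__init__()
        # Low-rank update: an (in -> rank) down projection followed by a (rank -> out) up projection.
        self.down = nn.Linear(in_features, rank, bias=False)
        self.up = nn.Linear(rank, out_features, bias=False)
        nn.init.normal_(self.down.weight, std=1 / rank)
        nn.init.zeros_(self.up.weight)  # start as a no-op update

    def forward(self, hidden_states):
        return self.up(self.down(hidden_states))
```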
Patrick von Platen f73d0b6bec
[Docs] remove license (#2188) 2023-01-31 22:11:32 +01:00
Patrick von Platen d0d7ffffbd
[Docs] Add components to docs (#2175) 2023-01-31 22:11:14 +01:00
Abhishek Varma 87cf88ed3d
Use `requests` instead of `wget` in `convert_from_ckpt.py` (#2168)
-- This commit adopts `requests` in place of `wget` to fetch the config `.yaml`
   files as part of the `load_pipeline_from_original_stable_diffusion_ckpt` API.
-- This was done because in Windows PowerShell one has to explicitly ensure that
   the `wget` binary is on the PATH; if it is not, the code cannot download the
   `.yaml` config file.

Signed-off-by: Abhishek Varma <abhishek@nod-labs.com>
Co-authored-by: Abhishek Varma <abhishek@nod-labs.com>
2023-01-31 14:35:45 +01:00
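The portable replacement looks roughly like the following (URL and output filename are illustrative):

```python
import requests

config_url = (
    "https://raw.githubusercontent.com/CompVis/stable-diffusion/main/"
    "configs/stable-diffusion/v1-inference.yaml"
)
response = requests.get(config_url)
response.raise_for_status()  # fail loudly instead of writing an error page to disk
with open("v1-inference.yaml", "wb") as f:
    f.write(response.content)
```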
Patrick von Platen 60d915fbed make style 2023-01-31 11:46:48 +00:00
1lint d1efefe15e
[Breaking change] fix legacy inpaint noise and resize mask tensor (#2147)
* fix legacy inpaint noise and resize mask tensor

* updated legacy inpaint pipe test expected_slice
2023-01-31 12:44:35 +01:00
Sayak Paul 7d96b38b70
[examples] Fix CLI argument in the launch script command for text2image with LoRA (#2171)
* Update README.md

* Update README.md
2023-01-31 09:47:09 +01:00
Dudu Moshe cedafb8600
[Bug]: fix DDPM scheduler arbitrary infer steps count. (#2076)
scheduling_ddpm: fix evaluation with a lower timestep count than was used in training.

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2023-01-31 09:13:26 +01:00
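A rough sketch of how a scheduler derives a shorter inference schedule from the training schedule, which is the point where an inference step count smaller than the training count has to be handled; variable names are illustrative.

```python
import numpy as np

num_train_timesteps = 1000
num_inference_steps = 50

# Take every (T // N)-th training timestep, reversed so denoising runs from noisy to clean.
step_ratio = num_train_timesteps // num_inference_steps
timesteps = (np.arange(0, num_inference_steps) * step_ratio).round()[::-1].astype(np.int64)
```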
Patrick von Platen 69caa96472 fix slow test 2023-01-31 07:39:30 +00:00
hysts da113364df
Add instance prompt to model card of lora dreambooth example (#2112) 2023-01-31 08:14:25 +01:00
Pedro Cuenca 44f6bc81c7
Don't copy when unwrapping model (#2166)
* Don't copy when unwrapping model.

Otherwise an exception is raised when using fp16.

* Remove unused import
2023-01-30 20:18:20 +01:00
Pedro Cuenca 164b6e0532
Section on using LoRA alpha / scale (#2139)
* Section on using LoRA alpha / scale.

* Accept suggestion

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>

* Clarify on merge.

---------

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
2023-01-30 14:14:46 +01:00
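The scale knob discussed in that section is passed at inference time through `cross_attention_kwargs`; a hedged example, with the checkpoint name and LoRA path as placeholders:

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.unet.load_attn_procs("path/to/lora")  # illustrative local path or Hub id

# scale=0.0 ignores the LoRA weights, scale=1.0 applies them fully.
image = pipe(
    "a pokemon with blue eyes", cross_attention_kwargs={"scale": 0.5}
).images[0]
```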
Patrick von Platen a6610db7a8
[Design philosophy] Create official doc (#2140)
* finish more

* finish philosophy

* Apply suggestions from code review

Co-authored-by: YiYi Xu <yixu310@gmail.com>
Co-authored-by: Will Berman <wlbberman@gmail.com>

---------

Co-authored-by: YiYi Xu <yixu310@gmail.com>
Co-authored-by: Will Berman <wlbberman@gmail.com>
2023-01-30 09:27:37 +01:00
Pedro Cuenca 0b68101a13
`[diffusers-cli]` Fix typo in accelerate and transformers versions (#2154)
Fix typo in accelerate and transformers versions.
2023-01-30 09:04:45 +01:00
Ayan Das 125d783076
fix typo in EMAModel's load_state_dict() (#2151)
Possible typo introduced in 7c82a16fc1
2023-01-29 13:23:18 +01:00
Pedro Cuenca fdf70cb54b
Fix typo (#2138) 2023-01-27 20:08:56 +01:00
Nicolas Patry 20396e2bd2
Adding some `safetensors` docs. (#2122)
* Tmp.

* Adding more docs.

* Doc style.

* Remove the argument `use_safetensors=True`.

* doc-builder
2023-01-27 18:20:50 +01:00
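For context, the `safetensors` format these docs cover can also be exercised directly with the `safetensors` library; a minimal save/load sketch:

```python
import torch
from safetensors.torch import save_file, load_file

state_dict = {"weight": torch.randn(4, 4)}
save_file(state_dict, "model.safetensors")  # safe alternative to pickle-based checkpoints
loaded = load_file("model.safetensors")
```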
Will Berman 2cf34e6db4
[from_pretrained] only load config one time (#2131) 2023-01-27 08:23:55 -08:00
Patrick von Platen 04ad948673 make style 2 - sorry 2023-01-27 16:54:40 +02:00
Patrick von Platen 97ef5e0665 make style 2023-01-27 16:52:04 +02:00
Patrick von Platen 31be42209d Don't call the Hub if `local_files_only` is specified (#2119)
Don't call the Hub if `local_files_only` is specified.
2023-01-27 09:42:33 +02:00
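The flag in question is the standard `from_pretrained` argument; with it set, loading should resolve entirely from the local cache:

```python
from diffusers import StableDiffusionPipeline

# The checkpoint must already be in the local cache; no network request should be made.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", local_files_only=True
)
```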
RahulBhalley 43c5ac2be7
Typo fix: `torwards` -> `towards` (#2134) 2023-01-27 08:20:18 +01:00
Ji soo Kim c750a82374
Fix typos in loaders.py (#2137)
Fix typo in loaders.py
2023-01-27 08:20:07 +01:00
Patrick von Platen 0c39f53cbb
Allow lora from pipeline (#2129)
* [LoRA] Allow use in inference with pipeline

* [LoRA] allow cross attention kwargs passed to pipeline

* finish
2023-01-27 08:19:46 +01:00
Will Berman 0a5948e7f4
remove redundant allow_patterns (#2130) 2023-01-26 13:22:28 -08:00
Patrick von Platen f653ded7ed
[LoRA] Make sure LoRA can be disabled after it's run (#2128) 2023-01-26 21:26:11 +01:00
Will Berman e92d43feb0
[nit] torch_dtype used twice in doc string (#2126) 2023-01-26 11:19:20 -08:00
hysts 7436e30c72
Fix model card of LoRA (#2114)
Fix
2023-01-26 19:08:45 +01:00
Will Berman 14976500ed
fuse attention mask (#2111)
* fuse attention mask

* lint

* use beta=0 when there is no attention mask (re: @Birch-san)
2023-01-26 08:36:07 -08:00
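A sketch of the fusion being referred to, assuming the attention scores are computed with `torch.baddbmm`: the mask becomes the `input` term of the fused add-matmul, and `beta=0` drops that term when no mask is given; tensor names are illustrative.

```python
import torch

def attention_scores(query, key, attention_mask=None, scale=1.0):
    if attention_mask is None:
        # With beta=0 the (uninitialized) input tensor is ignored entirely.
        baddbmm_input = torch.empty(
            query.shape[0], query.shape[1], key.shape[1],
            dtype=query.dtype, device=query.device,
        )
        beta = 0
    else:
        # With beta=1 the mask is added to the scaled QK^T product in a single fused op.
        baddbmm_input = attention_mask
        beta = 1
    return torch.baddbmm(baddbmm_input, query, key.transpose(-1, -2), beta=beta, alpha=scale)
```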
Cyberes 96af5bf7d9
Fix unable to save_pretrained when using pathlib (#1972)
* fix PosixPath is not JSON serializable

* use PosixPath

* forgot elif like a dummy
2023-01-26 16:53:34 +01:00
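The underlying issue is that `json` cannot serialize `pathlib` objects, so paths have to be cast to `str` before a config is dumped; a tiny illustration:

```python
import json
from pathlib import Path

config = {"output_dir": Path("outputs/run-1")}
# json.dumps(config) would raise: TypeError: Object of type PosixPath is not JSON serializable
config = {k: str(v) if isinstance(v, Path) else v for k, v in config.items()}
print(json.dumps(config))
```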
Patrick von Platen bbc2a03052
[Import Utils] Fix naming (#2118) 2023-01-26 15:54:59 +01:00
Suraj Patil 1e216be895
make scaling factor a config arg of vae/vqvae (#1860)
* make scaling factor a config arg of vae

* fix

* make flake happy

* fix ldm

* fix upscaler

* quality

* Apply suggestions from code review

Co-authored-by: Anton Lozhkov <anton@huggingface.co>
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* resolve conflicts, address some comments

* examples

* examples min version

* doc

* fix type

* typo

* Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_upscale.py

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

* remove duplicate line

* Apply suggestions from code review

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

Co-authored-by: Anton Lozhkov <anton@huggingface.co>
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2023-01-26 14:37:19 +01:00
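With the scaling factor exposed on the config, callers can read it instead of hard-coding the Stable Diffusion value 0.18215; a hedged usage sketch (checkpoint name illustrative):

```python
import torch
from diffusers import AutoencoderKL

vae = AutoencoderKL.from_pretrained("runwayml/stable-diffusion-v1-5", subfolder="vae")
image = torch.randn(1, 3, 512, 512)

# Scale latents into the range the diffusion model was trained on, and undo it before decoding.
latents = vae.encode(image).latent_dist.sample() * vae.config.scaling_factor
decoded = vae.decode(latents / vae.config.scaling_factor).sample
```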
Pedro Cuenca 915a563611
Allow `UNet2DModel` to use arbitrary class embeddings (#2080)
* Allow `UNet2DModel` to use arbitrary class embeddings.

We can currently use class conditioning in `UNet2DConditionModel`, but
not in `UNet2DModel`. However, `UNet2DConditionModel` requires text
conditioning too, which is unrelated to other types of conditioning.
This commit makes it possible for `UNet2DModel` to be conditioned on
entities other than timesteps. This is useful for training /
research purposes. We can currently train models to perform
unconditional image generation or text-to-image generation, but it's not
straightforward to train a model to perform class-conditioned image
generation, if text conditioning is not required.

We could potentially use `UNet2DConditionModel` for class-conditioning
without text embeddings by using down/up blocks without
cross-conditioning. However:
- The mid block currently requires cross attention.
- We are required to provide `encoder_hidden_states` to `forward`.

* Style

* Align class conditioning, add docstring for `num_class_embeds`.

* Copy docstring to versatile_diffusion UNetFlatConditionModel
2023-01-26 13:46:32 +01:00
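A hedged sketch of the resulting usage: class-conditional generation with `UNet2DModel` by passing `num_class_embeds` at construction and `class_labels` at call time (sizes and class count are illustrative).

```python
import torch
from diffusers import UNet2DModel

model = UNet2DModel(
    sample_size=32,
    in_channels=3,
    out_channels=3,
    num_class_embeds=10,  # e.g. one embedding per CIFAR-10 class
)

sample = torch.randn(4, 3, 32, 32)
timesteps = torch.randint(0, 1000, (4,))
class_labels = torch.randint(0, 10, (4,))
noise_pred = model(sample, timesteps, class_labels=class_labels).sample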
Pedro Cuenca 0856137337
[textual inversion] Allow validation images (#2077)
* [textual inversion] Allow validation images.

* Change key to `validation`

* Specify format instead of transposing.

As discussed with @sayakpaul.

* Style

Co-authored-by: isamu-isozaki <isamu.website@gmail.com>
2023-01-26 09:20:03 +01:00
Suraj Patil 946d1cb200
[dreambooth] check the low-precision guard before preparing model (#2102)
check the dtype before preparing model
2023-01-25 11:06:33 -08:00
Patrick von Platen 09779cbb40
[Bump version] 0.13.0dev0 & Deprecate `predict_epsilon` (#2109)
* [Bump version] 0.13

* Bump model up

* up
2023-01-25 17:59:02 +01:00
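The deprecated flag is superseded by the `prediction_type` config argument; a hedged example of the replacement spelling:

```python
from diffusers import DDPMScheduler

# Instead of the deprecated `predict_epsilon=True`, choose the prediction target explicitly.
scheduler = DDPMScheduler(num_train_timesteps=1000, prediction_type="epsilon")
```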
Patrick von Platen b0cc7c202b make style 2023-01-25 16:03:56 +02:00
Oren WANG fb98acf03b
[lora] Fix bug with training without validation (#2106) 2023-01-25 14:56:13 +01:00
Patrick von Platen 180841bbde Release: v0.12.0 2023-01-25 15:48:00 +02:00
Patrick von Platen 6ba2231d72
Reproducibility 3/3 (#1924)
* make tests deterministic

* run slow tests

* prepare for testing

* finish

* refactor

* add print statements

* finish more

* correct some test failures

* more fixes

* set up to correct tests

* more corrections

* up

* fix more

* more prints

* add

* up

* up

* up

* uP

* uP

* more fixes

* uP

* up

* up

* up

* up

* fix more

* up

* up

* clean tests

* up

* up

* up

* more fixes

* Apply suggestions from code review

Co-authored-by: Suraj Patil <surajp815@gmail.com>

* make

* correct

* finish

* finish

Co-authored-by: Suraj Patil <surajp815@gmail.com>
2023-01-25 13:44:22 +01:00
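For deterministic runs the usual pattern is passing a seeded `torch.Generator` to the pipeline rather than relying on global seeds; a small hedged example (checkpoint name taken from the reproducibility docs):

```python
import torch
from diffusers import DDIMPipeline

pipe = DDIMPipeline.from_pretrained("google/ddpm-cifar10-32")

# Re-creating the generator with the same seed should make repeated runs produce identical images.
generator = torch.Generator(device="cpu").manual_seed(0)
image = pipe(generator=generator, num_inference_steps=2).images[0]
```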
Patrick von Platen 008c22d334
Improve transformers versions handling (#2104) 2023-01-25 12:50:54 +01:00
Patrick von Platen b562b6611f
Allow directly passing text embeddings to Stable Diffusion Pipeline for prompt weighting (#2071)
* add text embeds to sd

* add text embeds to sd

* finish tests

* finish

* finish

* make style

* fix tests

* make style

* make style

* up

* better docs

* fix

* fix

* new try

* up

* up

* finish
2023-01-25 12:29:49 +01:00
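The new entry point accepts pre-computed embeddings instead of a string prompt, which is what enables prompt weighting; a hedged sketch with the weighting step itself left out:

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

text_inputs = pipe.tokenizer(
    "a photo of an astronaut riding a horse",
    padding="max_length",
    max_length=pipe.tokenizer.model_max_length,
    return_tensors="pt",
)
with torch.no_grad():
    prompt_embeds = pipe.text_encoder(text_inputs.input_ids.to("cuda"))[0]

# Any re-weighting of individual token embeddings would happen here before the call.
image = pipe(prompt_embeds=prompt_embeds).images[0]
```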