Commit Graph

83 Commits

Author SHA1 Message Date
Patrick von Platen b500df1155
[Docs] Add loading script (#1174)
* add loading script

* Apply suggestions from code review

* Apply suggestions from code review

Co-authored-by: Anton Lozhkov <anton@huggingface.co>
Co-authored-by: Suraj Patil <surajp815@gmail.com>

* correct

* Apply suggestions from code review

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

* uP

Co-authored-by: Anton Lozhkov <anton@huggingface.co>
Co-authored-by: Suraj Patil <surajp815@gmail.com>
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
2022-11-07 17:15:41 +01:00
Pedro Cuenca 0dd8c6b4db
Fix community pipeline links (#1162)
* Change title to match the sidebar in _toctree.

* Fix custom pipe link, add link to contribute.

* Fix community pipeline links.
2022-11-07 14:32:51 +01:00
Cheng Lu b4a1ed8544
Add multistep DPM-Solver discrete scheduler (#1132)
* add dpmsolver discrete pytorch scheduler

* fix some typos in dpm-solver pytorch

* add dpm-solver pytorch in stable-diffusion pipeline

* add jax/flax version dpm-solver

* change code style

* change code style

* add docs

* add `add_noise` method for dpmsolver

* add pytorch unit test for dpmsolver

* add dummy object for pytorch dpmsolver

* Update src/diffusers/schedulers/scheduling_dpmsolver_discrete.py

Co-authored-by: Suraj Patil <surajp815@gmail.com>

* Update tests/test_config.py

Co-authored-by: Suraj Patil <surajp815@gmail.com>

* Update tests/test_config.py

Co-authored-by: Suraj Patil <surajp815@gmail.com>

* resolve the code comments

* rename the file

* change class name

* fix code style

* add auto docs for dpmsolver multistep

* add more explanations for the stabilizing trick (for steps < 15)

* delete the dummy file

* change the API name of predict_epsilon, algorithm_type and solver_type

* add compatible lists

Co-authored-by: Suraj Patil <surajp815@gmail.com>
2022-11-06 22:49:55 +01:00
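The multistep DPM-Solver scheduler added above can be dropped into an existing Stable Diffusion pipeline. A minimal sketch (the model id and step count are illustrative, not taken from the PR):

```python
import torch
from diffusers import StableDiffusionPipeline, DPMSolverMultistepScheduler

# Load a Stable Diffusion checkpoint and swap in the multistep DPM-Solver scheduler,
# reusing the configuration of the scheduler the pipeline shipped with.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16  # illustrative model id
).to("cuda")
pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config)

# DPM-Solver is designed to give good samples in few (~20) inference steps.
image = pipe("a photo of an astronaut riding a horse", num_inference_steps=20).images[0]
image.save("astronaut.png")
```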
Chen Wu (吴尘) 9d8943b7e7
Add CycleDiffusion pipeline using Stable Diffusion (#888)
* Add CycleDiffusion pipeline for Stable Diffusion

* Add the option of passing noise to DDIMScheduler

Add the option of providing the noise itself to DDIMScheduler, instead of the random seed generator.

* Update README.md

* Update README.md

* Update pipeline_stable_diffusion_cycle_diffusion.py

* Update pipeline_stable_diffusion_cycle_diffusion.py

* Update pipeline_stable_diffusion_cycle_diffusion.py

* Update pipeline_stable_diffusion_cycle_diffusion.py

* Update scheduling_ddim.py

* Update import format

* Update pipeline_stable_diffusion_cycle_diffusion.py

* Update scheduling_ddim.py

* Update src/diffusers/schedulers/scheduling_ddim.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* Update src/diffusers/schedulers/scheduling_ddim.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* Update src/diffusers/schedulers/scheduling_ddim.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* Update src/diffusers/schedulers/scheduling_ddim.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* Update src/diffusers/schedulers/scheduling_ddim.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* Update scheduling_ddim.py

* Update scheduling_ddim.py

* Update scheduling_ddim.py

* add two tests

* Update pipeline_stable_diffusion_cycle_diffusion.py

* Update pipeline_stable_diffusion_cycle_diffusion.py

* Update README.md

* Rename the pipeline as suggested in the latest reviewer comment

* Update test_pipelines.py

* Update test_pipelines.py

* Update test_pipelines.py

* Update pipeline_stable_diffusion_cycle_diffusion.py

* Remove the generator

This generator does not control all randomness during sampling, which can be misleading.

* Update optimal hyperparameters

* Update src/diffusers/pipelines/stable_diffusion/README.md

Co-authored-by: Suraj Patil <surajp815@gmail.com>

* Update src/diffusers/pipelines/stable_diffusion/README.md

Co-authored-by: Suraj Patil <surajp815@gmail.com>

* Update src/diffusers/pipelines/stable_diffusion/README.md

Co-authored-by: Suraj Patil <surajp815@gmail.com>

* Apply suggestions from code review

* uP

* Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_cycle_diffusion.py

Co-authored-by: Suraj Patil <surajp815@gmail.com>

* up

* up

* Replace assert with ValueError

* finish docs

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Suraj Patil <surajp815@gmail.com>
2022-11-04 20:51:06 +01:00
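A minimal sketch of the CycleDiffusion pipeline introduced above, which edits a source image by moving from a source prompt to a target prompt and requires a `DDIMScheduler` (checkpoint id, input file, and hyperparameters are illustrative):

```python
import torch
from PIL import Image
from diffusers import CycleDiffusionPipeline, DDIMScheduler

model_id = "CompVis/stable-diffusion-v1-4"  # illustrative checkpoint
scheduler = DDIMScheduler.from_pretrained(model_id, subfolder="scheduler")
pipe = CycleDiffusionPipeline.from_pretrained(
    model_id, scheduler=scheduler, torch_dtype=torch.float16
).to("cuda")

init_image = Image.open("horse.png").convert("RGB").resize((512, 512))  # hypothetical input file
image = pipe(
    prompt="A photo of a zebra in a field",
    source_prompt="A photo of a horse in a field",
    image=init_image,
    strength=0.8,
    guidance_scale=2.0,
    source_guidance_scale=1.0,
).images[0]
image.save("zebra.png")
```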
Pedro Cuenca 118c5be94a
Docs: Do not require PyTorch nightlies (#1123)
Do not require PyTorch nightlies.
2022-11-03 18:17:23 +01:00
Suraj Patil 7482178162
default fast model loading 🔥 (#1115)
* make accelerate hard dep

* default fast init

* move params to cpu when device map is None

* handle device_map=None

* handle torch < 1.9

* remove device_map="auto"

* style

* add accelerate in torch extra

* remove accelerate from extras["test"]

* raise an error if torch is available but not accelerate

* update installation docs

* Apply suggestions from code review

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* improve default loading speed even further, allow disabling fast loading

* address review comments

* adapt the tests

* fix test_stable_diffusion_fast_load

* fix test_read_init

* temp fix for dummy checks

* Trigger Build

* Apply suggestions from code review

Co-authored-by: Anton Lozhkov <anton@huggingface.co>

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Anton Lozhkov <anton@huggingface.co>
2022-11-03 17:25:57 +01:00
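With `accelerate` as a hard dependency, low-memory "fast" initialization becomes the default in `from_pretrained`. A sketch of the default path and the opt-out, assuming the `low_cpu_mem_usage` flag (model id illustrative):

```python
from diffusers import StableDiffusionPipeline

# Fast, low-CPU-memory initialization is now the default (requires `accelerate`).
pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")

# The fast path can still be disabled explicitly, e.g. to debug weight initialization.
pipe_slow = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", low_cpu_mem_usage=False
)
```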
Will Berman ef2ea33c3b
VQ-diffusion (#658)
* Changes for VQ-diffusion VQVAE

Allow specifying the embedding dimension in VQModel:
`VQModel` by default sets the embedding dimension to the number
of latent channels. The VQ-diffusion VQVAE uses a smaller
embedding dimension (128) than its number of latent channels (256).

Add AttnDownEncoderBlock2D and AttnUpDecoderBlock2D to the up and down
unet block helpers. VQ-diffusion's VQVAE uses those two block types.

* Changes for VQ-diffusion transformer

Modify attention.py so SpatialTransformer can be used for
VQ-diffusion's transformer.

SpatialTransformer:
- Can now operate over discrete inputs (classes of vector embeddings) as well as continuous.
- `in_channels` was made optional in the constructor so two locations where it was passed as a positional arg were moved to kwargs
- modified forward pass to take optional timestep embeddings

ImagePositionalEmbeddings:
- added to provide positional embeddings to discrete inputs for latent pixels

BasicTransformerBlock:
- norm layers were made configurable so that the VQ-diffusion could use AdaLayerNorm with timestep embeddings
- modified forward pass to take optional timestep embeddings

CrossAttention:
- now may optionally take a bias parameter for its query, key, and value linear layers

FeedForward:
- Internal layers are now configurable

ApproximateGELU:
- Activation function in VQ-diffusion's feedforward layer

AdaLayerNorm:
- Norm layer modified to incorporate timestep embeddings

* Add VQ-diffusion scheduler

* Add VQ-diffusion pipeline

* Add VQ-diffusion convert script to diffusers

* Add VQ-diffusion dummy objects

* Add VQ-diffusion markdown docs

* Add VQ-diffusion tests

* some renaming

* some fixes

* more renaming

* correct

* fix typo

* correct weights

* finalize

* fix tests

* Apply suggestions from code review

Co-authored-by: Anton Lozhkov <aglozhkov@gmail.com>

* Apply suggestions from code review

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

* finish

* finish

* up

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Anton Lozhkov <aglozhkov@gmail.com>
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
2022-11-03 16:10:28 +01:00
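A minimal sketch of text-to-image generation with the new VQ-diffusion pipeline (the hub id points at the Microsoft ITHQ checkpoint commonly used with this pipeline; adjust as needed):

```python
import torch
from diffusers import VQDiffusionPipeline

# VQ-diffusion denoises discrete VQVAE latent codes instead of continuous latents.
pipe = VQDiffusionPipeline.from_pretrained(
    "microsoft/vq-diffusion-ithq", torch_dtype=torch.float16
).to("cuda")

image = pipe("teddy bear playing in the pool", num_inference_steps=100).images[0]
image.save("teddy_bear.png")
```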
Revist d38c804320
feat: add repaint (#974)
* feat: add repaint

* fix: fix quality check with `make fix-copies`

* fix: remove old unnecessary arg

* chore: change default to DDPM (looks better in experiments)

* ".to(device)" changed to "device="

Co-authored-by: Anton Lozhkov <aglozhkov@gmail.com>

* make generator device-specific

Co-authored-by: Anton Lozhkov <aglozhkov@gmail.com>

* make generator device-specific and change shape

Co-authored-by: Anton Lozhkov <aglozhkov@gmail.com>

* fix: add preprocessing for image and mask

Co-authored-by: Anton Lozhkov <aglozhkov@gmail.com>

* fix: update test

Co-authored-by: Anton Lozhkov <aglozhkov@gmail.com>

* Update src/diffusers/pipelines/repaint/pipeline_repaint.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* Add docs and examples

* Fix toctree

Co-authored-by: fja <fja@zurich.ibm.com>
Co-authored-by: Anton Lozhkov <aglozhkov@gmail.com>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Anton Lozhkov <anton@huggingface.co>
2022-11-03 15:42:46 +01:00
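A sketch of the RePaint inpainting pipeline added above, following its expected image/mask inputs (checkpoint id, file names, and the jump parameters are illustrative; check the pipeline docs for the exact mask convention):

```python
import torch
from PIL import Image
from diffusers import RePaintPipeline, RePaintScheduler

model_id = "google/ddpm-ema-celebahq-256"  # an unconditional DDPM checkpoint
scheduler = RePaintScheduler.from_pretrained(model_id, subfolder="scheduler")
pipe = RePaintPipeline.from_pretrained(model_id, scheduler=scheduler).to("cuda")

original = Image.open("face.png").convert("RGB").resize((256, 256))  # hypothetical input
mask = Image.open("mask.png").convert("RGB").resize((256, 256))      # hypothetical mask

output = pipe(
    image=original,
    mask_image=mask,
    num_inference_steps=250,
    eta=0.0,
    jump_length=10,
    jump_n_sample=10,
)
output.images[0].save("inpainted.png")
```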
Patrick von Platen d53ffbbdf4
Rename latent (#1102)
* Rename latent

* uP
2022-11-02 11:59:00 +01:00
Suraj Patil 8608795711
[docs] add euler scheduler in docs, how to use different schedulers (#1089)
* add euler scheduler in docs

* add a section for how to use different scheds

* address Patrick's comments
2022-11-02 11:32:46 +01:00
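The docs section on using different schedulers boils down to replacing `pipe.scheduler` with another scheduler built from the same config; a minimal sketch (model id illustrative):

```python
from diffusers import StableDiffusionPipeline, EulerDiscreteScheduler

pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")

# Instantiate the Euler scheduler from the existing scheduler's config and swap it in.
pipe.scheduler = EulerDiscreteScheduler.from_config(pipe.scheduler.config)

image = pipe("a cozy cabin in a snowy forest", num_inference_steps=30).images[0]
```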
MatthieuTPHR 98c42134a5
Up to 2x speedup on GPUs using memory efficient attention (#532)
* 2x speedup using memory efficient attention

* remove einops dependency

* Swap K, M in op instantiation

* Simplify code, remove unnecessary maybe_init call and function, remove unused self.scale parameter

* make xformers a soft dependency

* remove one-liner functions

* change one letter variable to appropriate names

* Remove Env variable dependency, remove MemoryEfficientCrossAttention class and use enable_xformers_memory_efficient_attention method

* Add memory efficient attention toggle to img2img and inpaint pipelines

* Clearer management of xformers' availability

* update optimizations markdown to add info about memory efficient attention

* add benchmarks for TITAN RTX

* More detailed explanation of how the memory-efficient attention benchmarks were run

* Removing autocast from optimization markdown

* import_utils: import torch only if is available

Co-authored-by: Nouamane Tazi <nouamane98@gmail.com>
2022-11-02 10:29:06 +01:00
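Enabling the memory-efficient attention path is a one-line toggle on the pipeline once the optional `xformers` package is installed; a sketch (model id illustrative):

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Route attention through xformers' memory-efficient kernels (requires `xformers`).
pipe.enable_xformers_memory_efficient_attention()

image = pipe("a photograph of an astronaut riding a horse").images[0]
```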
Patrick von Platen 010bc4ea19 incorrect model id 2022-10-31 16:35:59 +00:00
Patrick von Platen c18941b01a
[Better scheduler docs] Improve usage examples of schedulers (#890)
* [Better scheduler docs] Improve usage examples of schedulers

* finish

* fix warnings and add test

* finish

* more replacements

* adapt fast tests hf token

* correct more

* Apply suggestions from code review

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

* Integrate compatibility with euler

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
2022-10-31 17:26:30 +01:00
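The compatibility information these docs rely on is exposed on the scheduler itself; a sketch assuming the `compatibles` attribute (model id illustrative):

```python
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")

# List the scheduler classes that can be swapped into this pipeline.
print(pipe.scheduler.compatibles)
```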
Nathan Lambert 12fd0736dc
clean incomplete pages (#1008) 2022-10-29 09:28:26 +02:00
Minwoo Byeon fc0ca47456
Fix speedup ratio in fp16.mdx (#837) 2022-10-29 09:26:23 +02:00
Pedro Cuenca 6b185b6acd
Update training and fine-tuning docs (#1020)
* Update training and fine-tuning docs.

* Update examples README.

* Update README.

* Add Flax fine-tuning section.

* Accept suggestion

Co-authored-by: Anton Lozhkov <anton@huggingface.co>

* Accept suggestion

Co-authored-by: Anton Lozhkov <anton@huggingface.co>

Co-authored-by: Anton Lozhkov <anton@huggingface.co>
2022-10-28 21:02:08 +02:00
Pi Esposito de00c63217
Document sequential CPU offload method on Stable Diffusion pipeline (#1024)
* document cpu offloading method

* address review comments

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2022-10-27 16:52:21 +02:00
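A sketch of the documented sequential CPU offload method, which keeps submodules on the CPU and moves them to the GPU one at a time during inference (model id illustrative; requires `accelerate`):

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
)

# Trade some speed for a much smaller peak VRAM footprint.
# Typically the pipeline is *not* moved to CUDA with `.to("cuda")` when offloading sequentially.
pipe.enable_sequential_cpu_offload()

image = pipe("an oil painting of a lighthouse at dawn").images[0]
```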
Ella Charlaix e2243de5f2
Fix typo in documentation title (#975) 2022-10-25 20:20:16 +02:00
Patrick von Platen 88fa6b7d68
[Dance Diffusion] Add dance diffusion (#803)
* start

* add more logic

* Update src/diffusers/models/unet_2d_condition_flax.py

* match weights

* up

* make model work

* making class more general, fixing missed file rename

* small fix

* make new conversion work

* up

* finalize conversion

* up

* first batch of variable renamings

* remove c and c_prev var names

* add mid and out block structure

* add pipeline

* up

* finish conversion

* finish

* upload

* more fixes

* Apply suggestions from code review

* add attr

* up

* uP

* up

* finish tests

* finish

* uP

* finish

* fix test

* up

* naming consistency in tests

* Apply suggestions from code review

Co-authored-by: Suraj Patil <surajp815@gmail.com>
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
Co-authored-by: Nathan Lambert <nathan@huggingface.co>
Co-authored-by: Anton Lozhkov <anton@huggingface.co>

* remove hardcoded 16

* Remove bogus

* fix some stuff

* finish

* improve logging

* docs

* upload

Co-authored-by: Nathan Lambert <nol@berkeley.edu>
Co-authored-by: Suraj Patil <surajp815@gmail.com>
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
Co-authored-by: Nathan Lambert <nathan@huggingface.co>
Co-authored-by: Anton Lozhkov <anton@huggingface.co>
2022-10-25 18:39:25 +02:00
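Dance Diffusion generates raw audio rather than images; a minimal sketch, assuming the pipeline returns an `audios` array and the UNet config carries `sample_rate` (the Harmonai checkpoint id and output length are illustrative):

```python
import scipy.io.wavfile
from diffusers import DanceDiffusionPipeline

pipe = DanceDiffusionPipeline.from_pretrained("harmonai/maestro-150k").to("cuda")

# Generate ~4 seconds of audio; the result is a numpy array of shape (channels, samples).
output = pipe(audio_length_in_s=4.0)
audio = output.audios[0]

scipy.io.wavfile.write("sample.wav", rate=pipe.unet.config.sample_rate, data=audio.T)
```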
Pedro Cuenca 3d02c92187
mps changes for PyTorch 1.13 (#926)
* Docs: refer to pre-RC version of PyTorch 1.13.0.

* Remove temporary workaround for unavailable op.

* Update comment to make it less ambiguous.

* Remove use of contiguous in mps.

It appears to no longer be necessary.

* Special case: use einsum for much better performance in mps

* Update mps docs.

* Minor doc update.

* Accept suggestion

Co-authored-by: Anton Lozhkov <anton@huggingface.co>

Co-authored-by: Anton Lozhkov <anton@huggingface.co>
2022-10-25 16:41:51 +02:00
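For context on the `mps` path these changes target, a sketch of running the pipeline on Apple Silicon with PyTorch 1.13 (the warm-up pass follows the recommendation in the mps docs; model id illustrative):

```python
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5").to("mps")

# One short warm-up pass is recommended on mps before the first real generation.
_ = pipe("warm-up", num_inference_steps=1)

image = pipe("a watercolor painting of a fox").images[0]
```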
Nathan Lambert 2fb8fafa4b
add community pipeline docs; add minimal text to some empty doc pages (#930)
* add community pipeline docs

* fix style in code snippets (lol)

* clean up loading docs

* add license to doc files

* fix some weird links
2022-10-24 14:20:08 -07:00
apolinario 8aac1f99d7
v1-5 docs updates (#921)
* Update README.md

Additionally add Flax so the model card can be slimmer and point to this page

* Find and replace all

* v-1-5 -> v1-5

* revert test changes

* Update README.md

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* Update docs/source/quicktour.mdx

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

* Update README.md

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

* Update docs/source/quicktour.mdx

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

* Update README.md

Co-authored-by: Suraj Patil <surajp815@gmail.com>

* Revert certain references to v1-5

* Docs changes

* Apply suggestions from code review

Co-authored-by: apolinario <joaopaulo.passos+multimodal@gmail.com>
Co-authored-by: anton-l <anton@huggingface.co>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
Co-authored-by: Suraj Patil <surajp815@gmail.com>
2022-10-24 22:50:23 +02:00
Patrick von Platen 83f8a5ff70
[Stable Diffusion] Add components function (#889)
* [Stable Diffusion] Add components function

* uP
2022-10-20 13:28:11 +02:00
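The new `components` property lets several pipelines share one set of loaded weights; a sketch (model id illustrative):

```python
from diffusers import StableDiffusionPipeline, StableDiffusionImg2ImgPipeline

# Load the text-to-image pipeline once...
text2img = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")

# ...then build an img2img pipeline from the same components without re-downloading
# or duplicating the model weights in memory.
img2img = StableDiffusionImg2ImgPipeline(**text2img.components)
```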
Pedro Cuenca 8124863d1f
Initial docs update for new in-painting pipeline (#910)
Docs update for new in-painting pipeline.

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2022-10-19 17:31:23 +02:00
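A sketch of the new in-painting pipeline these docs describe (checkpoint id and input file names are illustrative; the white areas of the mask are repainted):

```python
import torch
from PIL import Image
from diffusers import StableDiffusionInpaintPipeline

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16
).to("cuda")

init_image = Image.open("room.png").convert("RGB").resize((512, 512))  # hypothetical input
mask_image = Image.open("mask.png").convert("RGB").resize((512, 512))  # hypothetical mask

image = pipe(prompt="a modern armchair", image=init_image, mask_image=mask_image).images[0]
image.save("inpainted_room.png")
```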
Patrick von Platen c1b6ea3dce
Update img2img.mdx 2022-10-12 00:52:30 +02:00
Pedro Cuenca 24b8b5cf5e
`mps`: Alternative implementation for `repeat_interleave` (#766)
* mps: alt. implementation for repeat_interleave

* style

* Bump mps version of PyTorch in the documentation.

* Apply suggestions from code review

Co-authored-by: Suraj Patil <surajp815@gmail.com>

* Simplify: do not check for device.

* style

* Fix repeat dimensions:

- The unconditional embeddings are always created from a single prompt.
- I was shadowing the batch_size var.

* Split long lines as suggested by Suraj.

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Suraj Patil <surajp815@gmail.com>
2022-10-11 20:30:09 +02:00
Omar Sanseviero 757babfcad
Fix indentation in the code example (#802)
Update custom_pipelines.mdx
2022-10-11 20:26:52 +02:00
anton-l 970e30606c Revert "[v0.4.0] Temporarily remove Flax modules from the public API (#755)"
This reverts commit 2e209c30cf.
2022-10-06 18:35:40 +02:00
Anton Lozhkov 2e209c30cf
[v0.4.0] Temporarily remove Flax modules from the public API (#755)
Temporarily remove Flax modules from the public API
2022-10-06 18:10:36 +02:00
Patrick von Platen d9c449ea30
Custom Pipelines (#744)
* [Custom Pipelines]

* uP

* make style

* finish

* finish

* remove ipdb

* upload

* fix

* finish docs

* Apply suggestions from code review

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
Co-authored-by: apolinario <joaopaulo.passos@gmail.com>

* finish

* final uploads

* remove unnecessary test

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
Co-authored-by: apolinario <joaopaulo.passos@gmail.com>
2022-10-06 16:54:02 +02:00
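Community pipelines can be loaded by name through the `custom_pipeline` argument of `DiffusionPipeline.from_pretrained`; a sketch (both the checkpoint and the community pipeline name are illustrative):

```python
from diffusers import DiffusionPipeline

# Load the standard weights but run them through a community-contributed pipeline class,
# here the long-prompt-weighting Stable Diffusion pipeline.
pipe = DiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    custom_pipeline="lpw_stable_diffusion",
)

image = pipe("a highly detailed matte painting of a castle on a cliff").images[0]
```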
Patrick von Platen 4deb16e830
[Docs] Advertise fp16 instead of autocast (#740)
up
2022-10-05 22:20:53 +02:00
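The docs now recommend loading the weights in half precision rather than wrapping inference in `torch.autocast`; a sketch (model id illustrative):

```python
import torch
from diffusers import StableDiffusionPipeline

# Load directly in fp16 instead of relying on autocast; faster and lighter on CUDA GPUs.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

image = pipe("a portrait of a robot painter").images[0]
```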
Suraj Patil 19e559d5e9
remove use_auth_token from remaining places (#737)
remove use_auth_token
2022-10-05 17:40:49 +02:00
Patrick von Platen 78744b6a8f
No more use_auth_token=True (#733)
* up

* uP

* uP

* make style

* Apply suggestions from code review

* up

* finish
2022-10-05 17:16:15 +02:00
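After these changes, a cached login token is picked up automatically, so gated checkpoints load without passing `use_auth_token=True`; a sketch (model id illustrative):

```python
# Run `huggingface-cli login` once; from_pretrained then uses the cached token.
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
```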
Kashif Rasul 726aba089d
[Pytorch] pytorch only timesteps (#724)
* pytorch timesteps

* style

* get rid of if-else

* fix test

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2022-10-05 12:55:51 +02:00
Yuta Hayashibe 7e92c5bc73
Fix typos (#718)
* Fix typos

* Update examples/dreambooth/train_dreambooth.py

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
2022-10-04 15:22:14 +02:00
Nouamane Tazi daa22050c7
[docs] fix table in fp16.mdx (#683) 2022-09-30 15:15:22 +02:00
Nouamane Tazi 9ebaea545f
Optimize Stable Diffusion (#371)
* initial commit

* make UNet stream capturable

* try to fix noise_pred value

* remove cuda graph and keep NB

* non blocking unet with PNDMScheduler

* make timesteps np arrays for pndm scheduler
because lists don't get formatted to tensors in `self.set_format`

* make max async in pndm

* use channel last format in unet

* avoid moving timesteps device in each unet call

* avoid memcpy op in `get_timestep_embedding`

* add `channels_last` kwarg to `DiffusionPipeline.from_pretrained`

* update TODO

* replace `channels_last` kwarg with `memory_format` for more generality

* revert the channels_last changes to leave it for another PR

* remove non_blocking when moving input ids to device

* remove blocking from all .to() operations at beginning of pipeline

* fix merging

* fix merging

* model can run in other precisions without autocast

* attn refactoring

* Revert "attn refactoring"

This reverts commit 0c70c0e189cd2c4d8768274c9fcf5b940ee310fb.

* remove restriction to run conv_norm in fp32

* use `baddbmm` instead of `matmul` in attention for better perf

* removing all reshapes to test perf

* Revert "removing all reshapes to test perf"

This reverts commit 006ccb8a8c6bc7eb7e512392e692a29d9b1553cd.

* add shapes comments

* hardcode what's needed for jitting

* Revert "hardcore whats needed for jitting"

This reverts commit 2fa9c698eae2890ac5f8e367ca80532ecf94df9a.

* Revert "remove restriction to run conv_norm in fp32"

This reverts commit cec592890c32da3d1b78d38b49e4307aedf459b9.

* revert using baddmm in attention's forward

* cleanup comment

* remove restriction to run conv_norm in fp32. no quality loss was noticed

This reverts commit cc9bc1339c998ebe9e7d733f910c6d72d9792213.

* add more optimizations techniques to docs

* Revert "add shapes comments"

This reverts commit 31c58eadb8892f95478cdf05229adf678678c5f4.

* apply suggestions

* make quality

* apply suggestions

* styling

* `scheduler.timesteps` are now arrays so we don't need .to()

* remove useless .type()

* use mean instead of max in `test_stable_diffusion_inpaint_pipeline_k_lms`

* move scheduler timestamps to correct device if tensors

* add device to `set_timesteps` in LMSD scheduler

* `self.scheduler.set_timesteps` now uses device arg for schedulers that accept it

* quick fix

* styling

* remove kwargs from schedulers `set_timesteps`

* revert to using max in K-LMS inpaint pipeline test

* Revert "`self.scheduler.set_timesteps` now uses device arg for schedulers that accept it"

This reverts commit 00d5a51e5c20d8d445c8664407ef29608106d899.

* move timesteps to correct device before loop in SD pipeline

* apply previous fix to other SD pipelines

* UNet now accepts tensor timesteps even on the wrong device, to avoid errors
- it shouldn't affect performance if timesteps are already on the correct device
- it does slow down performance if they're on the wrong device

* fix pipeline when timesteps are arrays with strides
2022-09-30 09:49:13 +02:00
Tanishq Abraham f5b9bc8b49
Update index.mdx (#670) 2022-09-29 09:17:52 +02:00
Kashif Rasul bd8df2da89
[Pytorch] Pytorch only schedulers (#534)
* pytorch only schedulers

* fix style

* remove match_shape

* pytorch only ddpm

* remove SchedulerMixin

* remove numpy from karras_ve

* fix types

* remove numpy from lms_discrete

* remove numpy from pndm

* fix typo

* remove mixin and numpy from sde_vp and ve

* remove remaining tensor_format

* fix style

* sigmas has to be torch tensor

* removed set_format in readme

* remove set format from docs

* remove set_format from pipelines

* update tests

* fix typo

* continue to use mixin

* fix imports

* removed unused imports

* match shape instead of assuming image shapes

* remove import typo

* update call to add_noise

* use math instead of numpy

* fix t_index

* removed commented out numpy tests

* timesteps needs to be discrete

* cast timesteps to int in flax scheduler too

* fix device mismatch issue

* small fix

* Update src/diffusers/schedulers/scheduling_pndm.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2022-09-27 15:27:34 +02:00
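With the schedulers now operating purely on PyTorch tensors, `add_noise` and the `timesteps` attribute work directly with `torch.Tensor`s; a minimal sketch with DDPM:

```python
import torch
from diffusers import DDPMScheduler

scheduler = DDPMScheduler(num_train_timesteps=1000)

clean = torch.randn(2, 3, 64, 64)  # fake "images"
noise = torch.randn_like(clean)
timesteps = torch.randint(0, scheduler.config.num_train_timesteps, (2,))

# Forward diffusion: everything stays a torch tensor, no numpy round-trips.
noisy = scheduler.add_noise(clean, noise, timesteps)
print(noisy.shape)  # torch.Size([2, 3, 64, 64])
```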
Younes Belkada 8b0be93596
Flax documentation (#589)
* documenting `attention_flax.py` file

* documenting `embeddings_flax.py`

* documenting `unet_blocks_flax.py`

* Add new objs to doc page

* document `vae_flax.py`

* Apply suggestions from code review

* modify `unet_2d_condition_flax.py`

* make style

* Apply suggestions from code review

* make style

* Apply suggestions from code review

* fix indent

* fix typo

* fix indent unet

* Update src/diffusers/models/vae_flax.py

* Apply suggestions from code review

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

Co-authored-by: Mishig Davaadorj <dmishig@gmail.com>
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
2022-09-23 13:24:16 +02:00
Ryan Russell df80ccf7de
docs: `.md` readability fixups (#619)
Signed-off-by: Ryan Russell <git@ryanrussell.org>
2022-09-23 12:02:27 +02:00
Ryan Russell f149d037de
docs: fix `stochastic_karras_ve` ref (#618)
Signed-off-by: Ryan Russell <git@ryanrussell.org>

Signed-off-by: Ryan Russell <git@ryanrussell.org>
2022-09-22 18:36:29 +02:00
Yuta Hayashibe 76d492ea49
Fix typos and add Typo check GitHub Action (#483)
* Fix typos

* Add a typo check action

* Fix a bug

* Changed to manual typo check currently

Ref: https://github.com/huggingface/diffusers/pull/483#pullrequestreview-1104468010

Co-authored-by: Anton Lozhkov <aglozhkov@gmail.com>

* Removed a confusing message

* Renamed "nin_shortcut" to "in_shortcut"

* Add memo about NIN

Co-authored-by: Anton Lozhkov <aglozhkov@gmail.com>
2022-09-16 15:36:51 +02:00
Jithin James ab7a78e8f1
docs: broken doc links for relative links (#504)
fix: broken doc links for relative links
2022-09-14 00:50:02 +02:00
Nathan Lambert 25a51b63ca
fix table formatting for stable diffusion pipeline doc (add blank line) (#471)
fix table formatting (add blank line)
2022-09-12 10:28:27 +02:00
Partho 8eaaa546d8
Docs: fix installation typo (#453)
installation doc typo fix
2022-09-09 15:17:17 -06:00
Patrick von Platen 44968e4204
[Docs] Correct links (#432) 2022-09-08 21:29:24 +02:00
Patrick von Platen 1e98723e12 finish 2022-09-08 17:47:54 +02:00
Patrick von Platen 4e2c1f3a4d
Add config docs (#429)
* advance

* finish

* finish
2022-09-08 17:46:03 +02:00
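The config system documented here records every argument a model or scheduler was instantiated with and exposes it on `.config`; a sketch (model id illustrative):

```python
from diffusers import DDIMScheduler

scheduler = DDIMScheduler.from_pretrained("runwayml/stable-diffusion-v1-5", subfolder="scheduler")

# `config` is a frozen mapping of the constructor arguments and can be reused
# to build another scheduler with the same settings.
print(scheduler.config)
clone = DDIMScheduler.from_config(scheduler.config)
```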
Kashif Rasul 5e6417e988
[Docs] Models (#416)
* docs for attention

* types for embeddings

* unet2d docstrings

* UNet2DConditionModel docstrings

* fix typos

* style and vq-vae docstrings

* docstrings  for VAE

* Update src/diffusers/models/unet_2d.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* make style

* added inherits from sentence

* docstring to forward

* make style

* Apply suggestions from code review

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

* finish model docs

* up

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
2022-09-08 17:28:11 +02:00
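As a complement to the model docstrings added here, a minimal sketch of calling a `UNet2DModel` directly (the layer sizes are illustrative defaults, not tied to any released checkpoint):

```python
import torch
from diffusers import UNet2DModel

# A small unconditional UNet built straight from constructor arguments.
model = UNet2DModel(sample_size=32, in_channels=3, out_channels=3)

noisy = torch.randn(1, 3, 32, 32)
timestep = torch.tensor([10])

with torch.no_grad():
    pred = model(noisy, timestep).sample  # predicted noise, same shape as the input

print(pred.shape)  # torch.Size([1, 3, 32, 32])
```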