camenduru
1f1b6c6544
Device to use (e.g. cpu, cuda:0, cuda:1, etc.) ( #1844 )
...
* Device to use (e.g. cpu, cuda:0, cuda:1, etc.)
* "cuda" if torch.cuda.is_available() else "cpu"
2022-12-27 14:42:56 +01:00
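The device-selection default noted in the commit above ("cuda" if available, else "cpu") can be sketched in plain Python. `pick_device` is a hypothetical helper for illustration, not the script's actual argument parser; torch is treated as optional so the sketch runs anywhere:

```python
def pick_device(requested=None):
    """Return an explicit device string (e.g. "cpu", "cuda:0", "cuda:1"),
    else fall back to CUDA when torch reports it as available."""
    if requested:
        return requested
    try:
        import torch
        return "cuda" if torch.cuda.is_available() else "cpu"
    except ImportError:
        # torch not installed: CPU is the only option
        return "cpu"
```

Passing an explicit device always wins; the CUDA probe only runs for the default case.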
Pedro Cuenca
df2b548e89
Make safety_checker optional in more pipelines ( #1796 )
...
* Make safety_checker optional in more pipelines.
* Remove inappropriate comment in inpaint pipeline.
* InPaint Test: set feature_extractor to None.
* Remove import
* img2img test: set feature_extractor to None.
* inpaint sd2 test: set feature_extractor to None.
Co-authored-by: Suraj Patil <surajp815@gmail.com>
2022-12-25 21:58:45 +01:00
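The optional-`safety_checker` pattern from the commit above can be sketched with a toy class (`MiniPipeline` and `run_safety_checker` are illustrative names, not the pipelines' real code): when the checker is `None`, its companion `feature_extractor` is unused and images pass through unfiltered.

```python
class MiniPipeline:
    """Toy sketch of a pipeline with an optional safety checker."""

    def __init__(self, safety_checker=None, feature_extractor=None):
        self.safety_checker = safety_checker
        self.feature_extractor = feature_extractor

    def run_safety_checker(self, images):
        if self.safety_checker is None:
            # no checker configured: return images unchanged, flag nothing
            return images, [False] * len(images)
        return self.safety_checker(images, self.feature_extractor)
```

In the real library this corresponds to loading a pipeline with `safety_checker=None` and `feature_extractor=None`, as the tests in this PR do.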
Daquan Lin
b6d4702301
fix small mistake in annotation: 32 -> 64 ( #1780 )
...
Fix inconsistencies between code and comments in the function 'preprocess'
2022-12-24 19:56:57 +01:00
Suraj Patil
9be94d9c66
[textual_inversion] unwrap_model text encoder before accessing weights ( #1816 )
...
* unwrap_model text encoder before accessing weights
* fix another call
* fix the right call
2022-12-23 16:46:24 +01:00
Patrick von Platen
f2acfb67ac
Remove hardcoded names from PT scripts ( #1778 )
...
* Remove hardcoded names from PT scripts
* Apply suggestions from code review
Co-authored-by: Suraj Patil <surajp815@gmail.com>
2022-12-23 15:36:29 +01:00
Prathik Rao
8aa4372aea
reorder model wrap + bug fix ( #1799 )
...
* reorder model wrap
* bug fix
Co-authored-by: Prathik Rao <prathikrao@microsoft.com>
2022-12-22 14:51:47 +01:00
Pedro Cuenca
6043838971
Fix OOM when using PyTorch with JAX installed. ( #1795 )
...
Don't initialize JAX on startup.
2022-12-21 14:07:24 +01:00
Patrick von Platen
4125756e88
Refactor cross attention and allow mechanism to tweak cross attention function ( #1639 )
...
* first proposal
* rename
* up
* Apply suggestions from code review
* better
* up
* finish
* up
* rename
* correct versatile
* up
* up
* up
* up
* fix
* Apply suggestions from code review
* make style
* Apply suggestions from code review
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* add error message
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
2022-12-20 18:49:05 +01:00
Dhruv Naik
a9190badf7
Add Flax stable diffusion img2img pipeline ( #1355 )
...
* add flax img2img pipeline
* update pipeline
* black format file
* remove arg from get_timesteps
* update get_timesteps
* fix bug: make use of timesteps in the for loop
* black file
* black, isort, flake8
* update docstring
* update readme
* update flax img2img readme
* update sd pipeline init
* Update src/diffusers/pipelines/stable_diffusion/pipeline_flax_stable_diffusion_img2img.py
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* Update src/diffusers/pipelines/stable_diffusion/pipeline_flax_stable_diffusion_img2img.py
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* update inits
* revert change
* update var name to image, typo
* update readme
* return new t_start instead of modified timestep
* black format
* isort files
* update docs
* fix-copies
* update prng_seed typing
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
2022-12-20 16:25:08 +01:00
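The `get_timesteps` bullets above concern how img2img truncates the schedule: `strength` decides how many of the final denoising steps run, and the fixed version returns the new start index rather than a modified timestep. A stdlib-only sketch of that arithmetic (`get_timestep_start` is an illustrative name, loosely following the pipeline's logic):

```python
def get_timestep_start(num_inference_steps, strength):
    """strength in [0, 1]: 1.0 runs the full schedule, lower values skip
    the earliest (noisiest) steps. Returns the index of the first step."""
    init_timestep = min(int(num_inference_steps * strength), num_inference_steps)
    t_start = max(num_inference_steps - init_timestep, 0)
    return t_start
```

With `strength=1.0` nothing is skipped; with `strength=0.5` the loop starts halfway through the schedule.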
Suraj Patil
d07f73003d
Fix num images per prompt unclip ( #1787 )
...
* use repeat_interleave
* fix repeat
* Trigger Build
* don't install accelerate from main
* install released accelerate for mps test
* Remove additional accelerate installation from main.
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
2022-12-20 16:03:38 +01:00
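The fix above swaps a tiling repeat for `repeat_interleave` when expanding per-prompt tensors to `num_images_per_prompt` copies; the two orderings differ, and the interleaved one keeps each prompt's copies adjacent. A list-based sketch of both behaviors (function names are illustrative, standing in for the torch ops):

```python
def tile(xs, n):
    # torch.Tensor.repeat-style along dim 0: [a, b] with n=2 -> [a, b, a, b]
    return xs * n

def repeat_interleave(xs, n):
    # torch.repeat_interleave-style: [a, b] with n=2 -> [a, a, b, b]
    return [x for x in xs for _ in range(n)]
```

Downstream code that assumes sample `i * n + j` belongs to prompt `i` only works with the interleaved ordering.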
Pedro Cuenca
a6fb9407fd
Dreambooth docs: minor fixes ( #1758 )
...
* Section header for in-painting, inference from checkpoint.
* Inference: link to section to perform inference from checkpoint.
* Move Dreambooth in-painting instructions to the proper place.
2022-12-20 08:39:16 +01:00
Patrick von Platen
261a448c6a
Correct hf hub download ( #1767 )
...
* allow model download when no internet
* up
* make style
2022-12-20 02:07:15 +01:00
Simon Kirsten
f106ab40b3
[Flax] Stateless schedulers, fixes and refactors ( #1661 )
...
* [Flax] Stateless schedulers, fixes and refactors
* Remove scheduling_common_flax and some renames
* Update src/diffusers/schedulers/scheduling_pndm_flax.py
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
2022-12-20 01:42:41 +01:00
Emil Bogomolov
d87cc15977
expose polynomial:power and cosine_with_restarts:num_cycles params ( #1737 )
...
* expose polynomial:power and cosine_with_restarts:num_cycles using get_scheduler func, add it to train_dreambooth.py
* fix formatting
* fix style
* Update src/diffusers/optimization.py
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
2022-12-20 01:41:37 +01:00
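The `power` and `num_cycles` parameters exposed above control the shape of the learning-rate multiplier returned by `get_scheduler`. A stdlib sketch of the two schedules, loosely following `diffusers.optimization` (function names and the exact warmup handling are illustrative assumptions):

```python
import math

def cosine_with_restarts_lambda(step, total_steps, warmup=0, num_cycles=1):
    # multiplier on the base LR; the cosine restarts num_cycles times
    if step < warmup:
        return step / max(1, warmup)
    progress = (step - warmup) / max(1, total_steps - warmup)
    if progress >= 1.0:
        return 0.0
    return max(0.0, 0.5 * (1.0 + math.cos(math.pi * ((num_cycles * progress) % 1.0))))

def polynomial_lambda(step, total_steps, lr_init=1.0, lr_end=1e-7, power=1.0, warmup=0):
    # decays from lr_init toward lr_end following (1 - progress) ** power
    if step < warmup:
        return step / max(1, warmup)
    if step > total_steps:
        return lr_end / lr_init
    remaining = 1 - (step - warmup) / (total_steps - warmup)
    return (lr_end + (lr_init - lr_end) * remaining ** power) / lr_init
```

`power=1.0` gives a linear decay; larger `num_cycles` makes the cosine schedule restart more often before the run ends.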
Patrick von Platen
e29dc97215
make style
2022-12-20 01:38:45 +01:00
Ilmari Heikkinen
8e4733b3c3
Only test for xformers when enabling them #1773 ( #1776 )
...
* only check for xformers when xformers are enabled
* only test for xformers when enabling them
2022-12-20 01:38:28 +01:00
Prathik Rao
847daf25c7
update train_unconditional_ort.py ( #1775 )
...
* reflect changes
* run make style
Co-authored-by: Prathik Rao <prathikrao@microsoft.com>
Co-authored-by: Prathik Rao <prathikrao@microsoft.com@orttrainingdev7.d32nl1ml4oruzj4qz3bqlggovf.px.internal.cloudapp.net>
2022-12-19 23:58:55 +01:00
Pedro Cuenca
9f8c915a75
[Dreambooth] flax fixes ( #1765 )
...
* Fail if there are less images than the effective batch size.
* Remove lr-scheduler arg as it's currently ignored.
* Make guidance_scale work for batch_size > 1.
2022-12-19 20:42:25 +01:00
Anton Lozhkov
8331da4683
Bump to 0.12.0.dev0 ( #1771 )
2022-12-19 18:44:08 +01:00
Anton Lozhkov
f1a32203aa
[Tests] Fix UnCLIP cpu offload tests ( #1769 )
2022-12-19 18:25:08 +01:00
Nan Liu
6f15026330
update composable diffusion for an updated diffuser library ( #1697 )
...
* update composable diffusion for an updated diffuser library
* fix style/quality for code
* Revert "fix style/quality for code"
This reverts commit 71f23497639fe69de00d93cf91edc31b08dcd7a4.
* update style
* reduce memory usage by computing score sequentially
2022-12-19 18:03:40 +01:00
anton-
a5edb981a7
[Patch] Return import for the unclip pipeline loader
2022-12-19 17:56:42 +01:00
anton-
54796b7e43
Release: v0.11.0
2022-12-19 17:43:22 +01:00
Anton Lozhkov
4cb887e0a7
Transformers version req for UnCLIP ( #1766 )
...
* Transformers version req for UnCLIP
* add to the list
2022-12-19 17:11:17 +01:00
Anish Shah
9f657f106d
[Examples] Update train_unconditional.py to include logging argument for Wandb ( #1719 )
...
Update train_unconditional.py
Add logger flag to choose between tensorboard and wandb
2022-12-19 16:57:03 +01:00
Patrick von Platen
ce1c27adc8
[Revision] Don't recommend using revision ( #1764 )
2022-12-19 16:25:41 +01:00
Patrick von Platen
b267d28566
[Versatile] fix attention mask ( #1763 )
2022-12-19 15:58:39 +01:00
Anton Lozhkov
c7b4acfb37
Add CPU offloading to UnCLIP ( #1761 )
...
* Add CPU offloading to UnCLIP
* use fp32 for testing the offload
2022-12-19 14:44:08 +01:00
Suraj Patil
be38b2d711
[UnCLIPPipeline] fix num_images_per_prompt ( #1762 )
...
duplicate mask for num_images_per_prompt
2022-12-19 14:32:46 +01:00
Anton Lozhkov
32a5d70c42
Support attn2==None for xformers ( #1759 )
2022-12-19 12:43:30 +01:00
Patrick von Platen
429e5449c1
Add attention mask to unclip ( #1756 )
...
* Remove bogus file
* [Unclip] Add efficient attention
* [Unclip] Add efficient attention
2022-12-19 12:10:46 +01:00
Anton Lozhkov
dc7cd893fd
Add resnet_time_scale_shift to VD layers ( #1757 )
2022-12-19 12:01:46 +01:00
Mikołaj Siedlarek
8890758823
Correct help text for scheduler_type flag in scripts. ( #1749 )
2022-12-19 11:27:23 +01:00
Will Berman
b25843e799
unCLIP docs ( #1754 )
...
* [unCLIP docs] markdown
* [unCLIP docs] UnCLIPPipeline
2022-12-19 10:27:32 +01:00
Will Berman
830a9d1f01
[fix] pipeline_unclip generator ( #1751 )
...
* [fix] pipeline_unclip generator
pass generator to all schedulers
* fix fast tests test data
2022-12-19 10:27:18 +01:00
Will Berman
2dcf64b72a
kakaobrain unCLIP ( #1428 )
...
* [wip] attention block updates
* [wip] unCLIP unet decoder and super res
* [wip] unCLIP prior transformer
* [wip] scheduler changes
* [wip] text proj utility class
* [wip] UnCLIPPipeline
* [wip] kakaobrain unCLIP convert script
* [unCLIP pipeline] fixes re: @patrickvonplaten
remove callbacks
move denoising loops into call function
* UNCLIPScheduler re: @patrickvonplaten
Revert changes to DDPMScheduler. Make UNCLIPScheduler, a modified
DDPM scheduler with changes to support karlo
* mask -> attention_mask re: @patrickvonplaten
* [DDPMScheduler] remove leftover change
* [docs] PriorTransformer
* [docs] UNet2DConditionModel and UNet2DModel
* [nit] UNCLIPScheduler -> UnCLIPScheduler
matches existing unclip naming better
* [docs] SchedulingUnCLIP
* [docs] UnCLIPTextProjModel
* refactor
* finish licenses
* rename all to attention_mask and prep in models
* more renaming
* don't expose unused configs
* final renaming fixes
* remove x attn mask when not necessary
* configure kakao script to use new class embedding config
* fix copies
* [tests] UnCLIPScheduler
* finish x attn
* finish
* remove more
* rename condition blocks
* clean more
* Apply suggestions from code review
* up
* fix
* [tests] UnCLIPPipelineFastTests
* remove unused imports
* [tests] UnCLIPPipelineIntegrationTests
* correct
* make style
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2022-12-18 15:15:30 -08:00
Patrick von Platen
402b9560b2
Remove license accept ticks
2022-12-19 00:10:17 +01:00
Anton Lozhkov
c2a38ef9df
Fix/update the LDM pipeline and tests ( #1743 )
...
* Fix/update LDM tests
* batched generators
2022-12-18 11:49:53 +01:00
Anton Lozhkov
08cc36ddff
Fix MPS fast test warnings ( #1744 )
...
* unset level
2022-12-17 22:57:30 +01:00
Peter
723e8f6bb4
Fix ONNX img2img preprocessing ( #1736 )
...
Co-authored-by: Peter <peterto@users.noreply.github.com>
2022-12-17 13:12:10 +01:00
Patrick von Platen
c53a850604
[Batched Generators] This PR adds generators that are useful to make batched generation fully reproducible ( #1718 )
...
* [Batched Generators] all batched generators
* up
* up
* up
* up
* up
* up
* up
* up
* up
* up
* up
* up
* up
* up
* up
* up
* hey
* up again
* fix tests
* Apply suggestions from code review
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* correct tests
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
2022-12-17 11:13:16 +01:00
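The batched-generators PR above lets callers pass one `torch.Generator` per prompt so batched results match single-sample runs exactly. The idea can be sketched with `random.Random` standing in for `torch.Generator` (`sample_noise` is an illustrative name, not the library's API):

```python
import random

def sample_noise(seeds, length):
    # one RNG per sample, mirroring a list of torch.Generator objects:
    # each row depends only on its own seed, never on batch position
    return [[random.Random(seed).random() for _ in range(length)] for seed in seeds]
```

Because each row is seeded independently, generating a sample inside a batch of three reproduces the same values as generating it alone.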
Anton Lozhkov
086c7f9ea8
Nightly integration tests ( #1664 )
...
* [WIP] Nightly integration tests
* initial SD tests
* update SD slow tests
* style
* repaint
* ImageVariations
* style
* finish imgvar
* img2img tests
* debug
* inpaint 1.5
* inpaint legacy
* torch isn't happy about deterministic ops
* allclose -> max diff for shorter logs
* add SD2
* debug
* Update tests/pipelines/stable_diffusion_2/test_stable_diffusion.py
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* Update tests/pipelines/stable_diffusion/test_stable_diffusion.py
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* fix refs
* Update src/diffusers/utils/testing_utils.py
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* fix refs
* remove debug
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
2022-12-16 18:51:11 +01:00
Pedro Cuenca
acd317810b
Docs: recommend xformers ( #1724 )
...
* Fix links to flash attention.
* Add xformers installation instructions.
* Make link to xformers install more prominent.
* Link to xformers install from training docs.
2022-12-16 15:49:01 +01:00
Patrick von Platen
c6d0dff4a3
Fix ldm tests on master by not running the CPU tests on GPU ( #1729 )
2022-12-16 15:28:40 +01:00
Anton Lozhkov
a40095dd22
Fix ONNX img2img preprocessing and add fast tests coverage ( #1727 )
...
* Fix ONNX img2img preprocessing and add fast tests coverage
* revert
* disable progressbars
2022-12-16 15:24:16 +01:00
Partho
727434c206
Accept latents as optional input in Latent Diffusion pipeline ( #1723 )
...
* Latent Diffusion pipeline accepts latents
* make style
* check for mps
randn does not work reproducibly on mps
2022-12-16 12:13:41 +01:00
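The pattern in the PR above, accepting caller-provided latents instead of always sampling fresh noise, can be sketched stdlib-only (`prepare_latents` is an illustrative name; the real pipeline draws Gaussian noise with `torch.randn`, which the commit notes is not reproducible on mps):

```python
import random

def prepare_latents(num_values, latents=None, seed=0):
    # honor caller-supplied latents for reproducibility; otherwise sample
    if latents is None:
        rng = random.Random(seed)
        latents = [rng.gauss(0.0, 1.0) for _ in range(num_values)]
    return latents
```

Supplying the same latents (or the same seed) across runs makes generation deterministic regardless of where the noise was produced.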
YiYi Xu
21e61eb3a9
Added a README page for docs and a "schedulers" page ( #1710 )
...
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
2022-12-15 13:04:40 -10:00
Haihao Shen
c891330f79
Add examples with Intel optimizations ( #1579 )
...
* Add examples with Intel optimizations (BF16 fine-tuning and inference)
* Remove unused package
* Add README for intel_opts and refine the description for research projects
* Add notes of intel opts for diffusers
2022-12-15 21:16:27 +01:00
jiqing-feng
c5f04d4e34
apply amp bf16 on textual inversion ( #1465 )
...
* add conf.yaml
* enable bf16
enable amp bf16 for unet forward
fix style
fix readme
remove useless file
* change amp to full bf16
* align
* make style
* fix format
2022-12-15 21:15:23 +01:00
CyberMeow
61dec53356
Improve pipeline_stable_diffusion_inpaint_legacy.py ( #1585 )
...
* update inpaint_legacy to allow the use of predicted noise to construct intermediate diffused images
* Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_inpaint_legacy.py
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2022-12-15 20:59:31 +01:00