Patrick von Platen
29b2c93c90
Make repo structure consistent ( #1862 )
...
* move files a bit
* more refactors
* fix more
* more fixes
* fix more onnx
* make style
* upload
* fix
* up
* fix more
* up again
* up
* small fix
* Update src/diffusers/__init__.py
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* correct
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
2022-12-30 11:51:08 +01:00
Simon Kirsten
ab0e92fdc8
Flax: Fix img2img and align with other pipeline ( #1824 )
...
* Flax: Add components function
* Flax: Fix img2img and align with other pipeline
* Flax: Fix PRNGKey type
* Refactor strength to start_timestep
* Fix preprocess images
* Fix processed_images dimensions
* latents.shape -> latents_shape
* Fix typo
* Remove "static" comment
* Remove unnecessary optional types in _generate
* Apply doc-builder code style.
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
2022-12-29 18:56:03 +01:00
Suraj Patil
9ea7052f0e
[textual inversion] add gradient checkpointing and small fixes. ( #1848 )
...
Co-authored-by: Henrik Forstén <henrik.forsten@gmail.com>
* update TI script
* make flake happy
* fix typo
2022-12-29 15:02:29 +01:00
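A minimal sketch of the memory-saving toggle the textual inversion script gains in #1848 above. This is not the script's exact code; the checkpoint name is illustrative and the two models stand in for the ones the script trains.

```python
from diffusers import UNet2DConditionModel
from transformers import CLIPTextModel

# Load the sub-models that the textual inversion script fine-tunes.
unet = UNet2DConditionModel.from_pretrained(
    "runwayml/stable-diffusion-v1-5", subfolder="unet"
)
text_encoder = CLIPTextModel.from_pretrained(
    "runwayml/stable-diffusion-v1-5", subfolder="text_encoder"
)

# Gradient checkpointing trades extra compute in the backward pass for a
# much smaller activation-memory footprint during training.
unet.enable_gradient_checkpointing()
text_encoder.gradient_checkpointing_enable()
```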
Patrick von Platen
03bf877bf4
[StableDiffusionInpaint] Correct test ( #1859 )
2022-12-29 14:47:56 +01:00
Patrick von Platen
f2e521c499
[Dtype] Align dtype casting behavior with Transformers and Accelerate ( #1725 )
...
* [Dtype] Align automatic dtype
* up
* up
* fix
* re-add accelerate
2022-12-29 14:36:02 +01:00
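One common entry point for the dtype casting behavior that #1725 above aligns with Transformers and Accelerate is loading with an explicit `torch_dtype`. A minimal sketch; the checkpoint name is illustrative.

```python
import torch
from diffusers import StableDiffusionPipeline

# Request half precision at load time; all sub-models are cast accordingly.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
)
print(pipe.unet.dtype)  # torch.float16
```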
Patrick von Platen
debc74f442
[Versatile Diffusion] Fix cross_attention_kwargs ( #1849 )
...
fix versatile
2022-12-28 18:49:04 +01:00
Partho
2ba42aa9b1
[Community Pipeline] MagicMix ( #1839 )
...
* initial
* type hints
* update scheduler type hint
* add to README
* add example generation to README
* v -> mix_factor
* load scheduler from pretrained
2022-12-28 17:02:53 +01:00
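Community pipelines such as MagicMix (#1839 above) are loaded through the `custom_pipeline` hook rather than shipped in the core package. A sketch, assuming the community pipeline identifier is "magic_mix" and using an illustrative base checkpoint; `mix_factor` is the call argument the PR renames from `v`.

```python
import torch
from diffusers import DiffusionPipeline

# Load the community MagicMix pipeline on top of a Stable Diffusion checkpoint.
pipe = DiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    custom_pipeline="magic_mix",   # assumed community pipeline id
    torch_dtype=torch.float16,
).to("cuda")
```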
Will Berman
53c8147afe
unCLIP image variation ( #1781 )
...
* unCLIP image variation
* remove prior comment re: @pcuenca
* stable diffusion -> unCLIP re: @pcuenca
* add copy froms re: @patil-suraj
2022-12-28 14:17:09 +01:00
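A usage sketch for the image-variation entry point added in #1781 above. The checkpoint name and local image path are assumptions, not confirmed by the entry.

```python
import torch
from PIL import Image
from diffusers import UnCLIPImageVariationPipeline

# Assumed karlo image-variation checkpoint.
pipe = UnCLIPImageVariationPipeline.from_pretrained(
    "kakaobrain/karlo-v1-alpha-image-variations", torch_dtype=torch.float16
).to("cuda")

init_image = Image.open("input.png").convert("RGB")
images = pipe(image=init_image, num_images_per_prompt=2).images
```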
kabachuha
cf5265ad41
Allow selecting precision to make Dreambooth class images ( #1832 )
...
* allow selecting precision to make DB class images
addresses #1831
* add prior_generation_precision argument
* correct prior_generation_precision's description
Co-authored-by: Suraj Patil <surajp815@gmail.com>
2022-12-27 19:51:32 +01:00
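A sketch of how a `--prior_generation_precision` value like the one added in #1832 above could select the dtype used when generating Dreambooth class images. The variable names are illustrative, not the script's exact code.

```python
import torch
from diffusers import StableDiffusionPipeline

prior_generation_precision = "fp16"  # one of: no | fp32 | fp16 | bf16
torch_dtype = {
    "fp32": torch.float32,
    "fp16": torch.float16,
    "bf16": torch.bfloat16,
}.get(prior_generation_precision, torch.float32)

# Generate class images with the requested precision.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch_dtype
)
```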
Katsuya
8874027efc
Make xformers optional even if it is available ( #1753 )
...
* Make xformers optional even if it is available
* Raise exception if xformers is used but not available
* Rename use_xformers to enable_xformers_memory_efficient_attention
* Add a note about xformers in README
* Reformat code style
2022-12-27 19:47:50 +01:00
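After #1753 above, xformers attention is opt-in rather than enabled automatically whenever the package is installed. A minimal sketch; the checkpoint name is illustrative.

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Explicitly opt in; raises if xformers is requested but not installed.
pipe.enable_xformers_memory_efficient_attention()
```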
Christopher Friesen
b693aff795
fix: resize transform now preserves aspect ratio ( #1804 )
2022-12-27 15:10:25 +01:00
William Held
8a4c3e50bd
Width was typoed as weight ( #1800 )
...
* Width was typoed as weight
* Run Black
2022-12-27 15:09:21 +01:00
Pedro Cuenca
68e24259af
Avoid duplicating PyTorch + safetensors downloads. ( #1836 )
2022-12-27 14:58:15 +01:00
camenduru
1f1b6c6544
Device to use (e.g. cpu, cuda:0, cuda:1, etc.) ( #1844 )
...
* Device to use (e.g. cpu, cuda:0, cuda:1, etc.)
* "cuda" if torch.cuda.is_available() else "cpu"
2022-12-27 14:42:56 +01:00
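The pattern referenced in #1844 above, picking the device at runtime instead of hard-coding "cuda". A minimal sketch with an illustrative checkpoint.

```python
import torch
from diffusers import DiffusionPipeline

# Fall back to CPU when no CUDA device is available.
device = "cuda" if torch.cuda.is_available() else "cpu"
pipe = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5").to(device)
```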
Pedro Cuenca
df2b548e89
Make safety_checker optional in more pipelines ( #1796 )
...
* Make safety_checker optional in more pipelines.
* Remove inappropriate comment in inpaint pipeline.
* InPaint Test: set feature_extractor to None.
* Remove import
* img2img test: set feature_extractor to None.
* inpaint sd2 test: set feature_extractor to None.
Co-authored-by: Suraj Patil <surajp815@gmail.com>
2022-12-25 21:58:45 +01:00
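With #1796 above, more pipelines accept `safety_checker=None` at load time (a warning is emitted instead of an error). A minimal sketch; the checkpoint name is illustrative.

```python
from diffusers import StableDiffusionInpaintPipeline

# Opt out of the safety checker for this pipeline instance.
pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting",
    safety_checker=None,
)
```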
Daquan Lin
b6d4702301
fix small mistake in annotation: 32 -> 64 ( #1780 )
...
Fix inconsistencies between code and comments in the function 'preprocess'
2022-12-24 19:56:57 +01:00
Suraj Patil
9be94d9c66
[textual_inversion] unwrap_model text encoder before accessing weights ( #1816 )
...
* unwrap_model text encoder before accessing weights
* fix another call
* fix the right call
2022-12-23 16:46:24 +01:00
Patrick von Platen
f2acfb67ac
Remove hardcoded names from PT scripts ( #1778 )
...
* Remove hardcoded names from PT scripts
* Apply suggestions from code review
Co-authored-by: Suraj Patil <surajp815@gmail.com>
2022-12-23 15:36:29 +01:00
Prathik Rao
8aa4372aea
reorder model wrap + bug fix ( #1799 )
...
* reorder model wrap
* bug fix
Co-authored-by: Prathik Rao <prathikrao@microsoft.com>
2022-12-22 14:51:47 +01:00
Pedro Cuenca
6043838971
Fix OOM when using PyTorch with JAX installed. ( #1795 )
...
Don't initialize Jax on startup.
2022-12-21 14:07:24 +01:00
Patrick von Platen
4125756e88
Refactor cross attention and allow mechanism to tweak cross attention function ( #1639 )
...
* first proposal
* rename
* up
* Apply suggestions from code review
* better
* up
* finish
* up
* rename
* correct versatile
* up
* up
* up
* up
* fix
* Apply suggestions from code review
* make style
* Apply suggestions from code review
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* add error message
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
2022-12-20 18:49:05 +01:00
Dhruv Naik
a9190badf7
Add Flax stable diffusion img2img pipeline ( #1355 )
...
* add flax img2img pipeline
* update pipeline
* black format file
* remove argg from get_timesteps
* update get_timesteps
* fix bug: make use of timesteps in the for-loop
* black file
* black, isort, flake8
* update docstring
* update readme
* update flax img2img readme
* update sd pipeline init
* Update src/diffusers/pipelines/stable_diffusion/pipeline_flax_stable_diffusion_img2img.py
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* Update src/diffusers/pipelines/stable_diffusion/pipeline_flax_stable_diffusion_img2img.py
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* update inits
* revert change
* update var name to image, typo
* update readme
* return new t_start instead of modified timestep
* black format
* isort files
* update docs
* fix-copies
* update prng_seed typing
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
2022-12-20 16:25:08 +01:00
Suraj Patil
d07f73003d
Fix num images per prompt unclip ( #1787 )
...
* use repeat_interleave
* fix repeat
* Trigger Build
* don't install accelerate from main
* install released accelerate for mps test
* Remove additional accelerate installation from main.
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
2022-12-20 16:03:38 +01:00
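An illustration of the `repeat_interleave` fix mentioned in #1787 above: each prompt's embedding must be duplicated contiguously so that all images for a given prompt stay grouped together. The shapes are illustrative only.

```python
import torch

num_images_per_prompt = 2
text_embeds = torch.randn(3, 77, 768)  # (batch, seq_len, dim)

# torch.Tensor.repeat would tile the whole batch (p0, p1, p2, p0, p1, p2, ...);
# repeat_interleave keeps copies of the same prompt adjacent (p0, p0, p1, p1, ...).
text_embeds = text_embeds.repeat_interleave(num_images_per_prompt, dim=0)
print(text_embeds.shape)  # torch.Size([6, 77, 768])
```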
Pedro Cuenca
a6fb9407fd
Dreambooth docs: minor fixes ( #1758 )
...
* Section header for in-painting, inference from checkpoint.
* Inference: link to section to perform inference from checkpoint.
* Move Dreambooth in-painting instructions to the proper place.
2022-12-20 08:39:16 +01:00
Patrick von Platen
261a448c6a
Correct hf hub download ( #1767 )
...
* allow model download when no internet
* up
* make style
2022-12-20 02:07:15 +01:00
Simon Kirsten
f106ab40b3
[Flax] Stateless schedulers, fixes and refactors ( #1661 )
...
* [Flax] Stateless schedulers, fixes and refactors
* Remove scheduling_common_flax and some renames
* Update src/diffusers/schedulers/scheduling_pndm_flax.py
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
2022-12-20 01:42:41 +01:00
Emil Bogomolov
d87cc15977
expose polynomial:power and cosine_with_restarts:num_cycles params ( #1737 )
...
* expose polynomial:power and cosine_with_restarts:num_cycles using get_scheduler func, add it to train_dreambooth.py
* fix formatting
* fix style
* Update src/diffusers/optimization.py
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
2022-12-20 01:41:37 +01:00
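A sketch of the knobs exposed in #1737 above through `get_scheduler`; the optimizer and step counts are placeholders.

```python
import torch
from diffusers.optimization import get_scheduler

model = torch.nn.Linear(4, 4)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

# num_cycles applies to "cosine_with_restarts"; "polynomial" takes power= the same way.
lr_scheduler = get_scheduler(
    "cosine_with_restarts",
    optimizer=optimizer,
    num_warmup_steps=100,
    num_training_steps=1000,
    num_cycles=3,
)
```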
Patrick von Platen
e29dc97215
make style
2022-12-20 01:38:45 +01:00
Ilmari Heikkinen
8e4733b3c3
Only test for xformers when enabling them #1773 ( #1776 )
...
* only check for xformers when xformers are enabled
* only test for xformers when enabling them
2022-12-20 01:38:28 +01:00
Prathik Rao
847daf25c7
update train_unconditional_ort.py ( #1775 )
...
* reflect changes
* run make style
Co-authored-by: Prathik Rao <prathikrao@microsoft.com>
Co-authored-by: Prathik Rao <prathikrao@microsoft.com@orttrainingdev7.d32nl1ml4oruzj4qz3bqlggovf.px.internal.cloudapp.net>
2022-12-19 23:58:55 +01:00
Pedro Cuenca
9f8c915a75
[Dreambooth] flax fixes ( #1765 )
...
* Fail if there are fewer images than the effective batch size.
* Remove lr-scheduler arg as it's currently ignored.
* Make guidance_scale work for batch_size > 1.
2022-12-19 20:42:25 +01:00
Anton Lozhkov
8331da4683
Bump to 0.12.0.dev0 ( #1771 )
2022-12-19 18:44:08 +01:00
Anton Lozhkov
f1a32203aa
[Tests] Fix UnCLIP cpu offload tests ( #1769 )
2022-12-19 18:25:08 +01:00
Nan Liu
6f15026330
update composable diffusion for an updated diffusers library ( #1697 )
...
* update composable diffusion for an updated diffusers library
* fix style/quality for code
* Revert "fix style/quality for code"
This reverts commit 71f23497639fe69de00d93cf91edc31b08dcd7a4.
* update style
* reduce memory usage by computing score sequentially
2022-12-19 18:03:40 +01:00
anton-
a5edb981a7
[Patch] Return import for the unclip pipeline loader
2022-12-19 17:56:42 +01:00
anton-
54796b7e43
Release: v0.11.0
2022-12-19 17:43:22 +01:00
Anton Lozhkov
4cb887e0a7
Transformers version req for UnCLIP ( #1766 )
...
* Transformers version req for UnCLIP
* add to the list
2022-12-19 17:11:17 +01:00
Anish Shah
9f657f106d
[Examples] Update train_unconditional.py to include logging argument for Wandb ( #1719 )
...
Update train_unconditional.py
Add logger flag to choose between tensorboard and wandb
2022-12-19 16:57:03 +01:00
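A sketch of what the new logger choice from #1719 above is plumbed into inside train_unconditional.py; the flag selects between "tensorboard" and "wandb". This is not the script's exact code.

```python
from accelerate import Accelerator

args_logger = "wandb"  # or "tensorboard"
accelerator = Accelerator(log_with=args_logger)
accelerator.init_trackers("train_unconditional")
```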
Patrick von Platen
ce1c27adc8
[Revision] Don't recommend using revision ( #1764 )
2022-12-19 16:25:41 +01:00
Patrick von Platen
b267d28566
[Versatile] fix attention mask ( #1763 )
2022-12-19 15:58:39 +01:00
Anton Lozhkov
c7b4acfb37
Add CPU offloading to UnCLIP ( #1761 )
...
* Add CPU offloading to UnCLIP
* use fp32 for testing the offload
2022-12-19 14:44:08 +01:00
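A minimal sketch of the offloading hook #1761 above adds to UnCLIP; the checkpoint name is the public karlo release and is assumed here.

```python
from diffusers import UnCLIPPipeline

pipe = UnCLIPPipeline.from_pretrained("kakaobrain/karlo-v1-alpha")
# Submodules are moved to the GPU only while they run, then offloaded back to CPU.
pipe.enable_sequential_cpu_offload()

image = pipe("a shiba inu wearing a beret").images[0]
```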
Suraj Patil
be38b2d711
[UnCLIPPipeline] fix num_images_per_prompt ( #1762 )
...
duplicate mask for num_images_per_prompt
2022-12-19 14:32:46 +01:00
Anton Lozhkov
32a5d70c42
Support attn2==None for xformers ( #1759 )
2022-12-19 12:43:30 +01:00
Patrick von Platen
429e5449c1
Add attention mask to unclip ( #1756 )
...
* Remove bogus file
* [Unclip] Add efficient attention
* [Unclip] Add efficient attention
2022-12-19 12:10:46 +01:00
Anton Lozhkov
dc7cd893fd
Add resnet_time_scale_shift to VD layers ( #1757 )
2022-12-19 12:01:46 +01:00
Mikołaj Siedlarek
8890758823
Correct help text for scheduler_type flag in scripts. ( #1749 )
2022-12-19 11:27:23 +01:00
Will Berman
b25843e799
unCLIP docs ( #1754 )
...
* [unCLIP docs] markdown
* [unCLIP docs] UnCLIPPipeline
2022-12-19 10:27:32 +01:00
Will Berman
830a9d1f01
[fix] pipeline_unclip generator ( #1751 )
...
* [fix] pipeline_unclip generator
pass generator to all schedulers
* fix fast tests test data
2022-12-19 10:27:18 +01:00
Will Berman
2dcf64b72a
kakaobrain unCLIP ( #1428 )
...
* [wip] attention block updates
* [wip] unCLIP unet decoder and super res
* [wip] unCLIP prior transformer
* [wip] scheduler changes
* [wip] text proj utility class
* [wip] UnCLIPPipeline
* [wip] kakaobrain unCLIP convert script
* [unCLIP pipeline] fixes re: @patrickvonplaten
remove callbacks
move denoising loops into call function
* UNCLIPScheduler re: @patrickvonplaten
Revert changes to DDPMScheduler. Make UNCLIPScheduler, a modified
DDPM scheduler with changes to support karlo
* mask -> attention_mask re: @patrickvonplaten
* [DDPMScheduler] remove leftover change
* [docs] PriorTransformer
* [docs] UNet2DConditionModel and UNet2DModel
* [nit] UNCLIPScheduler -> UnCLIPScheduler
matches existing unclip naming better
* [docs] SchedulingUnCLIP
* [docs] UnCLIPTextProjModel
* refactor
* finish licenses
* rename all to attention_mask and prep in models
* more renaming
* don't expose unused configs
* final renaming fixes
* remove x attn mask when not necessary
* configure kakao script to use new class embedding config
* fix copies
* [tests] UnCLIPScheduler
* finish x attn
* finish
* remove more
* rename condition blocks
* clean more
* Apply suggestions from code review
* up
* fix
* [tests] UnCLIPPipelineFastTests
* remove unused imports
* [tests] UnCLIPPipelineIntegrationTests
* correct
* make style
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2022-12-18 15:15:30 -08:00
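A minimal usage sketch for the UnCLIP pipeline added in #1428 above; "kakaobrain/karlo-v1-alpha" is the karlo checkpoint this port targets and is assumed here.

```python
import torch
from diffusers import UnCLIPPipeline

pipe = UnCLIPPipeline.from_pretrained(
    "kakaobrain/karlo-v1-alpha", torch_dtype=torch.float16
).to("cuda")

image = pipe("a high-resolution photograph of a big red frog on a green leaf").images[0]
image.save("frog.png")
```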
Patrick von Platen
402b9560b2
Remove license accept ticks
2022-12-19 00:10:17 +01:00