Patrick von Platen
b2b3b1a8ab
[Black] Update black ( #433 )
* Update black
* update table
2022-09-08 22:10:01 +02:00
Kashif Rasul
44091d8b2a
Score sde ve doc ( #400 )
* initial score_sde_ve docs
* fixed typo
* fix VE term
2022-09-07 18:34:34 +02:00
Suraj Patil
ac84c2fa5a
[textual-inversion] fix saving embeds ( #387 )
fix saving embeds
2022-09-07 15:49:16 +05:30
apolinario
7bd50cabaf
Add colab links to textual inversion ( #375 )
2022-09-06 22:23:02 +05:30
Patrick von Platen
cc59b05635
[ModelOutputs] Replace dict outputs with Dict/Dataclass and allow to return tuples ( #334 )
* add outputs for models
* add for pipelines
* finish schedulers
* better naming
* adapt tests as well
* replace dict access with . access
* make schedulers work
* finish
* correct readme
* make bcp compatible
* up
* small fix
* finish
* more fixes
* more fixes
* Apply suggestions from code review
Co-authored-by: Suraj Patil <surajp815@gmail.com>
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* Update src/diffusers/models/vae.py
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* Adapt model outputs
* Apply more suggestions
* finish examples
* correct
Co-authored-by: Suraj Patil <surajp815@gmail.com>
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
2022-09-05 14:49:26 +02:00
Suraj Patil
55d6453fce
[textual_inversion] use tokenizer.add_tokens to add placeholder_token ( #357 )
use add_tokens
2022-09-05 13:12:49 +05:30
Suraj Patil
30e7c78ac3
Update README.md
2022-09-02 14:29:27 +05:30
Suraj Patil
d0d3e24ec1
Textual inversion ( #266 )
* add textual inversion script
* make the loop work
* make coarse_loss optional
* save pipeline after training
* add arg pretrained_model_name_or_path
* fix saving
* fix gradient_accumulation_steps
* style
* fix progress bar steps
* scale lr
* add argument to accept style
* remove unused args
* scale lr using num gpus
* load tokenizer using args
* add checks when converting init token to id
* improve comments and style
* document args
* more cleanup
* fix default adamw args
* TextualInversionWrapper -> CLIPTextualInversionWrapper
* fix tokenizer loading
* Use the CLIPTextModel instead of wrapper
* clean dataset
* remove commented code
* fix accessing grads for multi-gpu
* more cleanup
* fix saving on multi-GPU
* init_placeholder_token_embeds
* add seed
* fix flip
* fix multi-gpu
* add utility methods in wrapper
* remove ipynb
* don't use wrapper
* don't pass vae and unet to accelerate prepare
* bring back accelerator.accumulate
* scale latents
* use only one progress bar for steps
* push_to_hub at the end of training
* remove unused args
* log some important stats
* store args in tensorboard
* pretty comments
* save the trained embeddings
* move the script up
* add requirements file
* more cleanup
* fix typo
* begin readme
* style -> learnable_property
* keep vae and unet in eval mode
* address review comments
* address more comments
* removed unused args
* add train command in readme
* update readme
2022-09-02 14:23:52 +05:30
Suraj Patil
1b1d6444c6
[train_unconditional] fix gradient accumulation. ( #308 )
fix grad accum
2022-09-01 16:02:15 +02:00
Patrick von Platen
a4d5b59f13
Refactor Pipelines / Community pipelines and add better explanations. ( #257 )
* [Examples readme]
* Improve
* more
* save
* save
* save more
* up
* up
* Apply suggestions from code review
Co-authored-by: Nathan Lambert <nathan@huggingface.co>
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* up
* make deterministic
* up
* better
* up
* add generator to img2img pipe
* save
* make pipelines deterministic
* Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_inpaint.py
Co-authored-by: Anton Lozhkov <anton@huggingface.co>
* apply all changes
* more corrections
* finish
* improve table
* more fixes
* up
* Apply suggestions from code review
Co-authored-by: Suraj Patil <surajp815@gmail.com>
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* Apply suggestions from code review
Co-authored-by: Suraj Patil <surajp815@gmail.com>
* Apply suggestions from code review
Co-authored-by: Suraj Patil <surajp815@gmail.com>
* Apply suggestions from code review
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
Co-authored-by: Suraj Patil <surajp815@gmail.com>
Co-authored-by: Anton Lozhkov <anton@huggingface.co>
* Update src/diffusers/pipelines/README.md
Co-authored-by: Suraj Patil <surajp815@gmail.com>
* add better links
* fix more
* finish
Co-authored-by: Nathan Lambert <nathan@huggingface.co>
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
Co-authored-by: Anton Lozhkov <anton@huggingface.co>
Co-authored-by: Suraj Patil <surajp815@gmail.com>
2022-08-30 18:43:42 +02:00
Anton Lozhkov
efa773afd2
Support K-LMS in img2img ( #270 )
* Support K-LMS in img2img
* Apply review suggestions
2022-08-29 17:17:05 +02:00
Pulkit Mishra
16172c1c7e
Adds missing torch imports to inpainting and image_to_image example ( #265 )
adds missing torch import to example
2022-08-29 10:56:37 +02:00
Evita
28f730520e
Fix typo in README.md ( #260 )
2022-08-26 18:54:45 -07:00
Suraj Patil
5cbed8e0d1
Fix inpainting script ( #258 )
* expand latents before the check, style
* update readme
2022-08-26 21:16:43 +05:30
Logan
bb4d605dfc
add inpainting example script ( #241 )
* add inpainting
* added proper noising of init_latent as recommended by jackloomen (https://github.com/huggingface/diffusers/pull/241#issuecomment-1226283542)
* move image preprocessing inside the pipeline and allow non-512x512 masks
2022-08-26 20:32:46 +05:30
Pedro Cuenca
bfe37f3159
Reproducible images by supplying latents to pipeline ( #247 )
* Accept latents as input for StableDiffusionPipeline.
* Notebook to demonstrate reusable seeds (latents).
* More accurate type annotation
Co-authored-by: Suraj Patil <surajp815@gmail.com>
* Review comments: move to device, raise instead of assert.
* Actually commit the test notebook.
I had mistakenly pushed an empty file instead.
* Adapt notebook to Colab.
* Update examples readme.
* Move notebook to personal repo.
Co-authored-by: Suraj Patil <surajp815@gmail.com>
2022-08-25 19:17:05 +05:30
Suraj Patil
511bd3aaf2
[example/image2image] raise error if strength is not in desired range ( #238 )
raise error if strength is not in desired range
2022-08-23 19:52:52 +05:30
Suraj Patil
4674fdf807
Add image2image example script. ( #231 )
* boom boom
* reorganise examples
* add image2image in example inference
* add readme
* fix example
* update colab url
* Apply suggestions from code review
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* fix init_timestep
* update colab url
* update main readme
* rename readme
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
2022-08-23 16:27:28 +05:30
Anton Lozhkov
eeb9264acd
Support training with a local image folder ( #152 )
* Support training with an image folder
* style
2022-08-03 15:25:00 +02:00
Anton Lozhkov
cde0ed162a
Add a step about accelerate config to the examples ( #130 )
2022-07-22 13:48:26 +02:00
John Haugeland
85244d4a59
Documentation cross-reference ( #127 )
In https://github.com/huggingface/diffusers/issues/124 I incorrectly suggested that the image set creation process was undocumented. In reality, I just hadn't located it. @patrickvonplaten did so for me.
This PR places a hotlink so that people like me can be shoehorned over where they needed to be.
2022-07-21 21:46:15 +02:00
Anton Lozhkov
a487b5095a
Update images
2022-07-21 17:11:36 +02:00
anton-l
a73ae3e5b0
Better default for AdamW
2022-07-21 13:36:16 +02:00
anton-l
06505ba4b4
Less eval steps during training
2022-07-21 11:47:40 +02:00
anton-l
302b86bd0b
Adapt training to the new UNet API
2022-07-21 11:07:21 +02:00
Anton Lozhkov
76f9b52289
Update the training examples ( #102 )
* New unet, gradient accumulation
* Save every n epochs
* Remove find_unused_params, hooray!
* Update examples
* Switch to DDPM completely
2022-07-20 19:51:23 +02:00
Anton Lozhkov
d9316bf8bc
Fix mutable proj_out weight in the Attention layer ( #73 )
* Catch unused params in DDP
* Fix proj_out, add test
2022-07-04 12:36:37 +02:00
Tanishq Abraham
3abf4bc439
EMA model stepping updated to keep track of current step ( #64 )
ema model stepping done automatically now
2022-07-04 11:53:15 +02:00
Anton Lozhkov
8cba133f36
Add the model card template ( #43 )
* add a metrics logger
* fix LatentDiffusionUncondPipeline
* add VQModel in init
* add image logging to tensorboard
* switch manual templates to the modelcards package
* hide ldm example
Co-authored-by: patil-suraj <surajp815@gmail.com>
2022-06-29 15:37:23 +02:00
Patrick von Platen
932ce05d97
cancel einops
2022-06-27 15:39:41 +00:00
anton-l
07ff0abff4
Glide and LDM training experiments
2022-06-27 17:25:59 +02:00
anton-l
1cf7933ea2
Framework-agnostic timestep broadcasting
2022-06-27 17:11:01 +02:00
anton-l
3f9e3d8ad6
add EMA during training
2022-06-27 15:23:01 +02:00
anton-l
c31736a4a4
Merge remote-tracking branch 'origin/main'
# Conflicts:
# src/diffusers/pipelines/pipeline_glide.py
2022-06-22 15:17:10 +02:00
anton-l
7b43035bcb
init text2im script
2022-06-22 15:15:54 +02:00
Anton Lozhkov
33abc79515
Update README.md
2022-06-22 13:52:45 +02:00
anton-l
848c86ca0a
batched forward diffusion step
2022-06-22 13:38:14 +02:00
anton-l
9e31c6a749
refactor GLIDE text2im pipeline, remove classifier_free_guidance
2022-06-21 14:07:58 +02:00
anton-l
71289ba06e
add lr schedule utils
2022-06-21 11:35:56 +02:00
anton-l
0417baf23d
additional hub arguments
2022-06-21 11:21:10 +02:00
anton-l
9c82c32ba7
make style
2022-06-21 10:43:40 +02:00
anton-l
a2117cb797
add push_to_hub
2022-06-21 10:38:34 +02:00
Manuel Romero
57aba1ef50
Fix output path name
2022-06-15 21:45:49 +02:00
Anton Lozhkov
1112699149
add a training examples doc
2022-06-15 16:51:37 +02:00
Patrick von Platen
fb9e37adf6
correct logging
2022-06-15 15:52:23 +02:00
anton-l
84bd65bced
Merge remote-tracking branch 'origin/main'
2022-06-15 14:37:04 +02:00
anton-l
0deeb06aac
better defaults
2022-06-15 14:36:43 +02:00
Patrick von Platen
17c574a16d
remove torchvision dependency
2022-06-15 12:35:47 +02:00
anton-l
cfe6eb1611
Training example parameterization
2022-06-15 11:21:02 +02:00
anton-l
7fe05bb311
Bugfixes for the training example
2022-06-14 18:25:22 +02:00