Patrick von Platen
78744b6a8f
No more use_auth_token=True ( #733 )
* up
* uP
* uP
* make style
* Apply suggestions from code review
* up
* finish
2022-10-05 17:16:15 +02:00
Pierre LeMoine
08d4fb6e9f
[dreambooth] Using already created `Path` in dataset ( #681 )
using already created `Path` in dataset
2022-10-05 12:14:30 +02:00
Suraj Patil
14b9754923
[train_unconditional] fix applying clip_grad_norm_ ( #721 )
fix clip_grad_norm_
2022-10-04 19:04:05 +02:00
Yuta Hayashibe
7e92c5bc73
Fix typos ( #718 )
* Fix typos
* Update examples/dreambooth/train_dreambooth.py
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
2022-10-04 15:22:14 +02:00
Patrick von Platen
f1484b81b0
[Utils] Add deprecate function and move testing_utils under utils ( #659 )
* [Utils] Add deprecate function
* up
* up
* uP
* up
* up
* up
* up
* uP
* up
* fix
* up
* move to deprecation utils file
* fix
* fix
* fix more
2022-10-03 23:44:24 +02:00
Suraj Patil
14f4af8f5b
[dreambooth] fix applying clip_grad_norm_ ( #686 )
fix applying clip grad norm
2022-10-03 10:54:01 +02:00
Suraj Patil
210be4fe71
[examples] update transformers version ( #665 )
update transformers version in example
2022-09-29 11:16:28 +02:00
Suraj Patil
c16761e9d9
[CLIPGuidedStableDiffusion] take the correct text embeddings ( #667 )
take the correct text embeddings
2022-09-28 17:41:34 +02:00
Isamu Isozaki
7f31142c2e
Added script to save during textual inversion training. Issue 524 ( #645 )
* Added script to save during training
* Suggested changes
2022-09-28 17:26:02 +02:00
Suraj Patil
c0c98df9a1
[CLIPGuidedStableDiffusion] remove set_format from pipeline ( #653 )
remove set_format from pipeline
2022-09-27 18:56:47 +02:00
Suraj Patil
e5eed5235b
[dreambooth] update install section ( #650 )
update install section
2022-09-27 17:32:21 +02:00
Suraj Patil
ac665b6484
[examples/dreambooth] don't pass tensor_format to scheduler. ( #649 )
don't pass tensor_format
2022-09-27 17:24:12 +02:00
Kashif Rasul
bd8df2da89
[Pytorch] Pytorch only schedulers ( #534 )
* pytorch only schedulers
* fix style
* remove match_shape
* pytorch only ddpm
* remove SchedulerMixin
* remove numpy from karras_ve
* fix types
* remove numpy from lms_discrete
* remove numpy from pndm
* fix typo
* remove mixin and numpy from sde_vp and ve
* remove remaining tensor_format
* fix style
* sigmas has to be torch tensor
* removed set_format in readme
* remove set format from docs
* remove set_format from pipelines
* update tests
* fix typo
* continue to use mixin
* fix imports
* removed unused imports
* match shape instead of assuming image shapes
* remove import typo
* update call to add_noise
* use math instead of numpy
* fix t_index
* removed commented out numpy tests
* timesteps needs to be discrete
* cast timesteps to int in flax scheduler too
* fix device mismatch issue
* small fix
* Update src/diffusers/schedulers/scheduling_pndm.py
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2022-09-27 15:27:34 +02:00
Zhenhuan Liu
3b747de845
Add training example for DreamBooth. ( #554 )
* Add training example for DreamBooth.
* Fix bugs.
* Update readme and default hyperparameters.
* Reformatting code with black.
* Update for multi-gpu training.
* Apply suggestions from code review
* improve sampling
* fix autocast
* improve sampling more
* fix saving
* actually fix saving
* fix saving
* improve dataset
* fix collate fn
* fix collate_fn
* fix collate fn
* fix key name
* fix dataset
* fix collate fn
* concat batch in collate fn
* add grad ckpt
* add option for 8bit adam
* do two forward passes for prior preservation
* Revert "do two forward passes for prior preservation"
This reverts commit 661ca4677e6dccc4ad596c2ee6ca4baad4159e95.
* add option for prior_loss_weight
* add option for clip grad norm
* add more comments
* update readme
* update readme
* Apply suggestions from code review
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* add docstr for dataset
* update the saving logic
* Update examples/dreambooth/README.md
* remove unused imports
Co-authored-by: Suraj Patil <surajp815@gmail.com>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2022-09-27 15:01:18 +02:00
Abdullah Alfaraj
bb0c5d1595
Fix docs link to train_unconditional.py ( #642 )
the link points to an old location of the train_unconditional.py file
2022-09-27 11:23:09 +02:00
Anton Lozhkov
4f1c989ffb
Add smoke tests for the training examples ( #585 )
* Add smoke tests for the training examples
* upd
* use a dummy dataset
* mark as slow
* cleanup
* Update test cases
* naming
2022-09-21 13:36:59 +02:00
Suraj Patil
8d36d5adb1
Update clip_guided_stable_diffusion.py
2022-09-19 18:03:00 +02:00
Suraj Patil
dc2a1c1d07
[examples/community] add CLIPGuidedStableDiffusion ( #561 )
* add CLIPGuidedStableDiffusion
* add credits
* add readme
* style
* add clip prompt
* fix cond_fn
* fix cond fn
* fix cond fn for lms
2022-09-19 17:29:19 +02:00
Yuta Hayashibe
76d492ea49
Fix typos and add Typo check GitHub Action ( #483 )
* Fix typos
* Add a typo check action
* Fix a bug
* Changed to manual typo check currently
Ref: https://github.com/huggingface/diffusers/pull/483#pullrequestreview-1104468010
Co-authored-by: Anton Lozhkov <aglozhkov@gmail.com>
* Removed a confusing message
* Renamed "nin_shortcut" to "in_shortcut"
* Add memo about NIN
Co-authored-by: Anton Lozhkov <aglozhkov@gmail.com>
2022-09-16 15:36:51 +02:00
Kashif Rasul
b34be039f9
Karras VE, DDIM and DDPM flax schedulers ( #508 )
* beta never changes removed from state
* fix typos in docs
* removed unused var
* initial ddim flax scheduler
* import
* added dummy objects
* fix style
* fix typo
* docs
* fix typo in comment
* set return type
* added flax ddpm
* fix style
* remake
* pass PRNG key as argument and split before use
* fix doc string
* use config
* added flax Karras VE scheduler
* make style
* fix dummy
* fix ndarray type annotation
* replace returns a new state
* added lms_discrete scheduler
* use self.config
* add_noise needs state
* use config
* use config
* docstring
* added flax score sde ve
* fix imports
* fix typos
2022-09-15 15:55:48 +02:00
Patrick von Platen
b2b3b1a8ab
[Black] Update black ( #433 )
* Update black
* update table
2022-09-08 22:10:01 +02:00
Kashif Rasul
44091d8b2a
Score sde ve doc ( #400 )
* initial score_sde_ve docs
* fixed typo
* fix VE term
2022-09-07 18:34:34 +02:00
Suraj Patil
ac84c2fa5a
[textual-inversion] fix saving embeds ( #387 )
fix saving embeds
2022-09-07 15:49:16 +05:30
apolinario
7bd50cabaf
Add colab links to textual inversion ( #375 )
2022-09-06 22:23:02 +05:30
Patrick von Platen
cc59b05635
[ModelOutputs] Replace dict outputs with Dict/Dataclass and allow to return tuples ( #334 )
* add outputs for models
* add for pipelines
* finish schedulers
* better naming
* adapt tests as well
* replace dict access with . access
* make schedulers works
* finish
* correct readme
* make bcp compatible
* up
* small fix
* finish
* more fixes
* more fixes
* Apply suggestions from code review
Co-authored-by: Suraj Patil <surajp815@gmail.com>
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* Update src/diffusers/models/vae.py
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* Adapt model outputs
* Apply more suggestions
* finish examples
* correct
Co-authored-by: Suraj Patil <surajp815@gmail.com>
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
2022-09-05 14:49:26 +02:00
Suraj Patil
55d6453fce
[textual_inversion] use tokenizer.add_tokens to add placeholder_token ( #357 )
use add_tokens
2022-09-05 13:12:49 +05:30
Suraj Patil
30e7c78ac3
Update README.md
2022-09-02 14:29:27 +05:30
Suraj Patil
d0d3e24ec1
Textual inversion ( #266 )
* add textual inversion script
* make the loop work
* make coarse_loss optional
* save pipeline after training
* add arg pretrained_model_name_or_path
* fix saving
* fix gradient_accumulation_steps
* style
* fix progress bar steps
* scale lr
* add argument to accept style
* remove unused args
* scale lr using num gpus
* load tokenizer using args
* add checks when converting init token to id
* improve comments and style
* document args
* more cleanup
* fix default adamw args
* TextualInversionWrapper -> CLIPTextualInversionWrapper
* fix tokenizer loading
* Use the CLIPTextModel instead of wrapper
* clean dataset
* remove commented code
* fix accessing grads for multi-gpu
* more cleanup
* fix saving on multi-GPU
* init_placeholder_token_embeds
* add seed
* fix flip
* fix multi-gpu
* add utility methods in wrapper
* remove ipynb
* don't use wrapper
* don't pass vae and unet to accelerate prepare
* bring back accelerator.accumulate
* scale latents
* use only one progress bar for steps
* push_to_hub at the end of training
* remove unused args
* log some important stats
* store args in tensorboard
* pretty comments
* save the trained embeddings
* move the script up
* add requirements file
* more cleanup
* fix typo
* begin readme
* style -> learnable_property
* keep vae and unet in eval mode
* address review comments
* address more comments
* removed unused args
* add train command in readme
* update readme
2022-09-02 14:23:52 +05:30
Suraj Patil
1b1d6444c6
[train_unconditional] fix gradient accumulation. ( #308 )
fix grad accum
2022-09-01 16:02:15 +02:00
Patrick von Platen
a4d5b59f13
Refactor Pipelines / Community pipelines and add better explanations. ( #257 )
* [Examples readme]
* Improve
* more
* save
* save
* save more
* up
* up
* Apply suggestions from code review
Co-authored-by: Nathan Lambert <nathan@huggingface.co>
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* up
* make deterministic
* up
* better
* up
* add generator to img2img pipe
* save
* make pipelines deterministic
* Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_inpaint.py
Co-authored-by: Anton Lozhkov <anton@huggingface.co>
* apply all changes
* more corrections
* finish
* improve table
* more fixes
* up
* Apply suggestions from code review
Co-authored-by: Suraj Patil <surajp815@gmail.com>
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* Apply suggestions from code review
Co-authored-by: Suraj Patil <surajp815@gmail.com>
* Apply suggestions from code review
Co-authored-by: Suraj Patil <surajp815@gmail.com>
* Apply suggestions from code review
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
Co-authored-by: Suraj Patil <surajp815@gmail.com>
Co-authored-by: Anton Lozhkov <anton@huggingface.co>
* Update src/diffusers/pipelines/README.md
Co-authored-by: Suraj Patil <surajp815@gmail.com>
* add better links
* fix more
* finish
Co-authored-by: Nathan Lambert <nathan@huggingface.co>
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
Co-authored-by: Anton Lozhkov <anton@huggingface.co>
Co-authored-by: Suraj Patil <surajp815@gmail.com>
2022-08-30 18:43:42 +02:00
Anton Lozhkov
efa773afd2
Support K-LMS in img2img ( #270 )
* Support K-LMS in img2img
* Apply review suggestions
2022-08-29 17:17:05 +02:00
Pulkit Mishra
16172c1c7e
Adds missing torch imports to inpainting and image_to_image example ( #265 )
adds missing torch import to example
2022-08-29 10:56:37 +02:00
Evita
28f730520e
Fix typo in README.md ( #260 )
2022-08-26 18:54:45 -07:00
Suraj Patil
5cbed8e0d1
Fix inpainting script ( #258 )
* expand latents before the check, style
* update readme
2022-08-26 21:16:43 +05:30
Logan
bb4d605dfc
add inpainting example script ( #241 )
* add inpainting
* added proper noising of init_latent as recommended by jackloomen (https://github.com/huggingface/diffusers/pull/241#issuecomment-1226283542 )
* move image preprocessing inside pipeline and allow non 512x512 mask
2022-08-26 20:32:46 +05:30
Pedro Cuenca
bfe37f3159
Reproducible images by supplying latents to pipeline ( #247 )
* Accept latents as input for StableDiffusionPipeline.
* Notebook to demonstrate reusable seeds (latents).
* More accurate type annotation
Co-authored-by: Suraj Patil <surajp815@gmail.com>
* Review comments: move to device, raise instead of assert.
* Actually commit the test notebook.
I had mistakenly pushed an empty file instead.
* Adapt notebook to Colab.
* Update examples readme.
* Move notebook to personal repo.
Co-authored-by: Suraj Patil <surajp815@gmail.com>
2022-08-25 19:17:05 +05:30
Suraj Patil
511bd3aaf2
[example/image2image] raise error if strength is not in desired range ( #238 )
raise error if strength is not in desired range
2022-08-23 19:52:52 +05:30
Suraj Patil
4674fdf807
Add image2image example script. ( #231 )
* boom boom
* reorganise examples
* add image2image in example inference
* add readme
* fix example
* update colab url
* Apply suggestions from code review
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* fix init_timestep
* update colab url
* update main readme
* rename readme
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
2022-08-23 16:27:28 +05:30
Anton Lozhkov
eeb9264acd
Support training with a local image folder ( #152 )
* Support training with an image folder
* style
2022-08-03 15:25:00 +02:00
Anton Lozhkov
cde0ed162a
Add a step about accelerate config to the examples ( #130 )
2022-07-22 13:48:26 +02:00
John Haugeland
85244d4a59
Documentation cross-reference ( #127 )
In https://github.com/huggingface/diffusers/issues/124 I incorrectly suggested that the image set creation process was undocumented. In reality, I just hadn't located it. @patrickvonplaten did so for me.
This PR places a hotlink so that people like me can be shoehorned over where they needed to be.
2022-07-21 21:46:15 +02:00
Anton Lozhkov
a487b5095a
Update images
2022-07-21 17:11:36 +02:00
anton-l
a73ae3e5b0
Better default for AdamW
2022-07-21 13:36:16 +02:00
anton-l
06505ba4b4
Less eval steps during training
2022-07-21 11:47:40 +02:00
anton-l
302b86bd0b
Adapt training to the new UNet API
2022-07-21 11:07:21 +02:00
Anton Lozhkov
76f9b52289
Update the training examples ( #102 )
* New unet, gradient accumulation
* Save every n epochs
* Remove find_unused_params, hooray!
* Update examples
* Switch to DDPM completely
2022-07-20 19:51:23 +02:00
Anton Lozhkov
d9316bf8bc
Fix mutable proj_out weight in the Attention layer ( #73 )
* Catch unused params in DDP
* Fix proj_out, add test
2022-07-04 12:36:37 +02:00
Tanishq Abraham
3abf4bc439
EMA model stepping updated to keep track of current step ( #64 )
ema model stepping done automatically now
2022-07-04 11:53:15 +02:00
Anton Lozhkov
8cba133f36
Add the model card template ( #43 )
* add a metrics logger
* fix LatentDiffusionUncondPipeline
* add VQModel in init
* add image logging to tensorboard
* switch manual templates to the modelcards package
* hide ldm example
Co-authored-by: patil-suraj <surajp815@gmail.com>
2022-06-29 15:37:23 +02:00
Patrick von Platen
932ce05d97
cancel einops
2022-06-27 15:39:41 +00:00
anton-l
07ff0abff4
Glide and LDM training experiments
2022-06-27 17:25:59 +02:00
anton-l
1cf7933ea2
Framework-agnostic timestep broadcasting
2022-06-27 17:11:01 +02:00
anton-l
3f9e3d8ad6
add EMA during training
2022-06-27 15:23:01 +02:00
anton-l
c31736a4a4
Merge remote-tracking branch 'origin/main'
# Conflicts:
# src/diffusers/pipelines/pipeline_glide.py
2022-06-22 15:17:10 +02:00
anton-l
7b43035bcb
init text2im script
2022-06-22 15:15:54 +02:00
Anton Lozhkov
33abc79515
Update README.md
2022-06-22 13:52:45 +02:00
anton-l
848c86ca0a
batched forward diffusion step
2022-06-22 13:38:14 +02:00
anton-l
9e31c6a749
refactor GLIDE text2im pipeline, remove classifier_free_guidance
2022-06-21 14:07:58 +02:00
anton-l
71289ba06e
add lr schedule utils
2022-06-21 11:35:56 +02:00
anton-l
0417baf23d
additional hub arguments
2022-06-21 11:21:10 +02:00
anton-l
9c82c32ba7
make style
2022-06-21 10:43:40 +02:00
anton-l
a2117cb797
add push_to_hub
2022-06-21 10:38:34 +02:00
Manuel Romero
57aba1ef50
Fix output path name
2022-06-15 21:45:49 +02:00
Anton Lozhkov
1112699149
add a training examples doc
2022-06-15 16:51:37 +02:00
Patrick von Platen
fb9e37adf6
correct logging
2022-06-15 15:52:23 +02:00
anton-l
84bd65bced
Merge remote-tracking branch 'origin/main'
2022-06-15 14:37:04 +02:00
anton-l
0deeb06aac
better defaults
2022-06-15 14:36:43 +02:00
Patrick von Platen
17c574a16d
remove torchvision dependency
2022-06-15 12:35:47 +02:00
anton-l
cfe6eb1611
Training example parameterization
2022-06-15 11:21:02 +02:00
anton-l
7fe05bb311
Bugfixes for the training example
2022-06-14 18:25:22 +02:00
anton-l
bb30664285
Move the training example
2022-06-14 11:33:24 +02:00
Patrick von Platen
3b8f24525f
upload
2022-06-07 13:24:36 +00:00
Patrick von Platen
fe3137304b
improve
2022-06-06 17:03:41 +02:00
Patrick von Platen
3a5c65d568
finish
2022-06-03 19:11:58 +02:00
Patrick von Platen
417927f554
add some examples to separate sampler and schedules
2022-06-03 19:02:36 +02:00