* Modify UNet2DConditionModel
- allow skipping mid_block
- add a `norm_group_size` argument so that we can set the `num_groups` for group norm as `num_channels // norm_group_size` (see the sketch after this list)
- allow user to set dimension for the timestep embedding (`time_embed_dim`)
- the kernel_size for `conv_in` and `conv_out` is now configurable
- add random fourier feature layer (`GaussianFourierProjection`) for `time_proj`
- allow the user to sum the time and class embeddings before passing them through the projection layer together: `time_embedding(t_emb + class_label)`
- added 2 arguments `attn1_types` and `attn2_types`
* currently we have the argument `only_cross_attention`: when it's set to `True`, the `BasicTransformerBlock` gets 2 cross-attention layers; otherwise we
get a self-attention followed by a cross-attention; in k-upscaler, we need blocks that include just one cross-attention, or self-attention -> cross-attention;
so I added `attn1_types` and `attn2_types` to the unet's argument list to allow the user to specify the attention types for the 2 positions in each block; note that I still kept
the `only_cross_attention` argument for the unet for easy configuration, but it is converted to `attn1_type` and `attn2_type` when passed down to the down blocks
- the position of downsample layer and upsample layer is now configurable
- in the k-upscaler unet, there is only one skip connection per up/down block (instead of one per layer as in the stable diffusion unet); added `skip_freq = "block"` to support
this use case
- if the user passes an `attention_mask` to the unet, it will prepare the mask and pass a flag to the cross attention processor to skip the `prepare_attention_mask` step
inside the cross attention block
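A minimal sketch of the `norm_group_size` idea referenced above (hypothetical helper, not the actual diffusers code):

```python
# Hypothetical helper, not the diffusers API: derive `num_groups` for
# GroupNorm from `norm_group_size` when it is given.
import torch.nn as nn

def make_group_norm(num_channels, norm_group_size=None, num_groups=32):
    if norm_group_size is not None:
        assert num_channels % norm_group_size == 0
        num_groups = num_channels // norm_group_size  # e.g. 512 // 32 -> 16 groups
    return nn.GroupNorm(num_groups=num_groups, num_channels=num_channels)
```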
* add up/down blocks for k-upscaler
* modify CrossAttention class
- make the `dropout` layer in `to_out` optional
- `use_conv_proj` - use conv instead of linear for all projection layers (i.e. `to_q`, `to_k`, `to_v`, `to_out`) whenever possible; note that when it is used for cross
attention, `to_k` and `to_v` have to be linear because the `encoder_hidden_states` is not 2d
- `cross_attention_norm` - add an optional layernorm on `encoder_hidden_states`
- `attention_dropout`: add an optional dropout on the attention score (both options are sketched below)
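Illustrative sketch of the two options above; the class is not the real diffusers CrossAttention, only the argument names mirror the list:

```python
import torch
import torch.nn as nn

class TinyCrossAttention(nn.Module):
    def __init__(self, dim, cross_attention_norm=False, attention_dropout=0.0):
        super().__init__()
        self.norm_cross = nn.LayerNorm(dim) if cross_attention_norm else None
        self.attn_dropout = nn.Dropout(attention_dropout)
        self.to_q, self.to_k, self.to_v = (nn.Linear(dim, dim) for _ in range(3))

    def forward(self, hidden_states, encoder_hidden_states):
        if self.norm_cross is not None:  # `cross_attention_norm`
            encoder_hidden_states = self.norm_cross(encoder_hidden_states)
        q = self.to_q(hidden_states)
        k = self.to_k(encoder_hidden_states)
        v = self.to_v(encoder_hidden_states)
        scores = q @ k.transpose(-1, -2) / q.shape[-1] ** 0.5
        probs = self.attn_dropout(scores.softmax(dim=-1))  # `attention_dropout`
        return probs @ v
```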
* adapt BasicTransformerBlock
- add an ada group norm layer to condition the attention input on the timestep embedding
- allow skipping the FeedForward layer in between the attentions
- replaced the `only_cross_attention` argument with `attn1_type` and `attn2_type` for more flexible configuration
* update timestep embedding: add new `act_fn` gelu and an optional `act_2`
* modify ResnetBlock2D
- refactored with the AdaGroupNorm class (the timestep scale-shift normalization; sketched below)
- add `mid_channel` argument - allow the first conv to have a different output dimension from the second conv
- add an option to use AdaGroupNorm on the input instead of group norm
- add an option to add a dropout layer after each conv
- allow user to set the bias in conv_shortcut (needed for k-upscaler)
- add gelu
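Sketch of the timestep scale-shift normalization referenced above; names are illustrative, not the exact diffusers class:

```python
import torch
import torch.nn as nn

class AdaGroupNormSketch(nn.Module):
    def __init__(self, embedding_dim, num_channels, num_groups):
        super().__init__()
        self.linear = nn.Linear(embedding_dim, 2 * num_channels)
        self.norm = nn.GroupNorm(num_groups, num_channels, affine=False)

    def forward(self, x, emb):
        # Project the timestep embedding to a per-channel scale and shift.
        scale, shift = self.linear(emb).chunk(2, dim=1)
        return self.norm(x) * (1 + scale[:, :, None, None]) + shift[:, :, None, None]
```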
* add conversion script for k-upscaler unet
* add pipeline
* fix attention mask
* fix a typo
* fix a bug
* make sure model can be used with GPU
* make pipeline work with fp16
* fix an error in BasicTransformerBlock
* make style
* fix typo
* some more fixes
* up
* up
* correct more
* some clean-up
* clean time proj
* up
* up
* more changes
* remove the upcast_attention=True from unet config
* remove attn1_types, attn2_types etc
* fix
* revert incorrect changes up/down samplers
* make style
* remove outdated files
* Apply suggestions from code review
* attention refactor
* refactor cross attention
* Apply suggestions from code review
* update
* up
* update
* Apply suggestions from code review
* finish
* Update src/diffusers/models/cross_attention.py
* more fixes
* up
* up
* up
* finish
* more corrections of conversion state
* act_2 -> act_2_fn
* remove dropout_after_conv from ResnetBlock2D
* make style
* simplify KAttentionBlock
* add fast test for latent upscaler pipeline
* add slow test
* slow test fp16
* make style
* add doc string for pipeline_stable_diffusion_latent_upscale
* add api doc page for latent upscaler pipeline
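A usage sketch for the latent upscaler pipeline documented above; the checkpoint id is the published one and is an assumption here:

```python
import torch
from diffusers import StableDiffusionLatentUpscalePipeline

# `low_res_latents` would normally come from a base StableDiffusionPipeline
# called with output_type="latent"; random latents stand in here.
low_res_latents = torch.randn(1, 4, 64, 64, dtype=torch.float16).to("cuda")

upscaler = StableDiffusionLatentUpscalePipeline.from_pretrained(
    "stabilityai/sd-x2-latent-upscaler", torch_dtype=torch.float16
).to("cuda")
upscaled = upscaler(prompt="a photo of an astronaut", image=low_res_latents).images[0]
```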
* deprecate attention mask
* clean up embeddings
* simplify resnet
* up
* clean up resnet
* up
* correct more
* up
* up
* improve a bit more
* correct more
* more clean-ups
* Update docs/source/en/api/pipelines/stable_diffusion/latent_upscale.mdx
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* Update docs/source/en/api/pipelines/stable_diffusion/latent_upscale.mdx
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* add docstrings for new unet config
* Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_latent_upscale.py
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_latent_upscale.py
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* # Copied from
* encode the image if not latent
* remove force casting vae to fp32
* fix
* add comments about preconditioning parameters from k-diffusion paper
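The preconditioning coefficients from the k-diffusion (Karras et al., 2022) paper that the comment above refers to; `sigma_data = 0.5` is only an illustrative choice:

```python
def karras_preconditioning(sigma, sigma_data=0.5):
    c_skip = sigma_data**2 / (sigma**2 + sigma_data**2)
    c_out = sigma * sigma_data / (sigma**2 + sigma_data**2) ** 0.5
    c_in = 1 / (sigma**2 + sigma_data**2) ** 0.5
    return c_skip, c_out, c_in
```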
* attn1_type, attn2_type -> add_self_attention
* clean up get_down_block and get_up_block
* fix
* fixed a typo(?) in ada group norm
* update slice attention processor for cross attention
* update slice
* fix fast test
* update the checkpoint
* finish tests
* fix-copies
* fix-copy for modeling_text_unet.py
* make style
* make style
* fix f-string
* Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_latent_upscale.py
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* fix import
* correct changes
* fix resnet
* make fix-copies
* correct euler scheduler
* add missing #copied from for preprocess
* revert
* fix
* fix copies
* Update docs/source/en/api/pipelines/stable_diffusion/latent_upscale.mdx
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* Update docs/source/en/api/pipelines/stable_diffusion/latent_upscale.mdx
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* Update docs/source/en/api/pipelines/stable_diffusion/latent_upscale.mdx
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* Update docs/source/en/api/pipelines/stable_diffusion/latent_upscale.mdx
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* Update src/diffusers/models/cross_attention.py
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_latent_upscale.py
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_latent_upscale.py
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* clean up conversion script
* KDownsample2d, KUpsample2d -> KDownsample2D, KUpsample2D
* more
* Update src/diffusers/models/unet_2d_condition.py
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* remove prepare_extra_step_kwargs
* Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_latent_upscale.py
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_latent_upscale.py
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* fix a typo in timestep embedding
* remove num_image_per_prompt
* fix fast test
* make style + fix-copies
* fix
* fix xformer test
* fix style
* doc string
* make style
* fix-copies
* docstring for time_embedding_norm
* make style
* final finishes
* make fix-copies
* fix tests
---------
Co-authored-by: yiyixuxu <yixu@yis-macbook-pro.lan>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* make tests deterministic
* run slow tests
* prepare for testing
* finish
* refactor
* add print statements
* finish more
* correct some test failures
* more fixes
* set up to correct tests
* more corrections
* up
* fix more
* more prints
* add
* up
* up
* up
* up
* up
* more fixes
* up
* up
* up
* up
* up
* fix more
* up
* up
* clean tests
* up
* up
* up
* more fixes
* Apply suggestions from code review
Co-authored-by: Suraj Patil <surajp815@gmail.com>
* make
* correct
* finish
* finish
Co-authored-by: Suraj Patil <surajp815@gmail.com>
* add text embeds to sd
* add text embeds to sd
* finish tests
* finish
* finish
* make style
* fix tests
* make style
* make style
* up
* better docs
* fix
* fix
* new try
* up
* up
* finish
* added dit model
* import
* initial pipeline
* initial convert script
* initial pipeline
* make style
* raise valueerror
* single function
* rename classes
* use DDIMScheduler
* timesteps embedder
* samples to cpu
* fix var names
* fix numpy type
* use timesteps class for proj
* fix typo
* fix arg name
* flip_sin_to_cos and better var names
* fix C shape calculation
* make style
* remove unused imports
* cleanup
* add back patch_size
* initial dit doc
* typo
* Update docs/source/api/pipelines/dit.mdx
Co-authored-by: Suraj Patil <surajp815@gmail.com>
* added copyright license headers
* added example usage and toc
* fix variable names asserts
* remove comment
* added docs
* fix typo
* upstream changes
* set proper device for drop_ids
* added initial dit pipeline test
* update docs
* fix imports
* make fix-copies
* isort
* fix imports
* get rid of more magic numbers
* fix code when guidance is off
* remove block_kwargs
* cleanup script
* removed to_2tuple
* use FeedForward class instead of another MLP
* style
* work on merging DiTBlock with BasicTransformerBlock
* added missing final_dropout and args to BasicTransformerBlock
* use norm from block
* fix arg
* remove unused arg
* fix call to class_embedder
* use timesteps
* make style
* attn_output gets multiplied
* removed commented code
* use Transformer2D
* use self.is_input_patches
* fix flags
* fixed conversion to use Transformer2DModel
* fixes for pipeline
* remove dit.py
* fix timesteps device
* use randn_tensor and fix fp16 inf.
* timesteps_emb already the right dtype
* fix dit test class
* fix test and style
* fix norm2 usage in vq-diffusion
* added author names to pipeline and ImageNet labels link
* fix tests
* use norm_type as string
* rename dit to transformer
* fix name
* fix test
* set norm_type = "layer" by default
* fix tests
* do not skip common tests
* Update src/diffusers/models/attention.py
Co-authored-by: Suraj Patil <surajp815@gmail.com>
* revert AdaLayerNorm API
* fix norm_type name
* make sure all components are in eval mode
* revert norm2 API
* compact
* finish deprecation
* add slow tests
* remove @
* refactor some stuff
* upload
* Update src/diffusers/pipelines/dit/pipeline_dit.py
* finish more
* finish docs
* improve docs
* finish docs
Co-authored-by: Suraj Patil <surajp815@gmail.com>
Co-authored-by: William Berman <WLBberman@gmail.com>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
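A hedged usage example for the DiT pipeline added above; the checkpoint id follows the published DiT weights and is an assumption here:

```python
import torch
from diffusers import DiTPipeline

pipe = DiTPipeline.from_pretrained("facebook/DiT-XL-2-256", torch_dtype=torch.float16).to("cuda")
class_ids = pipe.get_label_ids(["golden retriever"])  # ImageNet label -> class id
image = pipe(class_labels=class_ids, num_inference_steps=25).images[0]
```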
re: https://github.com/huggingface/diffusers/issues/1857
We relax some of the checks to deal with unclip reproducibility issues, mainly by checking the average pixel difference (measured within 0-255) instead of the max pixel difference (measured within 0-1); a sketch of the relaxed check follows the checklist below.
- [x] add mixin to UnCLIPPipelineFastTests
- [x] add mixin to UnCLIPImageVariationPipelineFastTests
- [x] Move UnCLIPPipeline flags in mixin to base class
- [x] Small MPS fixes for F.pad and F.interpolate
- [x] Made test unCLIP model's dimensions smaller to run tests faster
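Sketch of the relaxed check: assert on the mean absolute pixel difference (0-255 scale) rather than the max difference (0-1 scale). The threshold is an illustrative value, not necessarily the one in the test suite:

```python
import numpy as np

def assert_mean_pixel_difference(image, expected_image, threshold=10):
    image = np.asarray(image, dtype=np.float32)
    expected_image = np.asarray(expected_image, dtype=np.float32)
    avg_diff = np.abs(image - expected_image).mean()
    assert avg_diff < threshold, f"average pixel difference {avg_diff} exceeds {threshold}"
```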
* [Stable Diffusion Img2Img] resize source images to integer multiple of 8 instead of 32
* [Alt Diffusion Img2Img] resize source images to multiple of 8 instead of 32
* [Img2Img] fix AltDiffusion Img2Img resolution test
* [Img2Img] add Stable Diffusion Img2Img resolution test
* [Cycle Diffusion] round resolution to multiples of 8 instead of 32
* [ONNX SD Img2Img] round resolution to multiples of 64 instead of 32
* [SD Depth2Img] round resolution to multiples of 8 instead of 32
* [Repaint] round resolution to multiples of 8 instead of 32
* fix make style
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
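The resizing rule in one line: round each source dimension down to the nearest integer multiple of 8 (64 for the ONNX pipeline). The helper name is hypothetical:

```python
def round_down_to_multiple(value: int, multiple: int = 8) -> int:
    return value - value % multiple

print(round_down_to_multiple(511), round_down_to_multiple(769))  # 504 768
```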
* [Repro] Correct reproducibility
* up
* up
* up
* up
* need better image
* allow conversion from checkpoints without a state dict
* up
* up
* up
* up
* check tensors
* check tensors
* check tensors
* check tensors
* next try
* up
* up
* better name
* up
* up
* Apply suggestions from code review
* correct more
* up
* replace all torch randn
* fix
* correct
* correct
* finish
* fix more
* up
* [Deterministic torch randn] Allow tensors to be generated on CPU
* fix more
* up
* fix more
* up
* Update src/diffusers/utils/torch_utils.py
Co-authored-by: Anton Lozhkov <anton@huggingface.co>
* Apply suggestions from code review
* up
* up
* Apply suggestions from code review
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
Co-authored-by: Anton Lozhkov <anton@huggingface.co>
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
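A hedged sketch of the deterministic-randn idea from the commits above: draw the noise on the generator's device (typically CPU, where seeded draws reproduce across machines), then move it to the target device. This mirrors the spirit of the helper, not its exact code:

```python
import torch

def randn_tensor(shape, generator=None, device=None, dtype=None):
    device = torch.device(device) if device is not None else torch.device("cpu")
    rand_device = generator.device if generator is not None else device
    return torch.randn(shape, generator=generator, device=rand_device, dtype=dtype).to(device)

g = torch.Generator("cpu").manual_seed(0)
noise = randn_tensor((1, 4, 64, 64), generator=g)  # reproducible regardless of target device
```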
* [Unclip] Make sure latents can be reused
* allow one to directly pass embeddings
* up
* make unclip for text work
* finish allowing to pass embeddings
* correct more
* make style
* move files a bit
* more refactors
* fix more
* more fixes
* fix more onnx
* make style
* upload
* fix
* up
* fix more
* up again
* up
* small fix
* Update src/diffusers/__init__.py
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* correct
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* Make safety_checker optional in more pipelines.
* Remove inappropriate comment in inpaint pipeline.
* InPaint Test: set feature_extractor to None.
* Remove import
* img2img test: set feature_extractor to None.
* inpaint sd2 test: set feature_extractor to None.
Co-authored-by: Suraj Patil <surajp815@gmail.com>
* [SD] Make sure batched input works correctly
* up
* up
* up
* up
* up
* up
* fix mask stuff
* up
* up
* more up
* up
* up
* up
* finish
* Apply suggestions from code review
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* add paint by example
* make loading possible
* up
* Update src/diffusers/models/attention.py
* up
* finalize weight structure
* make example work
* make it work
* up
* up
* fix
* del
* add
* update
* Apply suggestions from code review
* correct transformer 2d
* finish
* up
* up
* up
* up
* fix
* Apply suggestions from code review
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* Apply suggestions from code review
* up
* finish
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* add AudioDiffusionPipeline and LatentAudioDiffusionPipeline
* add docs to toc
* fix tests
* fix tests
* fix tests
* fix tests
* fix tests
* Update pr_tests.yml
Fix tests
add colab notebook
[Flax] Fix loading scheduler from subfolder (#1319)
[FLAX] Fix loading scheduler from subfolder
Fix/Enable all schedulers for in-painting (#1331)
* inpaint fix k lms
* onnx as well
* up
Correct path to scheduler (#1322)
* [Examples] Correct path
* up
Avoid nested fix-copies (#1332)
* Avoid nested `# Copied from` statements during `make fix-copies`
* style
Fix img2img speed with LMS-Discrete Scheduler (#896)
Casting `self.sigmas` into a different dtype (the one of `original_samples`) is not advisable. In my img2img pipeline this leads to a long running time in the `integrate.quad` call later on; by long I mean more than 10x slower.
Co-authored-by: Anton Lozhkov <anton@huggingface.co>
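An illustration of the dtype fix described above: keep the full `sigmas` schedule in its original precision and cast only the indexed value to the dtype of `original_samples`, instead of casting the whole tensor:

```python
import torch

sigmas = torch.linspace(0.1, 10.0, 1000)                       # stays float32
original_samples = torch.randn(2, 3, 8, 8, dtype=torch.float16)
sigma = sigmas[42].to(original_samples.dtype)                  # cast one value, not the schedule
noisy = original_samples + sigma * torch.randn_like(original_samples)
```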
Fix the order of casts for onnx inpainting (#1338)
Legacy Inpainting Pipeline for Onnx Models (#1237)
* Add legacy inpainting pipeline compatibility for onnx
* remove commented out line
* Add onnx legacy inpainting test
* Fix slow decorators
* pep8 styling
* isort styling
* dummy object
* ordering consistency
* style
* docstring styles
* Refactor common prompt encoding pattern
* Update tests to permanent repository home
* support all available schedulers until ONNX IO binding is available
Co-authored-by: Anton Lozhkov <aglozhkov@gmail.com>
* updated styling from PR suggested feedback
Co-authored-by: Anton Lozhkov <aglozhkov@gmail.com>
Jax infer support negative prompt (#1337)
* support negative prompts in sd jax pipeline
* pass batched neg_prompt
* only encode when negative prompt is None
Co-authored-by: Juan Acevedo <jfacevedo@google.com>
Update README.md: Minor change to Imagic code snippet, missing dir error (#1347)
Minor change to Imagic Readme
Missing dir causes an error when running the example code.
make style
change the sample model (#1352)
* Update alt_diffusion.mdx
* Update alt_diffusion.mdx
Add bit diffusion [WIP] (#971)
* Create bit_diffusion.py
Bit diffusion based on the paper, arXiv:2208.04202, Chen2022AnalogBG
* adding bit diffusion to new branch
ran tests
* tests
* tests
* tests
* tests
* removed test folders + added to README
* Update README.md
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* move Mel to module in pipeline construction, make librosa optional
* fix imports
* fix copy & paste error in comment
* fix style
* add missing register_to_config
* fix class docstrings
* fix class docstrings
* tweak docstrings
* tweak docstrings
* update slow test
* put trailing commas back
* respect alphabetical order
* remove LatentAudioDiffusion, make vqvae optional
* move Mel from models back to pipelines :-)
* allow loading of pretrained audiodiffusion models
* fix tests
* fix dummies
* remove reference to latent_audio_diffusion in docs
* unused import
* inherit from SchedulerMixin to make loadable
* Apply suggestions from code review
* Apply suggestions from code review
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* feat: switch core pipelines to use image arg
* test: update tests for core pipelines
* feat: switch examples to use image arg
* docs: update docs to use image arg
* style: format code using black and doc-builder
* fix: deprecate use of init_image in all pipelines
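Illustration of the rename: pipelines now take `image`, and `init_image` is deprecated. The checkpoint id and file path are placeholders:

```python
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

pipe = StableDiffusionImg2ImgPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
source = Image.open("sketch.png").convert("RGB").resize((768, 512))
result = pipe(prompt="a fantasy landscape", image=source).images[0]  # was: init_image=
```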
* Flax: start adapting to Stable Diffusion 2
* More changes.
* attention_head_dim can be a tuple.
* Fix typos
* Add simple SD 2 integration test.
Slice values taken from my Ampere GPU.
* Add simple UNet integration tests for Flax.
Note that the expected values are taken from the PyTorch results. This
ensures the Flax and PyTorch versions are not too far off.
* Apply suggestions from code review
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* Typos and style
* Tests: verify jax is available.
* Style
* Make flake happy
* Remove typo.
* Simple Flax SD 2 pipeline tests.
* Import order
* Remove unused import.
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: @camenduru
* Add heun
* Finish first version of heun
* remove bogus
* finish
* finish
* improve
* up
* up
* fix more
* change progress bar
* Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion.py
* finish
* up
* up
* up
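A hedged sketch of the second-order Heun update this scheduler implements (sigma-space formulation; `denoise(x, sigma)` stands in for the model's denoised prediction):

```python
def heun_step(x, sigma, sigma_next, denoise):
    d = (x - denoise(x, sigma)) / sigma                 # derivative at sigma
    x_euler = x + d * (sigma_next - sigma)              # first-order (Euler) proposal
    if sigma_next == 0:                                 # final step: plain Euler
        return x_euler
    d_next = (x_euler - denoise(x_euler, sigma_next)) / sigma_next
    return x + 0.5 * (d + d_next) * (sigma_next - sigma)  # average the two slopes
```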
* StableDiffusionUpscalePipeline
* fix a few things
* make it better
* fix image batching
* run vae in fp32
* fix docstr
* resize to mul of 64
* doc
* remove safety_checker
* add max_noise_level
* fix Copied
* begin tests
* slow tests
* default max_noise_level
* remove kwargs
* doc
* fix
* fix fast tests
* fix fast tests
* no sf
* don't offload vae
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
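Usage sketch for StableDiffusionUpscalePipeline; the checkpoint id matches the released x4 upscaler and is an assumption here, and `noise_level` must stay below the `max_noise_level` added above:

```python
import torch
from diffusers import StableDiffusionUpscalePipeline
from PIL import Image

pipe = StableDiffusionUpscalePipeline.from_pretrained(
    "stabilityai/stable-diffusion-x4-upscaler", torch_dtype=torch.float16
).to("cuda")
low_res = Image.open("low_res.png").convert("RGB").resize((128, 128))
upscaled = pipe(prompt="a white cat", image=low_res, noise_level=20).images[0]
```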
* Adapt ddpm, ddpmsolver to prediction_type.
* Deprecate predict_epsilon in __init__.
* Bring FlaxDDIMScheduler up to date with DDIMScheduler.
* Set prediction_type as an ivar for consistency.
* Convert pipeline_ddpm
* Adapt tests.
* Adapt unconditional training script.
* Adapt BitDiffusion example.
* Add missing kwargs in dpmsolver_multistep
* Ugly workaround to accept deprecated predict_epsilon when loading
schedulers using from_pretrained.
* make style
* Remove import no longer in use.
* Apply suggestions from code review
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* Use config.prediction_type everywhere
* Add a couple of Flax prediction type tests.
* make style
* fix register deprecated arg
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
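A hedged sketch of the deprecation path described above: translate the old boolean `predict_epsilon` into the new `prediction_type` string. The helper is illustrative, not the actual scheduler code:

```python
def resolve_prediction_type(prediction_type="epsilon", predict_epsilon=None):
    if predict_epsilon is not None:  # deprecated flag wins; real code also warns
        prediction_type = "epsilon" if predict_epsilon else "sample"
    return prediction_type
```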
* up
* convert dual unet
* revert dual attn
* adapt for vd-official
* test the full pipeline
* mixed inference
* mixed inference for text2img
* add image prompting
* fix clip norm
* split text2img and img2img
* fix format
* refactor text2img
* mega pipeline
* add optimus
* refactor image var
* wip text_unet
* text unet end to end
* update tests
* reshape
* fix image to text
* add some first docs
* dual guided pipeline
* fix token ratio
* propose change
* dual transformer as a native module
* DualTransformer(nn.Module)
* DualTransformer(nn.Module)
* correct unconditional image
* save-load with mega pipeline
* remove image to text
* up
* up
* fix
* up
* final fix
* remove_unused_weights
* test updates
* save progress
* up
* fix dual prompts
* some fixes
* finish
* style
* finish renaming
* up
* fix
* fix
* fix
* finish
Co-authored-by: anton-l <anton@huggingface.co>
* make sure fp16 runs well
* add fp16 test for superes
* Update src/diffusers/models/unet_2d.py
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* gen on cuda
* always run fast inference test on cpu
* run on cpu
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* Handle batches and Tensors in `prepare_mask_and_masked_image`
* blackify: upgrade `black`
* handle mask as `np.array`
* add docstring
* revert `black` changes with smaller line length
* missing ValueError in docstring
* raise `TypeError` for image as tensor but not mask
* typo in mask shape selection
* check for batch dim
* fix: wrong indentation
* add tests
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
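A compressed sketch of the behavior listed above (not the exact helper): accept PIL images or tensors, add missing batch dims, binarize the mask, and raise TypeError when image is a tensor but mask is not:

```python
import numpy as np
import torch

def prepare_mask_and_masked_image(image, mask):
    if isinstance(image, torch.Tensor):
        if not isinstance(mask, torch.Tensor):
            raise TypeError("`image` is a torch.Tensor but `mask` is not")
        image = image if image.ndim == 4 else image.unsqueeze(0)   # add batch dim
        while mask.ndim < 4:
            mask = mask.unsqueeze(0)
    else:  # PIL fallback
        image = torch.from_numpy(np.array(image)).permute(2, 0, 1)[None] / 127.5 - 1.0
        mask = torch.from_numpy(np.array(mask.convert("L")))[None, None] / 255.0
    mask = (mask >= 0.5).to(image.dtype)            # binarize
    return mask, image * (mask < 0.5)               # zero out the masked region
```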
* begin tests
* fix model ids
* don't use safety checker in tests
* add img2img tests
* fix integration tests
* integration tests
* style
* add sentencepiece in test dep
* quality
* 4 decimal points
* fix im2img test
* increase the tolerance slightly
* add conversion script for vae
* up
* up
* more changes
* push
* up
* finish again
* up
* up
* up
* up
* finish
* up
* up
* up
* Apply suggestions from code review
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
Co-authored-by: Anton Lozhkov <anton@huggingface.co>
Co-authored-by: Suraj Patil <surajp815@gmail.com>
* up
* up
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
Co-authored-by: Anton Lozhkov <anton@huggingface.co>
Co-authored-by: Suraj Patil <surajp815@gmail.com>
* re-add RL model code
* match model forward api
* add register_to_config, pass training tests
* fix tests, update forward outputs
* remove unused code, some comments
* add to docs
* remove extra embedding code
* unify time embedding
* remove conv1d output sequential
* remove sequential from conv1dblock
* style and deleting duplicated code
* clean files
* remove unused variables
* clean variables
* add 1d resnet block structure for downsample
* rename as unet1d
* fix renaming
* rename files
* add get_block(...) api
* unify args for model1d like model2d
* minor cleaning
* fix docs
* improve 1d resnet blocks
* fix tests, remove permutes
* fix style
* add output activation
* rename flax blocks file
* Add Value Function and corresponding example script to Diffuser implementation (#884)
* valuefunction code
* start example scripts
* missing imports
* bug fixes and placeholder example script
* add value function scheduler
* load value function from hub and get best actions in example
* very close to working example
* larger batch size for planning
* more tests
* merge unet1d changes
* wandb for debugging, use newer models
* success!
* turns out we just need more diffusion steps
* run on modal
* merge and code cleanup
* use same api for rl model
* fix variance type
* wrong normalization function
* add tests
* style
* style and quality
* edits based on comments
* style and quality
* remove unused var
* hack unet1d into a value function
* add pipeline
* fix arg order
* add pipeline to core library
* community pipeline
* fix couple shape bugs
* style
* Apply suggestions from code review
Co-authored-by: Nathan Lambert <nathan@huggingface.co>
* update post merge of scripts
* add midblock / outblock architecture
* Pipeline cleanup (#947)
* Apply suggestions from code review
* clean up comments
* convert older script to using pipeline and add readme
* rename scripts
* style, update tests
* delete unet rl model file
* remove imports in src
Co-authored-by: Nathan Lambert <nathan@huggingface.co>
* Update src/diffusers/models/unet_1d_blocks.py
* Update tests/test_models_unet.py
* RL Cleanup v2 (#965)
* add specific vf block and update tests
* style
* Update tests/test_models_unet.py
Co-authored-by: Nathan Lambert <nathan@huggingface.co>
* fix quality in tests
* fix quality style, split test file
* fix checks / tests
* make timesteps closer to main
* unify block API
* unify forward api
* delete lines in examples
* style
* examples style
* all tests pass
* make style
* make dance_diff test pass
* Refactoring RL PR (#1200)
* init file changes
* add import utils
* finish cleaning files, imports
* remove import flags
* clean examples
* fix imports, tests for merge
* update readmes
* hotfix for tests
* quality
* fix some tests
* change defaults
* more mps test fixes
* unet1d defaults
* do not default import experimental
* defaults for tests
* fix tests
* fix-copies
* fix
* changes per Patrick's comments (#1285)
* changes per Patrick's comments
* update conversion script
* fix renaming
* skip more mps tests
* last test fix
* Update examples/rl/README.md
Co-authored-by: Ben Glickenhaus <benglickenhaus@gmail.com>
* Match the generator device to the pipeline for DDPM and DDIM
* style
* fix
* update values
* fix fast tests
* trigger slow tests
* deprecate
* last value fixes
* mps fixes