* Unify offset configuration in DDIM and PNDM schedulers
* Format
* Add missing variables
* Fix pipeline test
* Update src/diffusers/schedulers/scheduling_ddim.py
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* Default set_alpha_to_one to false
* Format
* Add tests
* Format
* add deprecation warning
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
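The two scheduler options above interact at the ends of the timestep schedule. A minimal sketch, assuming the diffusers-style names `steps_offset` and `set_alpha_to_one` (the constants are illustrative):

```python
import torch

num_train_timesteps, num_inference_steps = 1000, 50
steps_offset = 1  # the unified offset option; 0 keeps the old behavior

# Shift the inference timestep grid by the configured offset.
step_ratio = num_train_timesteps // num_inference_steps
timesteps = (torch.arange(0, num_inference_steps) * step_ratio).flip(0) + steps_offset

alphas_cumprod = torch.cumprod(1.0 - torch.linspace(1e-4, 2e-2, num_train_timesteps), dim=0)
set_alpha_to_one = False  # the new default per the commit above
# The final denoising step has no "previous" alpha, so either clamp it to
# 1.0 or reuse the first entry of the schedule, depending on the flag.
final_alpha_cumprod = torch.tensor(1.0) if set_alpha_to_one else alphas_cumprod[0]
```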
* Fix typos
* Add a typo check action
* Fix a bug
* Changed to a manual typo check for now
Ref: https://github.com/huggingface/diffusers/pull/483#pullrequestreview-1104468010
Co-authored-by: Anton Lozhkov <aglozhkov@gmail.com>
* Removed a confusing message
* Renamed "nin_shortcut" to "in_shortcut"
* Add memo about NIN
Co-authored-by: Anton Lozhkov <aglozhkov@gmail.com>
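For context on the rename: the "NIN" (network-in-network) shortcut is simply a 1x1 convolution on the residual path. A hedged sketch with illustrative channel counts:

```python
import torch.nn as nn

# 1x1 conv that matches channel counts where the residual branch changes
# width; previously attribute-named `nin_shortcut`, now `in_shortcut`.
in_channels, out_channels = 64, 128
in_shortcut = nn.Conv2d(in_channels, out_channels, kernel_size=1, stride=1, padding=0)
```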
* Fix is_onnx_available
Fix: if a user installs onnxruntime-gpu, is_onnx_available() returns False (see the sketch below).
* add more onnxruntime candidates
* Run `make style`
Co-authored-by: anton-l <anton@huggingface.co>
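A sketch of the availability check after the fix: the importable module is always `onnxruntime`, but several distributions can provide it, so multiple package names are probed. The candidate list here is an assumption for illustration:

```python
import importlib.metadata
import importlib.util

candidates = ("onnxruntime", "onnxruntime-gpu", "onnxruntime-directml")

_onnx_available = importlib.util.find_spec("onnxruntime") is not None
if _onnx_available:
    # Accept any installed distribution that provides the module.
    for pkg in candidates:
        try:
            importlib.metadata.version(pkg)
            break
        except importlib.metadata.PackageNotFoundError:
            continue
    else:
        _onnx_available = False
```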
* begin text2img conversion script
* add fn to convert config
* create config if not provided
* update imports and use UNet2DConditionModel
* fix imports, layer names
* fix unet conversion
* add function to convert VAE
* fix vae conversion
* update main
* create text model
* update config creating logic for unet
* fix config creation
* update script to create and save pipeline
* remove unused imports
* fix checkpoint loading
* better name
* save progress
* finish
* up
* up
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
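The heart of the conversion steps above is remapping checkpoint keys to diffusers layer names. A minimal, runnable illustration; the two example keys and the mapping table are assumptions, not the script's actual tables:

```python
key_map = {
    "model.diffusion_model.input_blocks.0.0.weight": "conv_in.weight",
    "first_stage_model.encoder.conv_in.weight": "encoder.conv_in.weight",
}

def remap_keys(state_dict: dict) -> dict:
    # Rename original checkpoint keys to diffusers-style names; keys with
    # no mapping pass through unchanged.
    return {key_map.get(key, key): value for key, value in state_dict.items()}
```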
* First UNet Flax modeling blocks.
Mimic the structure of the PyTorch files.
The model classes themselves need work, depending on what we do about
configuration and initialization.
* Remove FlaxUNet2DConfig class.
* Mark non-config args with `ignore_for_config`.
* Implement `FlaxModelMixin`
* Use new mixins for Flax UNet.
For some reason the configuration is not correctly applied; the
signature of the `__init__` method does not contain all the parameters
by the time it's inspected in `extract_init_dict`.
* Import `FlaxUNet2DConditionModel` if flax is available.
* Rm unused method `framework`
* Update src/diffusers/modeling_flax_utils.py
Co-authored-by: Suraj Patil <surajp815@gmail.com>
* Indicate types in flax.struct.dataclass as pointed out by @mishig25
Co-authored-by: Mishig Davaadorj <mishig.davaadorj@coloradocollege.edu>
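Per the review note above, fields of a `flax.struct.dataclass` carry type annotations; for example, matching the UNet output class used later:

```python
from flax import struct
import jax.numpy as jnp

@struct.dataclass
class FlaxUNet2DConditionOutput:
    # Annotated field, as requested in the review comment.
    sample: jnp.ndarray
```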
* Fix typo in transformer block.
* make style
* some more changes
* make style
* Add comment
* Update src/diffusers/modeling_flax_utils.py
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* Rm unneeded comment
* Update docstrings
* correct ignore kwargs
* make style
* Update docstring examples
* Make style
* Style: remove empty line.
* Apply style (after upgrading black from pinned version)
* Remove some commented code and unused imports.
* Add init_weights (not yet in use until #513).
* Trickle down deterministic to blocks.
* Rename q, k, v according to the latest PyTorch version.
Note that weights were exported with the old names, so we need to be
careful.
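One way to stay careful with old checkpoints is to remap the dotted parameter paths on load; a hedged sketch, not the code actually shipped:

```python
old_to_new = {"q": "query", "k": "key", "v": "value"}

def remap_attention_keys(state_dict: dict) -> dict:
    # Checkpoints exported under the old q/k/v names are rewritten so they
    # load against the renamed modules.
    remapped = {}
    for name, tensor in state_dict.items():
        parts = [old_to_new.get(part, part) for part in name.split(".")]
        remapped[".".join(parts)] = tensor
    return remapped
```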
* Flax UNet docstrings, default props as in PyTorch.
* Fix minor typos in PyTorch docstrings.
* Use FlaxUNet2DConditionOutput as output from UNet.
* make style
Co-authored-by: Mishig Davaadorj <dmishig@gmail.com>
Co-authored-by: Mishig Davaadorj <mishig.davaadorj@coloradocollege.edu>
Co-authored-by: Suraj Patil <surajp815@gmail.com>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
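The `init_weights` hook mentioned above follows the usual Flax pattern: parameters materialize from a PRNG key plus dummy inputs. A sketch with assumed shapes:

```python
import jax
import jax.numpy as jnp

def init_unet_weights(module, seed: int = 0):
    # Dummy inputs drive Flax's lazy shape inference; `module.init`
    # returns the parameter pytree. Shapes here are assumptions.
    rng = jax.random.PRNGKey(seed)
    sample = jnp.zeros((1, 4, 32, 32), dtype=jnp.float32)
    timesteps = jnp.ones((1,), dtype=jnp.int32)
    encoder_hidden_states = jnp.zeros((1, 77, 768), dtype=jnp.float32)
    return module.init(rng, sample, timesteps, encoder_hidden_states)
```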
* removed beta from state, since it never changes
* fix typos in docs
* removed unused var
* initial ddim flax scheduler
* import
* added dummy objects
* fix style
* fix typo
* docs
* fix typo in comment
* set return type
* added flax ddpm
* fix style
* remake
* pass PRNG key as argument and split before use
* fix doc string
* use config
* added flax Karras VE scheduler
* make style
* fix dummy
* fix ndarray type annotation
* use `replace`, which returns a new state
* added lms_discrete scheduler
* use self.config
* add_noise needs state
* use config
* use config
* docstring
* added flax score sde ve
* fix imports
* fix typos
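The Flax scheduler commits above share one pattern: immutable state updated via `.replace`, constants such as beta kept in the config rather than in state, and an explicit PRNG key split before use. A composite sketch with illustrative names:

```python
import jax
import jax.numpy as jnp
from flax import struct

@struct.dataclass
class SchedulerState:
    # Betas live in the config, not in state, since they never change.
    timesteps: jnp.ndarray

def set_timesteps(state: SchedulerState, num_steps: int) -> SchedulerState:
    # `.replace` never mutates; it returns a fresh state object.
    return state.replace(timesteps=jnp.arange(num_steps)[::-1])

def add_noise(state: SchedulerState, sample: jnp.ndarray, key):
    key, noise_key = jax.random.split(key)  # split before every use
    return sample + jax.random.normal(noise_key, sample.shape), key
```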
* add different method for sliced attention
* Update src/diffusers/models/attention.py
* Apply suggestions from code review
* Update src/diffusers/models/attention.py
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
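Sliced attention trades a little speed for a much smaller peak-memory footprint by attending over chunks of the batch-heads dimension. A simplified sketch (slice size and shapes are assumptions):

```python
import torch

def sliced_attention(query, key, value, slice_size: int):
    # Never materialize the full (B, N, N) attention matrix at once;
    # process `slice_size` batch rows per iteration instead.
    scale = query.shape[-1] ** -0.5
    out = torch.empty_like(query)
    for start in range(0, query.shape[0], slice_size):
        sl = slice(start, start + slice_size)
        attn = torch.softmax(query[sl] @ key[sl].transpose(-1, -2) * scale, dim=-1)
        out[sl] = attn @ value[sl]
    return out
```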
* initial attempt at solving
* fix pndm power of 3 inference_step
* add power of 3 test
* fix index in pndm test, remove ddim test
* add comments, change to round()
* update expected results of slow tests
* relax sum and mean tests
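One plausible reading of the `round()` change above, with assumed numbers: truncating the train/inference step ratio drifts whenever the counts do not divide evenly (powers of 3 being the reported case), while rounding keeps the grid even:

```python
num_train_timesteps, num_inference_steps = 1000, 27  # 27 = 3**3
ratio = num_train_timesteps / num_inference_steps
# round() instead of int() avoids the off-by-one drift for these counts.
timesteps = [round(i * ratio) for i in range(num_inference_steps)]
```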
* Print shapes when reporting exception
* formatting
* fix sentence
* relax test_stable_diffusion_fast_ddim for gpu fp16
* relax flaky tests on GPU
* added comment on large tolerances
* black
* format
* set scheduler seed
* added generator
* use np.isclose
* set num_inference_steps to 50
* fix deprecation warning
* update expected_slice
* preprocess if image
* updated expected results
* updated expected from CI
* pass generator to VAE
* undo change back to orig
* use original
* revert back the expected on cpu
* revert back values for CPU
* more undo
* update result after using gen
* update mean
* set generator for mps
* update expected on CI server
* undo
* use new seed every time
* cpu manual seed
* reduce num_inference_steps
* style
* use generator for randn
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
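The de-flaking commits above converge on one pattern: seed a `torch.Generator`, thread it through every `randn`, and compare slices with `np.isclose` under a relaxed tolerance. A sketch with placeholder shapes and tolerances:

```python
import numpy as np
import torch

generator = torch.manual_seed(0)  # seeds and returns the default Generator
sample = torch.randn(1, 4, 8, 8, generator=generator)

# Compare a stored slice with a loose atol, as the GPU/fp16 tests now do.
expected_slice = sample.flatten()[:5].numpy()
assert np.isclose(sample.flatten()[:5].numpy(), expected_slice, atol=1e-2).all()
```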
* renamed variable names
q -> query
k -> key
v -> value
b -> batch
c -> channel
h -> height
w -> width
* rename variable names
missed some in the initial commit
* renamed more variable names
As per code review suggestions, renamed x -> hidden_states and x_in -> residual
* fixed minor typo
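For illustration, the rename in practice (shapes assumed):

```python
import torch

hidden_states = torch.randn(2, 4, 8, 8)               # previously `x`
batch, channel, height, width = hidden_states.shape   # previously b, c, h, w
```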