Use `expand` instead of ones to broadcast tensor.
As suggested by @bes-dev. According to the documentation, this shouldn't
take any memory; it just plays with the strides.
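A minimal sketch of the idea, assuming the broadcast in question is a per-batch timestep tensor (the tensor names are illustrative, not the actual diff):

```python
import torch

timesteps = torch.tensor([10])   # shape (1,)
batch_size = 4

# old: multiplying by ones allocates a fresh (batch_size,) tensor
old = timesteps * torch.ones(batch_size, dtype=timesteps.dtype)

# new: expand only rewrites the stride metadata, so no extra memory is used
new = timesteps.expand(batch_size)

assert torch.equal(old, new)
```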
* type hints: models/vae.py
* modify typings in vae.py
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Anton Lozhkov <anton@huggingface.co>
* [Type hint] scheduling ddim
* apply suggestions from code review
apply suggestions to also add the return type
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* [Type hint] PNDM Schedulers
* ran make style
* updated timesteps type hint
* apply suggestions from code review
* ran make style
* removed unused import
* Use ONNX / Core ML compatible method to broadcast.
Unfortunately, `tile` could not be used either; it's still not compatible
with ONNX.
See #284.
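A rough sketch of the exporter-friendly pattern this change points at, under the assumption that the broadcast is again a per-batch timestep tensor (function name and call site are illustrative):

```python
import torch

def broadcast_timesteps(timesteps: torch.Tensor, sample: torch.Tensor) -> torch.Tensor:
    # broadcast_to / expand / tile are not handled by the ONNX and Core ML
    # exporters here, so fall back to an explicit multiplication by a ones
    # tensor, created on the sample's device so the result stays there too.
    return timesteps * torch.ones(
        sample.shape[0], dtype=timesteps.dtype, device=sample.device
    )

t = torch.tensor([10])
x = torch.randn(4, 3, 64, 64)
print(broadcast_timesteps(t, x).shape)  # torch.Size([4])
```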
* Add comment about why broadcast_to is not used.
Also, apply style to changed files.
* Make sure broadcast remains in same device.
* Fix tqdm and OOM
* tqdm auto
* tqdm is still spamming; try to disable it altogether
* rather just set the pipe config, to keep the global tqdm clean
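Assuming the pipeline exposes a `set_progress_bar_config` helper for this, usage would look roughly like the following (model id is just an example):

```python
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4")

# Silence this pipeline's tqdm bars only, leaving the global tqdm
# configuration (and other libraries' progress bars) untouched.
pipe.set_progress_bar_config(disable=True)
```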
* style
* add textual inversion script
* make the loop work
* make coarse_loss optional
* save pipeline after training
* add arg pretrained_model_name_or_path
* fix saving
* fix gradient_accumulation_steps
* style
* fix progress bar steps
* scale lr
* add argument to accept style
* remove unused args
* scale lr using num gpus
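What "scale lr using num gpus" usually means in accelerate-based scripts is scaling by the effective global batch size; a sketch under that assumption (the exact factors are not taken from the script itself):

```python
def scale_learning_rate(
    base_lr: float,
    gradient_accumulation_steps: int,
    train_batch_size: int,
    num_processes: int,
) -> float:
    # Scale the base LR by the effective global batch size so training behaves
    # similarly regardless of GPU count and accumulation steps.
    return base_lr * gradient_accumulation_steps * train_batch_size * num_processes

# e.g. base LR 1e-4, 4 accumulation steps, batch size 1, 4 GPUs -> 1.6e-3
effective_lr = scale_learning_rate(1e-4, 4, 1, 4)
```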
* load tokenizer using args
* add checks when converting init token to id
* improve comments and style
* document args
* more cleanup
* fix default AdamW args
* TextualInversionWrapper -> CLIPTextualInversionWrapper
* fix tokenizer loading
* Use the CLIPTextModel instead of wrapper
* clean dataset
* remove commented code
* fix accessing grads for multi-gpu
* more cleanup
* fix saving on multi-GPU
* init_placeholder_token_embeds
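A sketch of what the initializer-token checks and `init_placeholder_token_embeds` likely amount to; the token strings, model id, and variable names are illustrative, not the script's exact code:

```python
from transformers import CLIPTextModel, CLIPTokenizer

placeholder_token = "<my-concept>"              # illustrative values
initializer_token = "toy"
model_name = "openai/clip-vit-large-patch14"

tokenizer = CLIPTokenizer.from_pretrained(model_name)
text_encoder = CLIPTextModel.from_pretrained(model_name)

# The placeholder must be a genuinely new token.
if tokenizer.add_tokens(placeholder_token) == 0:
    raise ValueError(f"Tokenizer already contains {placeholder_token!r}.")

# The initializer must map to exactly one existing token id.
init_ids = tokenizer.encode(initializer_token, add_special_tokens=False)
if len(init_ids) != 1:
    raise ValueError("The initializer token must be a single token.")
initializer_token_id = init_ids[0]
placeholder_token_id = tokenizer.convert_tokens_to_ids(placeholder_token)

# Grow the embedding matrix and copy the initializer's vector into the new slot.
text_encoder.resize_token_embeddings(len(tokenizer))
token_embeds = text_encoder.get_input_embeddings().weight.data
token_embeds[placeholder_token_id] = token_embeds[initializer_token_id]
```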
* add seed
* fix flip
* fix multi-gpu
* add utility methods in wrapper
* remove ipynb
* don't use wrapper
* don't pass vae and unet to accelerate prepare
* bring back accelerator.accumulate
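A hedged sketch of the accelerate wiring these two items describe; `text_encoder`, `vae`, `unet`, `optimizer`, `train_dataloader`, and `compute_loss` stand in for the script's actual objects:

```python
from accelerate import Accelerator

accelerator = Accelerator(gradient_accumulation_steps=4)

# Only the trainable text encoder (plus optimizer and dataloader) go through
# prepare(); the frozen vae and unet are simply moved to the right device.
text_encoder, optimizer, train_dataloader = accelerator.prepare(
    text_encoder, optimizer, train_dataloader
)
vae.to(accelerator.device)
unet.to(accelerator.device)

for batch in train_dataloader:
    # accumulate() handles gradient accumulation and gradient sync for us.
    with accelerator.accumulate(text_encoder):
        loss = compute_loss(batch)          # hypothetical loss helper
        accelerator.backward(loss)
        optimizer.step()
        optimizer.zero_grad()
```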
* scale latents
* use only one progress bar for steps
* push_to_hub at the end of training
* remove unused args
* log some important stats
* store args in tensorboard
* pretty comments
* save the trained embeddings
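Presumably only the learned vector is written out here; a sketch under that assumption, reusing the hypothetical names from the embedding-initialization sketch above:

```python
import torch

learned_embeds = (
    accelerator.unwrap_model(text_encoder)
    .get_input_embeddings()
    .weight[placeholder_token_id]
    .detach()
    .cpu()
)
# Keyed by the placeholder token so it can be re-attached to another text encoder.
torch.save({placeholder_token: learned_embeds}, "learned_embeds.bin")
```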
* move the script up
* add requirements file
* more cleanup
* fix typo
* begin readme
* style -> learnable_property
* keep vae and unet in eval mode
* address review comments
* address more comments
* removed unused args
* add train command in readme
* update readme
* Changed variable name from "h" to "hidden_states"
Per issue #198, changed the variable name from "h" to "hidden_states" in the forward function only. I am happy to change any other variable names; please advise on recommended new names.
* Update src/diffusers/models/resnet.py
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>