* beta never changes, removed from state
* fix typos in docs
* removed unused var
* initial ddim flax scheduler
* import
* added dummy objects
* fix style
* fix typo
* docs
* fix typo in comment
* set return type
* added flax ddpm
* fix style
* remake
* pass PRNG key as argument and split before use (see the PRNG sketch after this list)
* fix docstring
* use config
* added flax Karras VE scheduler
* make style
* fix dummy
* fix ndarray type annotation
* replace returns a new state (see the state sketch after this list)
* added lms_discrete scheduler
* use self.config
* add_noise needs state
* use config
* use config
* docstring
* added flax score sde ve
* fix imports
* fix typos
* add textual inversion script
* make the loop work
* make coarse_loss optional
* save pipeline after training
* add arg pretrained_model_name_or_path
* fix saving
* fix gradient_accumulation_steps
* style
* fix progress bar steps
* scale lr
* add argument to accept style
* remove unused args
* scale lr using num gpus (see the LR scaling sketch after this list)
* load tokenizer using args
* add checks when converting init token to id (see the token-check sketch after this list)
* improve comments and style
* document args
* more cleanup
* fix default adamw args
* TextualInversionWrapper -> CLIPTextualInversionWrapper
* fix tokenizer loading
* Use the CLIPTextModel instead of wrapper
* clean dataset
* remove commented code
* fix accessing grads for multi-gpu
* more cleanup
* fix saving on multi-GPU
* init_placeholder_token_embeds
* add seed
* fix flip
* fix multi-gpu
* add utility methods in wrapper
* remove ipynb
* don't use wrapper
* don't pass vae and unet to accelerate prepare
* bring back accelerator.accumulate (see the accumulation sketch after this list)
* scale latents (see the latent scaling sketch after this list)
* use only one progress bar for steps
* push_to_hub at the end of training
* remove unused args
* log some important stats
* store args in tensorboard
* pretty comments
* save the trained embeddings (see the embedding sketch after this list)
* move the script up
* add requirements file
* more cleanup
* fix typo
* begin readme
* style -> learnable_property
* keep vae and unet in eval mode
* address review comments
* address more comments
* removed unused args
* add train command in readme
* update readme
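
"pass PRNG key as argument and split before use" refers to JAX's explicit randomness handling. A minimal sketch of the pattern, assuming an illustrative step function and shapes rather than the schedulers' actual API:

```python
import jax
import jax.numpy as jnp

def noisy_step(sample, key):
    # Split before use: the child key feeds this draw, the parent key is
    # returned so the caller never reuses the same randomness twice.
    key, noise_key = jax.random.split(key)
    noise = jax.random.normal(noise_key, shape=sample.shape)
    return sample + noise, key

key = jax.random.PRNGKey(0)
sample = jnp.zeros((4, 64, 64))
sample, key = noisy_step(sample, key)
```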
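"replace returns a new state" points at the Flax convention that scheduler state is an immutable dataclass, so every update returns a fresh instance instead of mutating in place. A sketch with a hypothetical state class:

```python
import jax.numpy as jnp
from flax import struct

@struct.dataclass
class SchedulerState:  # hypothetical state, for illustration only
    step: int
    sigma: jnp.ndarray

state = SchedulerState(step=0, sigma=jnp.ones(()))
new_state = state.replace(step=state.step + 1)  # returns a copy
assert state.step == 0 and new_state.step == 1  # original is untouched
```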
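"scale lr" / "scale lr using num gpus" describe scaling the base learning rate by the effective global batch size. A sketch of the common pattern with 🤗 Accelerate; the variable names stand in for parsed CLI arguments and are assumptions, not the script's exact flags:

```python
from accelerate import Accelerator

accelerator = Accelerator()

# Stand-ins for parsed CLI args (names assumed).
base_learning_rate = 5e-4
gradient_accumulation_steps = 4
train_batch_size = 1
scale_lr = True

if scale_lr:
    # Effective batch = per-device batch * accumulation steps * process
    # count, so the LR grows proportionally when more GPUs are used.
    learning_rate = (
        base_learning_rate
        * gradient_accumulation_steps
        * train_batch_size
        * accelerator.num_processes
    )
```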
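"add checks when converting init token to id" most likely validates that the initializer word maps to a single token in the CLIP vocabulary, since exactly one embedding row seeds the new placeholder. A sketch; the checkpoint name and error message are examples:

```python
from transformers import CLIPTokenizer

tokenizer = CLIPTokenizer.from_pretrained("openai/clip-vit-large-patch14")

initializer_token = "toy"  # example initializer word
token_ids = tokenizer.encode(initializer_token, add_special_tokens=False)
# Exactly one embedding row seeds the new placeholder token, so
# multi-token initializers are rejected up front.
if len(token_ids) > 1:
    raise ValueError("The initializer token must be a single token.")
initializer_token_id = token_ids[0]
```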
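"bring back accelerator.accumulate" refers to Accelerate's context manager for gradient accumulation. A minimal self-contained sketch; the toy model and loss are placeholders for the real training objects:

```python
import torch
from accelerate import Accelerator

accelerator = Accelerator(gradient_accumulation_steps=4)

model = torch.nn.Linear(8, 8)  # toy stand-in for the text encoder
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
model, optimizer = accelerator.prepare(model, optimizer)

for batch in (torch.randn(2, 8) for _ in range(8)):
    # Inside `accumulate`, gradient sync and the optimizer step are only
    # applied once every `gradient_accumulation_steps` iterations.
    with accelerator.accumulate(model):
        loss = model(batch).pow(2).mean()
        accelerator.backward(loss)
        optimizer.step()
        optimizer.zero_grad()
```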
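"scale latents" refers to multiplying the VAE's latents by Stable Diffusion's scaling constant (0.18215) before they reach the UNet. A sketch; the checkpoint ID is an example and the random tensor stands in for a real image batch:

```python
import torch
from diffusers import AutoencoderKL

vae = AutoencoderKL.from_pretrained(
    "CompVis/stable-diffusion-v1-4", subfolder="vae"
)

pixel_values = torch.randn(1, 3, 512, 512)  # stand-in for images in [-1, 1]
with torch.no_grad():
    latents = vae.encode(pixel_values).latent_dist.sample()
latents = latents * 0.18215  # Stable Diffusion's latent scaling factor
```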
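"save the trained embeddings" persists only the learned placeholder vector rather than the whole text encoder. A sketch of the idea; the placeholder token, checkpoint, and file name are illustrative:

```python
import torch
from transformers import CLIPTextModel, CLIPTokenizer

tokenizer = CLIPTokenizer.from_pretrained("openai/clip-vit-large-patch14")
text_encoder = CLIPTextModel.from_pretrained("openai/clip-vit-large-patch14")

placeholder_token = "<my-concept>"  # illustrative placeholder string
tokenizer.add_tokens(placeholder_token)
text_encoder.resize_token_embeddings(len(tokenizer))
placeholder_token_id = tokenizer.convert_tokens_to_ids(placeholder_token)

# After training, persist just the one learned row of the embedding matrix.
learned_embeds = text_encoder.get_input_embeddings().weight[placeholder_token_id]
torch.save({placeholder_token: learned_embeds.detach().cpu()}, "learned_embeds.bin")
```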