
🧨 Diffusers Examples

Diffusers examples are a collection of scripts to demonstrate how to effectively use the diffusers library for a variety of use cases.

Note: If you are looking for official examples on how to use diffusers for inference, please have a look at src/diffusers/pipelines.

Our examples aspire to be self-contained, easy-to-tweak, beginner-friendly and for one-purpose-only. More specifically, this means:

  • Self-contained: An example script shall only depend on "pip-install-able" Python packages that are listed in a requirements.txt file. Example scripts shall not depend on any local files. This means that one can simply download an example script, e.g. train_unconditional.py, install the required dependencies, e.g. from requirements.txt, and execute the script.
  • Easy-to-tweak: While we strive to present as many use cases as possible, the example scripts are just that - examples. It is expected that they won't work out of the box on your specific problem and that you will need to change a few lines of code to adapt them to your needs. To help you with that, most of the examples fully expose the preprocessing of the data and the training loop, allowing you to tweak and edit them as required.
  • Beginner-friendly: We do not aim to provide state-of-the-art training scripts for the newest models, but rather examples that can be used as a way to better understand diffusion models and how to use them with the diffusers library. We often purposefully leave out certain state-of-the-art methods if we consider them too complex for beginners.
  • One-purpose-only: Examples should show one task and one task only. Even if tasks are very similar from a modeling point of view (e.g. image super-resolution and image modification tend to use the same model and training method), we want each example to showcase only one task to keep it as readable and easy to understand as possible.
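As a hypothetical sketch of this philosophy, most example scripts share the same shape: a small argument parser followed by a fully exposed training loop. The flag names below are modeled on arguments commonly found in diffusers example scripts, but the loop body is only a placeholder, not the actual implementation:

```python
import argparse


def parse_args(argv=None):
    # Flags modeled on those commonly found in diffusers example scripts
    # (names here are illustrative, not an exact copy of any one script).
    parser = argparse.ArgumentParser(description="Sketch of an example training script")
    parser.add_argument("--pretrained_model_name_or_path", type=str, required=True,
                        help="Model identifier on the Hub or a local path")
    parser.add_argument("--train_batch_size", type=int, default=16)
    parser.add_argument("--gradient_accumulation_steps", type=int, default=1)
    parser.add_argument("--learning_rate", type=float, default=1e-4)
    return parser.parse_args(argv)


def train(args):
    # Placeholder loop: real example scripts expose the data preprocessing
    # and the optimization step right here, so you can edit them directly.
    for step in range(3):
        print(f"step {step}: would train with lr={args.learning_rate}")


args = parse_args(["--pretrained_model_name_or_path", "google/ddpm-cifar10-32"])
train(args)
```

Because the whole loop lives in the script itself, adapting an example to your data usually means editing a few dozen lines directly rather than configuring a framework.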

We provide official examples that cover the most popular tasks of diffusion models. Official examples are actively maintained by the diffusers maintainers, and we try to rigorously follow our example philosophy as defined above. If you feel like another important example should exist, we are more than happy to welcome a Feature Request or, even better, a Pull Request from you!

Training examples show how to pretrain or fine-tune diffusion models for a variety of tasks. Currently we support:

| Task | 🤗 Accelerate | 🤗 Datasets | Colab |
|---|---|---|---|
| Unconditional Image Generation | ✅ | ✅ | Open In Colab |

Community

In addition, we provide community examples, which are examples added and maintained by our community. Community examples can consist of both training examples and inference pipelines. For such examples, we are more lenient regarding the philosophy defined above, and we cannot guarantee to provide maintenance for every issue. Examples that are useful for the community but are either not yet deemed popular enough or not yet following our philosophy should go into the community examples folder. Note: Community examples can be a great first contribution to show the community how you like to use diffusers 🪄.

Important note

To make sure you can successfully run the latest versions of the example scripts, you have to install the library from source and install some example-specific requirements. To do this, execute the following steps in a new virtual environment:

git clone https://github.com/huggingface/diffusers
cd diffusers
pip install .

Then cd into the example folder of your choice and run

pip install -r requirements.txt
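Putting the steps above together, a full setup for one example might look like the following. The folder name and launch command are hypothetical placeholders here; each example's own README documents the actual script name and arguments:

```shell
# Install diffusers from source in a fresh virtual environment.
python -m venv .env
source .env/bin/activate
git clone https://github.com/huggingface/diffusers
cd diffusers
pip install .

# Install the requirements of one specific example,
# e.g. textual_inversion (folder name assumed for illustration).
cd examples/textual_inversion
pip install -r requirements.txt

# Hypothetical launch; see the example's README for the real arguments.
accelerate launch textual_inversion.py --help
```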