# Stable Diffusion text-to-image fine-tuning

The `train_text_to_image.py` script shows how to fine-tune the Stable Diffusion model on your own dataset.

**Note**: This script is experimental. The script fine-tunes the whole model, and often the model overfits and runs into issues like catastrophic forgetting. It's recommended to try different hyperparameters to get the best results on your dataset.
## Running locally with PyTorch
### Installing the dependencies
Before running the scripts, make sure to install the library's training dependencies:
```bash
pip install git+https://github.com/huggingface/diffusers.git
pip install -U -r requirements.txt
```
And initialize an 🤗Accelerate environment with:
```bash
accelerate config
```
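If you're running in an environment without an interactive shell (a notebook or a CI job, for example), you can write a default 🤗 Accelerate configuration from Python instead. This is a minimal sketch using `accelerate.utils.write_basic_config`:

```python
# Sketch: create a default 🤗 Accelerate config without the interactive prompt,
# equivalent to accepting the default answers of `accelerate config`.
from accelerate.utils import write_basic_config

write_basic_config()
```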
### Pokemon example
You need to accept the model license before downloading or using the weights. In this example we'll use model version `v1-4`, so you'll need to visit its card, read the license and tick the checkbox if you agree.

You have to be a registered user on the 🤗 Hugging Face Hub, and you'll also need to use an access token for the code to work. For more information on access tokens, please refer to this section of the documentation.
Run the following command to authenticate your token:

```bash
huggingface-cli login
```
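Alternatively, if you prefer to authenticate from Python (for example in a notebook), the `huggingface_hub` library provides a `login` helper. This is a minimal sketch; the token shown in the comment is a placeholder:

```python
# Sketch: authenticate to the Hugging Face Hub from Python instead of the CLI.
from huggingface_hub import login

login()  # prompts for your access token, or pass it directly with login(token="hf_...")
```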
If you have already cloned the repo, then you won't need to go through these steps.
### Hardware
With `gradient_checkpointing` and `mixed_precision` it should be possible to fine-tune the model on a single 24GB GPU. For a higher `batch_size` and faster training it's better to use GPUs with more than 30GB of memory.
```bash
export MODEL_NAME="CompVis/stable-diffusion-v1-4"
export dataset_name="lambdalabs/pokemon-blip-captions"

accelerate launch train_text_to_image.py \
  --pretrained_model_name_or_path=$MODEL_NAME \
  --dataset_name=$dataset_name \
  --use_ema \
  --resolution=512 --center_crop --random_flip \
  --train_batch_size=1 \
  --gradient_accumulation_steps=4 \
  --gradient_checkpointing \
  --mixed_precision="fp16" \
  --max_train_steps=15000 \
  --learning_rate=1e-05 \
  --max_grad_norm=1 \
  --lr_scheduler="constant" --lr_warmup_steps=0 \
  --output_dir="sd-pokemon-model"
```
To run on your own training files, prepare the dataset according to the format required by `datasets`; you can find the instructions for how to do that in this document. If you wish to use custom loading logic, you should modify the script; we have left pointers for that in the training script.
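As an example, a common way to satisfy the `datasets` format is an image folder containing a `metadata.jsonl` file that maps each image to its caption. The directory layout and file names below are hypothetical, and this sketch assumes the script's default caption column is `text`:

```python
# Sketch: an `imagefolder`-style dataset that can be passed via --train_data_dir.
# Hypothetical layout:
#
#   path_to_your_dataset/
#   ├── metadata.jsonl   # one JSON object per image, e.g.
#   │                    # {"file_name": "0001.png", "text": "a drawing of a green pokemon"}
#   ├── 0001.png
#   └── 0002.png

from datasets import load_dataset

dataset = load_dataset("imagefolder", data_dir="path_to_your_dataset", split="train")
print(dataset[0]["text"])   # the caption column
print(dataset[0]["image"])  # a PIL image
```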
```bash
export MODEL_NAME="CompVis/stable-diffusion-v1-4"
export TRAIN_DIR="path_to_your_dataset"

accelerate launch train_text_to_image.py \
  --pretrained_model_name_or_path=$MODEL_NAME \
  --train_data_dir=$TRAIN_DIR \
  --use_ema \
  --resolution=512 --center_crop --random_flip \
  --train_batch_size=1 \
  --gradient_accumulation_steps=4 \
  --gradient_checkpointing \
  --mixed_precision="fp16" \
  --max_train_steps=15000 \
  --learning_rate=1e-05 \
  --max_grad_norm=1 \
  --lr_scheduler="constant" --lr_warmup_steps=0 \
  --output_dir="sd-pokemon-model"
```
Once the training is finished the model will be saved in the `output_dir` specified in the command. In this example it's `sd-pokemon-model`. To load the fine-tuned model for inference, just pass that path to `StableDiffusionPipeline`:
```python
import torch
from diffusers import StableDiffusionPipeline

model_path = "path_to_saved_model"
pipe = StableDiffusionPipeline.from_pretrained(model_path, torch_dtype=torch.float16)
pipe.to("cuda")

image = pipe(prompt="yoda").images[0]
image.save("yoda-pokemon.png")
```
## Training with Flax/JAX
For faster training on TPUs and GPUs you can leverage the Flax training example. Follow the instructions above to get the model and dataset before running the script.

**Note**: The Flax example doesn't yet support features like gradient checkpointing and gradient accumulation, so to use Flax for faster training you will need GPUs or TPUs with more than 30GB of memory.
Before running the scripts, make sure to install the library's training dependencies:
```bash
pip install -U -r requirements_flax.txt
```
```bash
export MODEL_NAME="duongna/stable-diffusion-v1-4-flax"
export dataset_name="lambdalabs/pokemon-blip-captions"

python train_text_to_image_flax.py \
  --pretrained_model_name_or_path=$MODEL_NAME \
  --dataset_name=$dataset_name \
  --resolution=512 --center_crop --random_flip \
  --train_batch_size=1 \
  --mixed_precision="fp16" \
  --max_train_steps=15000 \
  --learning_rate=1e-05 \
  --max_grad_norm=1 \
  --output_dir="sd-pokemon-model"
```
To run on your own training files, prepare the dataset according to the format required by `datasets`; you can find the instructions for how to do that in this document. If you wish to use custom loading logic, you should modify the script; we have left pointers for that in the training script.
```bash
export MODEL_NAME="duongna/stable-diffusion-v1-4-flax"
export TRAIN_DIR="path_to_your_dataset"

python train_text_to_image_flax.py \
  --pretrained_model_name_or_path=$MODEL_NAME \
  --train_data_dir=$TRAIN_DIR \
  --resolution=512 --center_crop --random_flip \
  --train_batch_size=1 \
  --mixed_precision="fp16" \
  --max_train_steps=15000 \
  --learning_rate=1e-05 \
  --max_grad_norm=1 \
  --output_dir="sd-pokemon-model"
```
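The Flax script also saves a full pipeline to the `output_dir`, so you can run inference with `FlaxStableDiffusionPipeline`. The snippet below is a minimal sketch based on the usual JAX/Flax inference pattern in diffusers (replicated params, sharded prompts, `jit=True`); the model path and prompt are placeholders and the exact call may differ slightly across diffusers versions:

```python
import jax
import numpy as np
from flax.jax_utils import replicate
from flax.training.common_utils import shard
from diffusers import FlaxStableDiffusionPipeline

model_path = "sd-pokemon-model"  # the --output_dir used above
pipeline, params = FlaxStableDiffusionPipeline.from_pretrained(model_path)

# Generate one image per available device.
num_samples = jax.device_count()
prompt_ids = pipeline.prepare_inputs(num_samples * ["yoda"])

# Replicate the parameters and shard the prompts and RNG across devices.
params = replicate(params)
prng_seed = jax.random.split(jax.random.PRNGKey(0), num_samples)
prompt_ids = shard(prompt_ids)

images = pipeline(prompt_ids, params, prng_seed, num_inference_steps=50, jit=True).images
images = pipeline.numpy_to_pil(np.asarray(images.reshape((num_samples,) + images.shape[-3:])))
images[0].save("yoda-pokemon.png")
```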