Update README.md
parent b6996e55d7 · commit 85345e81be
@@ -11,7 +11,13 @@ The implementation makes minimal changes over the official codebase of Textual Inversion.
### Preparation
To fine-tune a stable diffusion model, you need to obtain the pre-trained stable diffusion weights following the official [instructions](https://github.com/CompVis/stable-diffusion#stable-diffusion-v1). The weights can be downloaded from [HuggingFace](https://huggingface.co/CompVis). You can decide which version of the checkpoint to use; I use ```sd-v1-4-full-ema.ckpt```.
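
If you prefer to fetch the checkpoint programmatically, the snippet below is a minimal sketch using the ```huggingface_hub``` library. The repo id ```CompVis/stable-diffusion-v-1-4-original``` is an assumption about where this checkpoint is hosted, and the repo may be gated, so you might need to accept the license on the model page and run ```huggingface-cli login``` first.

```
from huggingface_hub import hf_hub_download  # pip install huggingface_hub

# Assumed repo id/filename for the v1.4 full-EMA checkpoint; adjust if you
# picked a different version of the weights.
ckpt_path = hf_hub_download(
    repo_id="CompVis/stable-diffusion-v-1-4-original",
    filename="sd-v1-4-full-ema.ckpt",
)
print(ckpt_path)  # cached location of the downloaded .ckpt
```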
We also need to create a set of images for regularization, as the fine-tuning algorithm of Dreambooth requires them. Details of the algorithm can be found in the paper. The text prompt can be ```a photo of a xxx```, where ```xxx``` is a word that describes the class of your object, such as ```dog```. The command is
```
python scripts/stable_txt2img.py --ddim_eta 0.0 --n_samples 8 --n_iter 1 --scale 10.0 --ddim_steps 50 --ckpt /path/to/original/stable-diffusion/sd-v1-4-full-ema.ckpt --prompt "a photo of a xxx"
```
I generate 8 images for regularization. After that, save the generated images (separately, one image per ```.png``` file) at ```/root/to/regularization/images```.
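
The sketch below copies the sampler outputs into that folder. The source directory ```outputs/txt2img-samples/samples``` is only an assumption about where ```stable_txt2img.py``` writes its images, and the destination is the placeholder path above, so adjust both to your setup.

```
import shutil
from pathlib import Path

# Assumed default output directory of the sampling script; change this if your
# run wrote the samples elsewhere.
src = Path("outputs/txt2img-samples/samples")
# Placeholder destination used in this README.
dst = Path("/root/to/regularization/images")
dst.mkdir(parents=True, exist_ok=True)

# Copy each generated sample as its own .png file, as required above.
pngs = sorted(src.glob("*.png"))
for i, p in enumerate(pngs):
    shutil.copy(p, dst / f"reg_{i:04d}.png")
print(f"Copied {len(pngs)} regularization images to {dst}")
```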
### Training