diff --git a/examples/text_to_image/README.md b/examples/text_to_image/README.md
index 31b00e94..965e2f24 100644
--- a/examples/text_to_image/README.md
+++ b/examples/text_to_image/README.md
@@ -148,7 +148,7 @@ huggingface-cli login
 Now we can start training!
 
 ```bash
-accelerate --mixed_precision="fp16" launch train_text_to_image_lora.py \
+accelerate launch --mixed_precision="fp16" train_text_to_image_lora.py \
   --pretrained_model_name_or_path=$MODEL_NAME \
   --dataset_name=$DATASET_NAME --caption_column="text" \
   --resolution=512 --random_flip \
@@ -157,7 +157,7 @@ accelerate --mixed_precision="fp16" launch train_text_to_image_lora.py \
   --learning_rate=1e-04 --lr_scheduler="constant" --lr_warmup_steps=0 \
   --seed=42 \
   --output_dir="sd-pokemon-model-lora" \
-  --save_sample_prompt="cute dragon creature" --report_to="wandb"
+  --validation_prompt="cute dragon creature" --report_to="wandb"
 ```
 
 The above command will also run inference as fine-tuning progresses and log the results to Weights and Biases.
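The renamed `--validation_prompt` flag supplies the prompt used for the intermediate inference runs that get logged to Weights and Biases. After training finishes, the LoRA weights saved to `--output_dir` can be loaded on top of the base model for inference. Below is a minimal sketch, assuming the base checkpoint matches the `$MODEL_NAME` used for training (here `CompVis/stable-diffusion-v1-4` is a placeholder) and that the weights were written to the local `sd-pokemon-model-lora` directory from the command above:

```python
import torch
from diffusers import StableDiffusionPipeline

# Assumed base model; must match the $MODEL_NAME used during training.
base_model = "CompVis/stable-diffusion-v1-4"
# Assumed local path: the --output_dir from the training command.
lora_path = "sd-pokemon-model-lora"

# Load the base pipeline in half precision and attach the LoRA attention weights.
pipe = StableDiffusionPipeline.from_pretrained(base_model, torch_dtype=torch.float16)
pipe.unet.load_attn_procs(lora_path)
pipe.to("cuda")

# Generate an image with the same prompt used for validation during training.
prompt = "cute dragon creature"
image = pipe(prompt, num_inference_steps=30, guidance_scale=7.5).images[0]
image.save("dragon.png")
```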