docs: document --torch_device option
parent da15f0a745
commit de03c18b0c
@@ -2,7 +2,7 @@
Automatic captioning uses Salesforce's BLIP to automatically create a clean sentence structure for captioning input images before training.
-This requires an Nvidia GPU, but is not terribly intensive work. It should run fine on something like a 1050 Ti 4GB.
+By default this requires an Nvidia GPU, but is not terribly intensive work. It should run fine on something like a 1050 Ti 4GB. You can even run this on the CPU by specifying `--torch_device cpu` as an argument. This will be slower than running on an Nvidia GPU, but it will work even on Apple Silicon Macs.
[EveryDream trainer](https://github.com/victorchall/EveryDream-trainer) no longer requires cropped images. You only need to crop to exclude stuff you don't want trained, or to improve the proportion of face close-ups in your data. The EveryDream trainer now accepts multiple aspect ratios and can train on them natively.
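To make the documented option concrete, here is a minimal sketch of BLIP captioning with a selectable torch device. It is illustrative only: the `--torch_device` flag name comes from this change, but the script structure, the Hugging Face `transformers` BLIP classes, and the `Salesforce/blip-image-captioning-base` checkpoint are assumptions, not EveryDream's actual captioning code.

```python
# Illustrative sketch only -- not EveryDream's captioning script.
# Assumes the Hugging Face `transformers` BLIP implementation; the
# --torch_device flag mirrors the option documented above.
import argparse

import torch
from PIL import Image
from transformers import BlipForConditionalGeneration, BlipProcessor

parser = argparse.ArgumentParser(description="Caption one image with BLIP")
parser.add_argument("image", help="path to the image to caption")
parser.add_argument(
    "--torch_device",
    default="cuda",
    help="device to run BLIP on, e.g. 'cuda' (default) or 'cpu'",
)
args = parser.parse_args()

device = torch.device(args.torch_device)

# Hypothetical checkpoint choice; any BLIP captioning checkpoint works here.
checkpoint = "Salesforce/blip-image-captioning-base"
processor = BlipProcessor.from_pretrained(checkpoint)
model = BlipForConditionalGeneration.from_pretrained(checkpoint).to(device)

# Load the image, preprocess it on the chosen device, and generate a caption.
image = Image.open(args.image).convert("RGB")
inputs = processor(images=image, return_tensors="pt").to(device)
output_ids = model.generate(**inputs, max_new_tokens=40)
print(processor.decode(output_ids[0], skip_special_tokens=True))
```

Run with `--torch_device cpu` to caption without a GPU; it is slower than CUDA but works on machines without an Nvidia card, which is the behavior the new documentation line describes.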