Update TWEAKING.md
You may wish to consider adding "sd1" or "sd2v" or similar to remember what the base model was, for example:

--project_name "jets_sd21768v" ^
## __Stuff you probably want on__
### amp
--amp

Enables automatic mixed precision. This greatly improves training speed and reduces VRAM use. [Torch](https://pytorch.org/docs/stable/amp.html) will automatically use FP16 precision for the specific model components where FP16 is sufficient, and FP32 otherwise. This also enables xformers to work with the SD1.x attention head schema, which is a large speed boost for SD1.x training. I highly suggest you always use this, but it is left as an option if you wish to disable it.

When amp is used with [gradient checkpointing](#gradient_checkpointing), you can run the trainer on 12GB GPUs, and potentially 11GB.
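As a minimal command-line sketch of combining these two options (the `train.py` entry point and the other arguments are assumptions based on the examples in this document, and the checkpointing flag is assumed to be spelled `--gradient_checkpointing` to match the section linked above):

```
python train.py ^
  --project_name "jets_sd21768v" ^
  --amp ^
  --gradient_checkpointing
```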
### useadam8bit
--useadam8bit

Uses [Tim Dettmers' reduced-precision AdamW 8-bit optimizer](https://github.com/TimDettmers/bitsandbytes). This seems to have no noticeable impact on quality but is considerably faster and more VRAM efficient. See more below in AdamW vs AdamW 8bit.
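As a sketch, the optimizer is switched with a flag like the others above; the bitsandbytes package from the linked repository must be installed for it to load. The surrounding command here is a placeholder, not the definitive invocation:

```
rem requires the bitsandbytes package linked above
python train.py ^
  --amp ^
  --useadam8bit
```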
## __Epochs__
EveryDream 2.0 has done away with repeats; instead, you set max_epochs. Changing epochs has the same effect as changing repeats in DreamBooth or EveryDream1. For example, if you had 50 repeats and 5 epochs, you would now set max_epochs to 250 (50x5=250). This is a bit more intuitive, as epochs and repeats no longer carry a double meaning.
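For example, converting an old 50-repeats / 5-epochs setup (assuming the flag is spelled `--max_epochs` to match the option name above, and that `train.py` is your launch script):

```
rem 50 repeats x 5 epochs in EveryDream1 = 250 epochs here
python train.py ^
  --max_epochs 250
```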