Merge branch 'main' of https://github.com/victorchall/EveryDream2trainer into main
commit 7349b57c0a
@@ -231,7 +231,7 @@ class DataLoaderMultiAspect():
            current = os.path.join(recurse_root, f)

            if os.path.isfile(current):
-                ext = os.path.splitext(f)[1]
+                ext = os.path.splitext(f)[1].lower()
                if ext in ['.jpg', '.jpeg', '.png', '.bmp', '.webp', '.jfif']:
                    # add the image 'multiply' number of times
                    for _ in range(multiply):
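The data loader change above is a case-insensitivity fix: without `.lower()`, files named `PHOTO.JPG` or `image.Png` fail the extension check and are silently skipped. A minimal standalone sketch of the same filtering logic (the names `recurse_root` and `multiply` follow the context lines above; this is not the trainer's actual function):

```python
import os

IMAGE_EXTS = {'.jpg', '.jpeg', '.png', '.bmp', '.webp', '.jfif'}

def gather_images(recurse_root, multiply=1):
    """Collect image file paths under one folder, repeating each 'multiply' times."""
    picked = []
    for f in os.listdir(recurse_root):
        current = os.path.join(recurse_root, f)
        if os.path.isfile(current):
            # .lower() makes the match case-insensitive, so "PHOTO.JPG" is kept too
            ext = os.path.splitext(f)[1].lower()
            if ext in IMAGE_EXTS:
                picked.extend([current] * multiply)
    return picked
```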
@@ -30,7 +30,7 @@ Remember to use the same folder when you launch tensorboard (```tensorboard --lo

By default, the CKPT-format copies of checkpoints that are periodically saved go in the trainer root folder. If you want to save them elsewhere, use this:

-    --ckpt_dir "r:\webui\models\stable-diffusion"
+    --save_ckpt_dir "r:\webui\models\stable-diffusion"

This is useful if you want to dump the CKPT files directly into your webui/inference program's model folder so you don't have to cut and paste them over manually.
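For comparison, the manual copy that `--save_ckpt_dir` spares you would look roughly like this; the checkpoint filename is hypothetical and the destination reuses the illustrative path from the example above:

```python
import os
import shutil

# Hypothetical checkpoint filename sitting in the trainer root folder.
src = "myproj-ep22-gs01099.ckpt"
# Illustrative webui model folder from the example above.
dst_dir = r"r:\webui\models\stable-diffusion"

shutil.copy2(src, os.path.join(dst_dir, os.path.basename(src)))
print("copied", src, "to", dst_dir)
```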
@@ -18,6 +18,8 @@ You can edit the example `train.json` file to your liking, then run the followin

+Be careful when editing the json file: any syntax error will cause the program to crash. You may want to check the file with a json validator before running it, either online (for example https://jsonlint.com/) or by opening it in VS Code.
+
One particular note: if the path to your `data_root` or `resume_ckpt` contains backslashes, they need to be doubled (`\\`) or replaced with single forward slashes (`/`). There is an example train.json in the repo root.

## Running from the command line with arguments

I recommend you copy one of the examples below and keep it in a text file for future reference. Your settings are logged in the logs folder, but you'll need to write a command to start training.
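Since a malformed train.json crashes the run, a quick local check is to load it with Python's own json module before launching. The key names below (`data_root`, `resume_ckpt`) are just the ones mentioned above, and the script is a sketch rather than part of the trainer:

```python
import json

with open("train.json", "r", encoding="utf-8") as f:
    # json.JSONDecodeError reports the exact line and column of any syntax error
    cfg = json.load(f)

# Windows paths inside JSON need doubled backslashes or forward slashes, e.g.
#   "data_root": "x:\\mydata\\project"   or   "data_root": "x:/mydata/project"
print("data_root:", cfg.get("data_root"))
print("resume_ckpt:", cfg.get("resume_ckpt"))
```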
@@ -80,4 +82,4 @@ Or use relative pathing:

```--resume_ckpt "logs\myproj20221213-161620\ckpts\myproj-ep22-gs01099" ^```

If you want to resume, point to the folder in the logs as shown above rather than converting back from a 2.0GB or 2.5GB pruned file, where possible.
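If you are unsure which folder to hand to `--resume_ckpt`, the diffusers copies live under `logs/<project_name>/ckpts` as noted in the checkpointing section of this doc, so a quick listing (a throwaway sketch, not trainer functionality) shows the candidates:

```python
import glob
import os

# Each run writes diffusers checkpoint folders under logs/<project_name>/ckpts/.
candidates = sorted(glob.glob(os.path.join("logs", "*", "ckpts", "*")))
for path in candidates:
    if os.path.isdir(path):
        print(path)  # any of these folders can be passed to --resume_ckpt
```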
@@ -4,7 +4,7 @@ This document should be read by all users who are trying to get the best results

## Logging

-Make sure you pay attention to your logs and sample images. Launch tensorboard in a second command line. See (logging)[doc/LOGGING.md] for more info.
+Make sure you pay attention to your logs and sample images. Launch tensorboard in a second command line. See [logging](LOGGING.md) for more info.

    tensorboard --logdir logs
@@ -48,7 +48,7 @@ If you are training a huge dataset (20k+) then saving every 1 epoch may not be v

*A "last" checkpoint is always saved at the end of training.*

-Diffusers copies of checkpoints are saved in your /logs/[project_name]/ckpts folder, and can be used to continue training if you want to pick up where you left off. CKPT files are saved in the root training folder by default. These folders can be changed. See [Advanced Tweaking](doc/ATWEAKING.md) for more info.
+Diffusers copies of checkpoints are saved in your /logs/[project_name]/ckpts folder, and can be used to continue training if you want to pick up where you left off. CKPT files are saved in the root training folder by default. These folders can be changed. See [Advanced Tweaking](ATWEAKING.md) for more info.

## Resuming training from previous runs
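If you want to confirm that one of those diffusers folders is complete before resuming from it, loading it with the diffusers library is a reasonable smoke test. This assumes the `diffusers` package is installed and reuses the illustrative checkpoint path from earlier in this doc; it is not part of the trainer:

```python
from diffusers import StableDiffusionPipeline

# Illustrative path; substitute one of your own logs/<project_name>/ckpts folders.
ckpt_folder = "logs/myproj20221213-161620/ckpts/myproj-ep22-gs01099"

# If this loads without error, the folder contains a complete diffusers checkpoint
# and should be usable for resuming.
pipe = StableDiffusionPipeline.from_pretrained(ckpt_folder)
print(type(pipe).__name__, "loaded from", ckpt_folder)
```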
@@ -58,7 +58,7 @@ If you want to resume training from a previous run, you can do so by pointing to

## Learning Rate

-The learning rate affects how much "training" is done on the model per training step. It is a very careful balance to select a value that will learn your data. See [Advanced Tweaking](doc/ATWEAKING.md) for more info. Once you have started, the learning rate is a good first knob to turn as you move into more advanced tweaking.
+The learning rate affects how much "training" is done on the model per training step. It is a very careful balance to select a value that will learn your data. See [Advanced Tweaking](ATWEAKING.md) for more info. Once you have started, the learning rate is a good first knob to turn as you move into more advanced tweaking.

## Batch Size
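To make "how much training per step" concrete, here is a toy single-weight update (plain SGD with made-up numbers, not the AdamW optimizer the trainer uses) showing that the learning rate directly scales how far each step moves a weight:

```python
# Made-up weight and gradient; only the scaling matters.
weight, gradient = 0.50, 0.20

for lr in (1e-6, 1e-5, 1e-4):
    step = lr * gradient
    print(f"lr={lr:.0e}: weight moves by {step:.1e} per step")
```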
@@ -66,7 +66,7 @@ Batch size is also another "hyperparameter" of itself and there are tradeoffs. It

--batch_size 4 ^

-While very small batch sizes can impact performance negatively, at some point larger sizes have little impact on overall speed as well, so shooting for the moon is not always advisable. Changing batch size may also impact what learning rate you use, with typically larger batch_size requiring a slightly higher learning rate. More info is provided in the [Advanced Tweaking](doc/ATWEAKING.md) document.
+While very small batch sizes can impact performance negatively, at some point larger sizes have little impact on overall speed as well, so shooting for the moon is not always advisable. Changing batch size may also impact what learning rate you use, with typically larger batch_size requiring a slightly higher learning rate. More info is provided in the [Advanced Tweaking](ATWEAKING.md) document.

## LR Scheduler
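As a rough illustration of the "larger batch size, slightly higher learning rate" point: one common rule of thumb (a heuristic, not an official ED2 recommendation) scales the learning rate with the square root of the batch-size ratio. The base values below are placeholders:

```python
import math

base_lr = 1.5e-6   # placeholder starting LR
base_batch = 4     # matches the --batch_size 4 example above

for new_batch in (2, 8, 16):
    # sqrt scaling is one common heuristic; linear scaling is another.
    scaled_lr = base_lr * math.sqrt(new_batch / base_batch)
    print(f"batch_size={new_batch}: lr ~= {scaled_lr:.2e}")
```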
@@ -74,7 +74,7 @@ A learning rate scheduler can change your learning rate as training progresses.

At this time, ED2.0 supports constant or cosine schedulers.

-The constant scheduler is the default and keeps your LR set to the value you set in the command line. That's really it for constant! I recommend sticking with it until you are comfortable with general training. More info in the [Advanced Tweaking](doc/ATWEAKING.md) document.
+The constant scheduler is the default and keeps your LR set to the value you set in the command line. That's really it for constant! I recommend sticking with it until you are comfortable with general training. More info in the [Advanced Tweaking](ATWEAKING.md) document.

## AdamW vs AdamW 8bit
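For intuition on what the cosine option does compared to constant, here is a small standalone sketch using PyTorch's built-in cosine annealing schedule. It only illustrates the shape of the decay; it is not the trainer's internal scheduler setup:

```python
import torch

# One dummy parameter so we can build a real optimizer/scheduler pair.
param = torch.nn.Parameter(torch.zeros(1))
optimizer = torch.optim.AdamW([param], lr=1e-6)

total_steps = 10
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=total_steps)

for step in range(total_steps):
    optimizer.step()      # in real training this follows loss.backward()
    scheduler.step()      # the LR follows a cosine curve down toward zero
    print(step, scheduler.get_last_lr()[0])
```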
@@ -98,4 +98,4 @@ Sample steps declares how often samples are generated and put into the logs and

--sample_steps 300 ^

Keep in mind that if you drastically change your batch_size, the frequency of samples (the time between them) will change. Going from batch size 2 to batch size 10 may reduce how fast steps process, so you may want to reduce sample_steps to compensate.
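A quick back-of-the-envelope for that interaction, using the numbers above (300 sample steps, batch sizes 2 and 10). The seconds-per-step figures are made-up placeholders; only the arithmetic is the point:

```python
sample_steps = 300

for batch_size, seconds_per_step in ((2, 0.5), (10, 2.0)):  # timings are illustrative
    images_between_samples = sample_steps * batch_size
    minutes_between_samples = sample_steps * seconds_per_step / 60
    print(f"batch_size={batch_size}: {images_between_samples} images and "
          f"~{minutes_between_samples:.0f} min between sample generations")
```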
train.py (2 changed lines)
@@ -841,7 +841,7 @@ def update_old_args(t_args):
        t_args.__dict__["shuffle_tags"] = False

if __name__ == "__main__":
-    supported_resolutions = [448, 512, 576, 640, 704, 768, 832, 896, 960, 1024, 1088, 1152]
+    supported_resolutions = [256, 384, 448, 512, 576, 640, 704, 768, 832, 896, 960, 1024, 1088, 1152]
    argparser = argparse.ArgumentParser(description="EveryDream2 Training options")
    argparser.add_argument("--config", type=str, required=False, default=None, help="JSON config file to load options from")
    args, _ = argparser.parse_known_args()
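The `--config` argument and `parse_known_args()` call above reflect a common two-pass argparse pattern: parse only `--config` first, load defaults from the JSON file, then parse the full argument set so command-line flags still override the file. A minimal standalone sketch of that pattern (not the trainer's exact code; the two flags shown are just examples taken from this doc):

```python
import argparse
import json

# Pass 1: grab just --config, ignoring everything else for now.
pre = argparse.ArgumentParser(add_help=False)
pre.add_argument("--config", type=str, default=None, help="JSON config file to load options from")
known, remaining = pre.parse_known_args()

# Pass 2: the real parser; JSON values become defaults, CLI flags still win.
parser = argparse.ArgumentParser(description="example training options", parents=[pre])
parser.add_argument("--batch_size", type=int, default=2)
parser.add_argument("--sample_steps", type=int, default=300)

if known.config is not None:
    with open(known.config, "r", encoding="utf-8") as f:
        parser.set_defaults(**json.load(f))

args = parser.parse_args()
print(args)
```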