*This documentation is incomplete. Please feel free to contribute to it.*
To evaluate your training progress, EveryDream2 has a Validation feature that acts as an independent check of how well the training is generalising from the training data, as opposed to how much it is simply learning to reproduce the training data ("overfitting").
When training a specific class, setting aside a portion of the data for validation lets you see trend lines that are invisible when you look only at the loss on the training data itself.
To do this, your data is split into `val` and `train` sets of captioned images. Training proceeds as normal using only the captioned images in the `train` dataset. At regular intervals (usually at the end of every epoch, but you can adjust this in your [validation config file](#how-to-configure-validation)) the `val` dataset is run through the model, producing a "loss" value that shows how well the model can apply what it has learnt to data it was not trained on.
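The automatic split described above can be pictured as a simple seeded random partition of the captioned images. This is a minimal sketch of the idea, not EveryDream2's actual implementation; the `split_dataset` helper name and the 0.15 proportion are illustrative:

```python
import random

def split_dataset(items, val_split_proportion=0.15, seed=555):
    """Randomly partition captioned images into train and val sets.

    Hypothetical helper for illustration only; EveryDream2's real
    split logic lives inside the trainer.
    """
    rng = random.Random(seed)        # fixed seed -> reproducible split
    shuffled = list(items)
    rng.shuffle(shuffled)
    n_val = max(1, int(len(shuffled) * val_split_proportion))
    return shuffled[n_val:], shuffled[:n_val]   # (train, val)

train_set, val_set = split_dataset([f"img_{i:03}.jpg" for i in range(100)])
```

Because the seed is fixed, the same images land in the `val` set on every run, so successive `loss/val` measurements are comparable.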
While loss on your training data should trend downward, if you set aside a validation set you can see when your validation loss starts to trend upward. This is a sign that you are overfitting, and that you should adjust your hyperparameters - for example, reduce the learning rate or train for fewer epochs.
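Detecting the upward trend can be as simple as checking whether recent validation losses have stopped improving on the best value seen so far. This is generic early stopping, sketched here for illustration - not a feature EveryDream2 applies automatically; the `patience` parameter is an assumption of this sketch:

```python
def should_stop(val_losses, patience=3):
    """Return True when validation loss has not improved on its best
    value for `patience` consecutive evaluations - a simple signal
    that the model may have started overfitting."""
    if len(val_losses) <= patience:
        return False
    best = min(val_losses[:-patience])           # best loss before the window
    return all(loss >= best for loss in val_losses[-patience:])
```

In practice you would run this check after each validation pass and stop (or save a checkpoint) when it fires.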
The validation system also offers a way to "stabilize" the training loss. You may be already used to seeing loss graphs like this on Tensorboard or wandb:
![Two noisy loss graphs](validation/basic-losses.png)
With `stabilize_training_loss` set to `true` in your `validation_config.json` file, you will also see the following graph, taken from the same training session:
![A loss graph trending steadily downwards](validation/train-stabilized.png)

This graph shows a model that is very steadily learning from its training data, trending nicely and clearly downwards in a way that is not visible from `loss/epoch` or `loss/log_step`.
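The reason the raw training loss is so noisy is that each diffusion training step samples a random timestep and random noise, so the measured loss jumps around even when the model itself is barely changing. Stabilization re-evaluates a fixed subset of the training data with a fixed random seed, so the same timesteps and noise are used on every pass and only the model's progress moves the number. A rough sketch of the idea - the names here are illustrative, not EveryDream2's internals:

```python
import random

def stabilized_loss(model_loss_fn, items, seed=555):
    """Evaluate loss on a fixed item set with fixed per-item randomness.

    model_loss_fn(item, timestep, noise_seed) stands in for the real
    diffusion loss; because each item's timestep and noise_seed are
    derived from a fixed seed, successive calls differ only in how
    much the model has learnt.
    """
    rng = random.Random(seed)
    fixed = [(item, rng.randint(0, 999), rng.random()) for item in items]
    losses = [model_loss_fn(item, t, n) for item, t, n in fixed]
    return sum(losses) / len(losses)
```

Called repeatedly with the same seed, this produces a smooth curve where any change reflects learning rather than sampling noise.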
## How does Validation Help?
Validation does require that you sacrifice some of your dataset (by default, about 15% of it), and you may be asking, "what do I get for that?". What you get is a way to estimate the state of a training run at a glance. I'll explain how using the following graph. Like the other graphs on this page, this was taken from a real training session with the Ted Bennett dataset.
![a graph labelled 'loss/val' that trends downwards, levels off then starts to rise](validation/validation-losses.png)
The training proceeds rapidly over the first 50 steps, with the model quickly getting better at applying what it has learnt from the `train` dataset to the `val` dataset. At this point, however, it levels off, staying flat for another 75 steps before starting to rise until the training was stopped just after step 150. The three parts of this graph - the fall, the flat part, and the rise - represent three phases of training the model.
**In the first phase**, the model quickly learns how to generate images of a bear that looks roughly like Ted Bennett:
| step 5 | step 17 | step 29 | step 41 | step 53 |
|-------------------------------------------------------------------------------------------------------------------------|-------------------------------------------------------------------------------------------------------------------------|-------------------------------------------------------------------------------------------------------------------------|-------------------------------------------------------------------------------------------------------------------------|-------------------------------------------------------------------------------------------------------------------------|
| ![](validation/tb_04_20230128-222200/samples/gs00005-1-ted bennett in a grey suit and tie with his arms spread out.jpg) | ![](validation/tb_04_20230128-222200/samples/gs00017-1-ted bennett in a grey suit and tie with his arms spread out.jpg) | ![](validation/tb_04_20230128-222200/samples/gs00029-1-ted bennett in a grey suit and tie with his arms spread out.jpg) | ![](validation/tb_04_20230128-222200/samples/gs00041-1-ted bennett in a grey suit and tie with his arms spread out.jpg) | ![](validation/tb_04_20230128-222200/samples/gs00053-1-ted bennett in a grey suit and tie with his arms spread out.jpg) |
(In all of these images the prompt is `ted bennett in a grey suit and tie with his arms spread out`.)
**In the second phase**, the model is what I call "churning" - it is no longer learning a lot of new information, but is instead circling around the optimal position. Although the loss is no longer substantially decreasing, the model might still be learning new information:
| step 65 | step 77 | step 89 | step 101 | step 113 |
|-------------------------------------------------------------------------------------------------------------------------|-------------------------------------------------------------------------------------------------------------------------|-------------------------------------------------------------------------------------------------------------------------|-------------------------------------------------------------------------------------------------------------------------|-------------------------------------------------------------------------------------------------------------------------|
| ![](validation/tb_04_20230128-222200/samples/gs00065-1-ted bennett in a grey suit and tie with his arms spread out.jpg) | ![](validation/tb_04_20230128-222200/samples/gs00077-1-ted bennett in a grey suit and tie with his arms spread out.jpg) | ![](validation/tb_04_20230128-222200/samples/gs00089-1-ted bennett in a grey suit and tie with his arms spread out.jpg) | ![](validation/tb_04_20230128-222200/samples/gs00101-1-ted bennett in a grey suit and tie with his arms spread out.jpg) | ![](validation/tb_04_20230128-222200/samples/gs00113-1-ted bennett in a grey suit and tie with his arms spread out.jpg) |
**In the third phase**, the model is starting to overfit: edges start to get a bit weird, the saturation becomes stronger; everything feels a whole lot more *intense* somehow:
| step 125 | step 137 | step 149 | step 161 | step 173 |
|-------------------------------------------------------------------------------------------------------------------------|-------------------------------------------------------------------------------------------------------------------------|-------------------------------------------------------------------------------------------------------------------------|-------------------------------------------------------------------------------------------------------------------------|-------------------------------------------------------------------------------------------------------------------------|
| ![](validation/tb_04_20230128-222200/samples/gs00125-1-ted bennett in a grey suit and tie with his arms spread out.jpg) | ![](validation/tb_04_20230128-222200/samples/gs00137-1-ted bennett in a grey suit and tie with his arms spread out.jpg) | ![](validation/tb_04_20230128-222200/samples/gs00149-1-ted bennett in a grey suit and tie with his arms spread out.jpg) | ![](validation/tb_04_20230128-222200/samples/gs00161-1-ted bennett in a grey suit and tie with his arms spread out.jpg) | ![](validation/tb_04_20230128-222200/samples/gs00173-1-ted bennett in a grey suit and tie with his arms spread out.jpg) |
When the model starts looking like this, you've gone past the best point and it's time to stop. Normally you would detect this point by watching the outputs, but you can also watch the `loss/val` graph and try to stop the training before it becomes unstable or trends upward.
## Validation vs training losses
Here are the `train-stabilized` and `val` graphs again:
| train-stabilized | val |
|--------------------------------------|---------------------------------------|
| ![](validation/train-stabilized.png) | ![](validation/validation-losses.png) |
Once again, these are taken from a real training run with the Ted Bennett dataset - the same run that produced the images above. You can see that although the `train-stabilized` graph tracks steadily downwards and constantly looks as though it's doing the right thing, the `val` graph tells a different story. The shape of the `val` graph shows you, in a compact way, just where your training is.
The best time to stop training is some time in the flat area after the easy learning has completed - on the graphs above, somewhere between step 50 and around step 125. It might be best to stop before step 100, or it could be better to give it a few more rounds and stop at step 125. To know for sure you'd need to look at more sample outputs and ideally try out the trained model in your web UI of choice.
Because the flat "churn" period may be very long, it can be hard to tell when you're about to start "frying" your output. Becoming familiar with how the validation curve looks with your model will help. Try different learning rates and see how the training responds - you may find you want to dial the learning rate back a bit, so that the `val` graph looks more like the `train-stabilized` graph. Or you may find your dataset needs to be given a sharp kick at the start and then left to churn for a while. Try out different values, compare the graph to the results -- and please share your findings with me @damian0815 on the EveryDream discord, I'd be very happy to incorporate anything you find into this documentation.
## How to configure validation
`train.py` has a `validation_config` option that can be set either as a CLI argument or in the config file. To enable validation, set this option to the path to a JSON file containing validation options. There is a default validation file `validation_default.json` in the repo root, but it is not used unless you specify it.
CLI use:

`--validation_config validation_default.json`

or in a config file:

`"validation_config": "validation_default.json"`
### Validation config settings

Validation adds a `loss/val` curve to your Tensorboard logs. Since the validation data is kept separate from your training data, a `loss/val` curve that starts to trend upward tells you that you are overfitting.

Additional notes are available here: https://github.com/victorchall/EveryDream2trainer/pull/36

The config file has the following options:
* `validate_training`: If `true`, validate the training using a separate set of image/caption pairs, and log the results as `loss/val`. The curve will trend downwards as the model trains, then flatten and start to trend upwards as effective training finishes and the model begins to overfit the training data. Very useful for preventing overfitting, for checking if your learning rate is too low or too high, and for deciding when to stop training.
* `val_split_mode`: Either `automatic` or `manual`; ignored if `validate_training` is `false`.
* `automatic` val_split_mode picks a random subset of the training set (the number of items is controlled by `val_split_proportion`) and removes them from training to use as a validation set.
* `manual` val_split_mode lets you provide your own folder of validation items (images+captions), specified using `val_data_root`.
* `val_split_proportion`: For `automatic` val_split_mode, how much of the train dataset should be removed and used for validation. Typical values are 0.15-0.2 (15-20% of the total dataset). Higher is more accurate but slower.
* `val_data_root`: For `manual` val_split_mode, the path to a folder containing validation items.
* `stabilize_training_loss`: If `true`, re-evaluate the training loss normally logged as `loss/epoch` and `loss/log_step` using a fixed random seed, and log the results as `loss/train-stabilized`. This shows the training progress more clearly, but it is not enough on its own to tell you if you're overfitting.
* `stabilize_split_proportion`: If `stabilize_training_loss` is `true`, the proportion of the train dataset to re-use for stabilizing the train loss graph. Typical values are 0.15-0.2 (15-20% of the total dataset). Higher is more accurate but slower.
* `every_n_epochs`: How often to run validation (1=every epoch).
* `seed`: The seed to use when running validation passes, and for picking subsets of the data to use with `automatic` val_split_mode and/or `stabilize_training_loss`.
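Putting the options together, a complete validation config might look like the following. The keys are those documented above; the values are illustrative and not necessarily the contents of the repo's `validation_default.json`:

```json
{
    "validate_training": true,
    "val_split_mode": "automatic",
    "val_split_proportion": 0.15,
    "val_data_root": null,
    "stabilize_training_loss": true,
    "stabilize_split_proportion": 0.15,
    "every_n_epochs": 1,
    "seed": 555
}
```

With `val_split_mode` set to `manual`, you would instead set `val_data_root` to the path of your own validation folder.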