doc and bug with undersized
parent 3b085bdd28
commit 85f19b9a2f
@@ -2,8 +2,6 @@
Welcome to v2.0 of EveryDream trainer! Now with more diffusers and even more features!
[Companion tools](https://github.com/victorchall/EveryDream)
Please join us on Discord! https://discord.gg/uheqxU6sXN
If you find this tool useful, please consider subscribing to the project on [Patreon](https://www.patreon.com/everydream) or a one-time donation at [Ko-fi](https://ko-fi.com/everydream).
@@ -17,7 +15,7 @@ Covers install, setup of base models, starting training, basic tweaking, and lo
A behind-the-scenes look at how the trainer handles multi-aspect bucketing and crop jitter
### Tools repo
### Companion tools repo
Make sure to check out the [tools repo](https://github.com/victorchall/EveryDream); it has a grab bag of scripts to help with your data curation prior to training: automatic bulk captioning with BLIP, a script to web scrape images based on Laion data files, a script to rename generic pronouns to proper names or append artist tags to your captions, and more.
@@ -42,3 +40,8 @@ Make sure to check out the [tools repo](https://github.com/victorchall/EveryDrea
[Shuffling Tags](doc/SHUFFLING_TAGS.md)
[Data Balancing](doc/BALANCING.md) - Includes my small treatise on model preservation with ground truth data
## Cloud
[Free tier Google Colab notebook](https://colab.research.google.com/github/victorchall/EveryDream2trainer/blob/main/Train_Colab.ipynb)
@@ -358,7 +358,7 @@ class ImageTrainItem:
image_aspect = width / height
target_wh = min(self.aspects, key=lambda aspects:abs(aspects[0]/aspects[1] - image_aspect))
self.is_undersized = width * height < target_wh[0] * target_wh[1]
self.is_undersized = (width * height) < (target_wh[0] * target_wh[1])
self.target_wh = target_wh
except Exception as e:
self.error = e
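For context on the changed line: the trainer picks the aspect-ratio bucket closest to the image's own aspect, then flags the image as undersized when it has fewer pixels than that bucket's target resolution. Below is a minimal, self-contained sketch of that logic; the bucket list is hypothetical and `pick_bucket` is an illustrative helper, not part of `ImageTrainItem`.

```python
# Standalone sketch of the bucket selection and undersized check above.
# ASPECTS is a made-up bucket list; the trainer derives its own table of
# (width, height) buckets from the configured training resolution.
ASPECTS = [(512, 512), (576, 448), (448, 576), (640, 384), (384, 640)]

def pick_bucket(width: int, height: int):
    image_aspect = width / height
    # bucket whose aspect ratio is closest to the image's aspect ratio
    target_wh = min(ASPECTS, key=lambda wh: abs(wh[0] / wh[1] - image_aspect))
    # undersized means the source has fewer pixels than the target bucket,
    # so it would have to be upscaled to fill the bucket
    is_undersized = (width * height) < (target_wh[0] * target_wh[1])
    return target_wh, is_undersized

print(pick_bucket(600, 400))  # ((640, 384), True): 240000 px < 245760 px
```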
@@ -44,9 +44,9 @@ The value defaults to 0.04, which means 4% conditional dropout. You can set
## LR tweaking
Learning rate adjustment is a very important part of training. You can use the default settings, or you can tweak it. Currently the default is 3e-6, which is higher than EveryDream1 which was defaulted to 1e-6, based on the results of testing and the ability to use larger batch sizes. You should consider increasing this further if you increase your batch size further (10+) using [gradient checkpointing](#gradient_checkpointing).
Learning rate adjustment is a very important part of training. You can use the default settings, or you can tweak it. Consider increasing the learning rate if you increase your batch size (10+) using [gradient checkpointing](#gradient_checkpointing).
--lr 3e-6 ^
--lr 1.5e-6 ^
By default, the learning rate is constant for the entire training session. If you want it to change on its own during training, you can use a cosine schedule.
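As a rough illustration of what the cosine option does (ignoring any warmup the actual scheduler may apply), here is a minimal sketch of cosine decay from the starting learning rate toward zero; the helper function and step counts are illustrative only, not part of the trainer's CLI.

```python
import math

def cosine_lr(step: int, total_steps: int, base_lr: float = 1.5e-6) -> float:
    """Standard cosine decay from base_lr at step 0 down to ~0 at the end."""
    progress = step / max(1, total_steps)
    return base_lr * 0.5 * (1.0 + math.cos(math.pi * progress))

# The LR stays near the starting value early on and falls off fastest
# mid-run, unlike a constant LR that never changes.
for step in (0, 250, 500, 750, 1000):
    print(step, f"{cosine_lr(step, 1000):.2e}")
# 0 1.50e-06, 250 1.28e-06, 500 7.50e-07, 750 2.20e-07, 1000 0.00e+00
```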