# EveryDream Trainer 2.0

Welcome to v2.0 of EveryDream trainer! Now with more diffusers and even more features!

Please join us on Discord! https://discord.gg/uheqxU6sXN

If you find this tool useful, please consider subscribing to the project on Patreon or a one-time donation at Ko-fi.

If you're coming from Dreambooth, please read this for an explanation of why EveryDream is not Dreambooth.

## Requirements

* Windows 10/11, Linux (Ubuntu 20.04+ recommended), or the Linux Docker container
* Python 3.10.x
* Nvidia GPU with 11GB VRAM or more (note: 1080 Ti and 2080 Ti may require compiling xformers yourself)
* 16GB system RAM recommended minimum
* Only a single GPU is currently supported
* 32GB of system RAM recommended for 50k+ training images; 16GB may work with a sufficient swap file
* Ampere or newer GPU with 24GB+ VRAM (3090/A5000/4090, etc.) recommended for 10k+ images unless you want to wait a long time

...Or use any computer with a web browser and run on Vast/Runpod/Colab. See the Cloud section below.
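The Python 3.10.x requirement can be checked up front before installing anything else. A minimal sketch (the version target comes from the list above; the helper name is my own, not part of this repo):

```python
import sys

def is_supported_python(version=None):
    """Return True for a 3.10.x interpreter, the version this trainer targets."""
    major, minor = (version or sys.version_info)[:2]
    return (major, minor) == (3, 10)

if __name__ == "__main__":
    print("Python OK" if is_supported_python() else "Need Python 3.10.x")
```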

## Video tutorials

* Basic setup and getting started: covers install, setup of base models, starting training, basic tweaking, and looking at your logs
* Multiaspect and crop jitter: a behind-the-scenes look at how the trainer handles multiaspect and crop jitter

## Companion tools repo

Make sure to check out the tools repo; it has a grab bag of scripts to help with your data curation prior to training: automatic bulk BLIP captioning, a script to web scrape images based on Laion data files, a script to rename generic pronouns to proper names or append artist tags to your captions, and more.

## Docs

* Setup and installation
* Download and setup base models
* Data Preparation
* Training - How to start training
* Basic Tweaking - Important args to understand to get started
* Logging
* Advanced Tweaking - More stuff to tweak once you are comfortable
* Advanced Optimizer Tweaking - Even more stuff to tweak if you are very adventurous
* Chaining training sessions - Modify training parameters by chaining training sessions together end to end
* Shuffling Tags
* Data Balancing - Includes my small treatise on model preservation with ground truth data
* Validation - Use a validation split on your data to see when you are overfitting and tune hyperparameters
* Troubleshooting
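For quick orientation, training runs are driven by JSON config files such as the train.json in this repo. The fragment below is illustrative only; the key names and values are my guesses at a typical setup, and the Training and Basic Tweaking docs are the authoritative reference:

```json
{
  "resume_ckpt": "sd_v1-5_vae",
  "data_root": "/path/to/training/images",
  "project_name": "my_project",
  "max_epochs": 30,
  "batch_size": 6,
  "lr": 1.5e-6,
  "resolution": 512,
  "save_every_n_epochs": 20
}
```

A run is then typically launched with something like `python train.py --config train.json`; see the Chaining training sessions doc for running several configs end to end.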

## Cloud

* Free tier Google Colab notebook
* RunPod / Vast
* Docker image link