
EveryDream Trainer 2.0

Welcome to v2.0 of EveryDream trainer! Now with more Diffusers, faster, and even more features!

For the most up to date news and community discussions, please join us on Discord!

If you find this tool useful, please consider subscribing to the project on Patreon or a one-time donation on Ko-fi. Your donations keep this project alive as a free open source tool with ongoing enhancements.

If you're coming from Dreambooth, please read this for an explanation of why EveryDream is not Dreambooth.

Requirements

Windows 10/11, Linux (Ubuntu 20.04+ recommended), or the Linux Docker container

Python 3.10.x

Nvidia GPU with 11GB VRAM or more (note: 1080 Ti and 2080 Ti may require compiling xformers yourself)

16GB system RAM recommended minimum

Only a single GPU is currently supported

32GB of system RAM recommended for 50k+ training images, though 16GB with a sufficiently large swap file may work

Ampere or newer GPU with 24GB+ VRAM (3090/A5000/4090, etc.) recommended for 10k+ images

...Or use any computer with a web browser and run on Vast/Colab. See the Cloud/Docker section below.
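If you want to confirm a local machine meets these requirements before installing, a minimal sketch like the one below (assuming PyTorch is already installed with CUDA support; it is not part of the trainer itself) reports your Python version and GPU VRAM against the figures above:

```python
# Minimal environment check (a sketch, not part of the trainer itself).
# Assumes PyTorch with CUDA support is installed.
import sys

import torch

assert sys.version_info[:2] == (3, 10), f"Python 3.10.x expected, found {sys.version.split()[0]}"
assert torch.cuda.is_available(), "No CUDA-capable GPU detected"

props = torch.cuda.get_device_properties(0)
vram_gb = props.total_memory / (1024 ** 3)
print(f"GPU: {props.name}, VRAM: {vram_gb:.1f} GB")

if vram_gb < 11:
    print("Warning: below the 11GB minimum; training will likely fail or need heavy tuning.")
elif vram_gb < 24:
    print("OK for smaller datasets; 24GB+ recommended for 10k+ images.")
```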

Video tutorials

Basic setup and getting started

Covers install, setup of base models, starting training, basic tweaking, and looking at your logs

Multiaspect and crop jitter explainer

A behind-the-scenes look at how the trainer handles multiaspect and crop jitter

Cloud/Docker

Free tier Google Colab notebook

RunPod / Vast Instructions

Vast.ai Video Tutorial

RunPod Video Tutorial

Docker image link

Docs

Setup and installation

Download and setup base models

Data Preparation

Training - How to start training

Troubleshooting

Basic Tweaking - Important args to understand to get started

Advanced Tweaking and Advanced Optimizer Tweaking

Chaining training sessions - Modify training parameters by chaining training sessions together end to end (a rough sketch follows this list)

Shuffling Tags

Data Balancing - Includes my small treatise on model "preservation" with additional ground truth data

Logging

Validation - Use a validation split on your data to see when you are overfitting and tune hyperparameters

Captioning - Llava, Cog, etc. to generate synthetic captions (or Old scripts for GIT/BLIP)

Plugins - (beta) write your own plugins to execute arbitrary code during training
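As a rough illustration of chaining training sessions (see the Chaining doc above for the authoritative workflow), the sketch below simply runs train.py twice with different config files, resuming the second phase from the last checkpoint of the first. The config file names and the --config / --resume_ckpt findlast arguments are assumptions here; confirm the exact arguments in the Training and Chaining docs.

```python
# Hypothetical chaining sketch: run two training phases back to back, with the
# second phase resuming from the checkpoint produced by the first.
# Config names and CLI arguments are assumptions; see the Chaining docs.
import subprocess

phases = [
    ["python", "train.py", "--config", "phase1.json"],
    ["python", "train.py", "--config", "phase2.json", "--resume_ckpt", "findlast"],
]

for cmd in phases:
    print("Running:", " ".join(cmd))
    subprocess.run(cmd, check=True)  # stop the chain if a phase fails
```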

Contributing

Citations and references