# EveryDream Trainer 2.0

Welcome to v2.0 of the EveryDream trainer! Now with more Diffusers, faster performance, and even more features!

For the most up to date news and community discussions, please join us on Discord!


If you find this tool useful, please consider subscribing to the project on Patreon or a one-time donation on Ko-fi. Your donations keep this project alive as a free open source tool with ongoing enhancements.


If you're coming from Dreambooth, please read this for an explanation of why EveryDream is not Dreambooth.

## Requirements

- Windows 10/11, Linux (Ubuntu 20.04+ recommended), or the Linux Docker container
- Python 3.10.x
- Nvidia GPU with 11GB VRAM or more (note: the 1080 Ti and 2080 Ti may require compiling xformers yourself)
- 16GB system RAM recommended minimum
- Only a single GPU is currently supported
- 32GB of system RAM recommended for 50k+ training images, though 16GB with a sufficiently large swap file may work
- Ampere or newer GPU with 24GB+ VRAM (3090, A5000, 4090, etc.) recommended for 10k+ images

...Or use any computer with a web browser and run on Vast/Colab. See the Cloud section below.
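As a quick sanity check before installing, a small script can verify a couple of these requirements. This is an illustrative sketch only — `check_environment` is a hypothetical helper, not part of this repo — and it checks just the Python version and GPU driver visibility, not VRAM or system RAM.

```python
import shutil
import sys

def check_environment(min_python=(3, 10)):
    """Rough preflight check against the requirements above (illustrative only).

    Returns a list of human-readable issues; an empty list means the
    basic checks passed.
    """
    issues = []
    # The trainer expects Python 3.10.x specifically.
    if sys.version_info[:2] != min_python:
        issues.append(
            f"Python {min_python[0]}.{min_python[1]}.x recommended, "
            f"found {sys.version_info.major}.{sys.version_info.minor}"
        )
    # nvidia-smi on PATH is a cheap proxy for a working Nvidia driver.
    if shutil.which("nvidia-smi") is None:
        issues.append("nvidia-smi not found on PATH; an Nvidia GPU driver is required")
    return issues

for issue in check_environment():
    print("WARNING:", issue)
```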

## Video tutorials

Basic setup and getting started

Covers installation, setup of base models, starting training, basic tweaking, and reading your logs

Multiaspect and crop jitter explainer

A behind-the-scenes look at how the trainer handles multiaspect and crop jitter

## Cloud/Docker

- Free tier Google Colab notebook
- RunPod / Vast instructions
- Vast.ai video tutorial
- RunPod video tutorial
- Docker image link

## Docs

- Setup and installation
- Download and set up base models
- Data preparation
- Training - how to start training
- Troubleshooting
- Basic tweaking - important args to understand to get started
- Advanced tweaking and advanced optimizer tweaking
- Chaining training sessions - modify training parameters by chaining training sessions together, end to end
- Shuffling tags
- Data balancing - includes my small treatise on model "preservation" with additional ground truth data
- Logging
- Validation - use a validation split on your data to see when you are overfitting and to tune hyperparameters
- Captioning - LLaVA, Cog, etc. to generate synthetic captions (or old scripts for GIT/BLIP)
- Plugins - (beta) write your own plugins to execute arbitrary code during training

## Contributing

## Citations and references