# EveryDream Trainer 2.0

Welcome to v2.0 of EveryDream trainer! Now with more Diffusers, faster, and even more features!

For the most up-to-date news and community discussions, please join us on Discord!

If you find this tool useful, please consider subscribing to the project on Patreon or making a one-time donation on Ko-fi. Your donations keep this project alive as a free, open-source tool with ongoing enhancements.

If you're coming from Dreambooth, please read this for an explanation of why EveryDream is not Dreambooth.

## Requirements

* Windows 10/11, Linux (Ubuntu 20.04+ recommended), or the Linux Docker container
* Python 3.10.x
* Nvidia GPU with 11GB VRAM or more (note: 1080 Ti and 2080 Ti may require compiling xformers yourself)
* 16GB system RAM recommended minimum
* Only a single GPU is currently supported
* 32GB of system RAM recommended for 50k+ training images, though 16GB may suffice with a sufficiently large swap file
* Ampere or newer GPU with 24GB+ VRAM (3090/A5000/4090, etc.) recommended for 10k+ images
* ...or use any computer with a web browser and run on Vast/Colab; see the Cloud/Docker section below
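As a quick sanity check of the Python and GPU requirements above, here is a minimal sketch (it assumes PyTorch is already installed, which the trainer requires anyway):

```python
import sys

import torch  # already a trainer dependency

# Python 3.10.x is required
assert sys.version_info[:2] == (3, 10), f"Python 3.10.x required, found {sys.version}"

# An Nvidia GPU with roughly 11GB of VRAM or more is required
assert torch.cuda.is_available(), "No CUDA device found"
vram_gb = torch.cuda.get_device_properties(0).total_memory / 1024**3
print(f"{torch.cuda.get_device_name(0)}: {vram_gb:.1f} GB VRAM")
assert vram_gb >= 11, "at least 11GB of VRAM is recommended"
```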

## Video tutorials

Basic setup and getting started

Covers installation, setup of base models, starting training, basic tweaking, and looking at your logs

Multiaspect and crop jitter explainer

A behind-the-scenes look at how the trainer handles multiaspect and crop jitter

## Cloud/Docker

* Free tier Google Colab notebook
* RunPod / Vast instructions
* Vast.ai video tutorial
* RunPod video tutorial
* Docker image link

## Docs

* Setup and installation
* Download and setup base models
* Data Preparation
* Training - How to start training
* Troubleshooting
* Basic Tweaking - Important args to understand to get started
* Advanced Tweaking and Advanced Optimizer Tweaking
* Chaining training sessions - Modify training parameters by chaining training sessions together end to end
* Shuffling Tags
* Data Balancing - Includes my small treatise on model "preservation" with additional ground truth data
* Logging
* Validation - Use a validation split on your data to see when you are overfitting and to tune hyperparameters
* Captioning - Tools to generate synthetic captions (Cog recommended)
* Plugins - (beta) Write your own plugins to execute arbitrary code during training; see the sketch after this list
## Contributing

## Citations and references