# EveryDream Trainer 2.0

Welcome to v2.0 of the EveryDream trainer! Now with more Diffusers, faster performance, and even more features!

For the most up-to-date news and community discussions, please join us on Discord!

If you find this tool useful, please consider subscribing to the project on Patreon or making a one-time donation on Ko-fi. Your donations keep this project alive as a free, open-source tool with ongoing enhancements.

If you're coming from Dreambooth, please read this for an explanation of why EveryDream is not Dreambooth.

## Requirements

- Windows 10/11, Linux (Ubuntu 20.04+ recommended), or the Linux Docker container
- Python 3.10.x
- Nvidia GPU with 11GB VRAM or more (note: the 1080 Ti and 2080 Ti may require compiling xformers yourself)
- 16GB system RAM recommended minimum
- Only single-GPU training is currently supported
- 32GB of system RAM recommended for 50k+ training images; 16GB may work with a sufficiently large swap file
- Ampere or newer GPU with 24GB+ VRAM (3090, A5000, 4090, etc.) recommended for 10k+ images

...Or use any computer with a web browser and run on Vast/Colab. See the Cloud section below.
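The Python and GPU-driver floors above can be sanity-checked before installing. A minimal preflight sketch using only the standard library; the `preflight` helper is my own illustration, not part of the repo:

```python
import shutil
import sys

def preflight(py_version=sys.version_info[:2]):
    """Sanity-check the environment against the requirements above.

    Returns a list of warning strings; an empty list means the basics
    look OK. (Illustrative helper, not part of EveryDream itself.)
    """
    warnings = []
    # EveryDream2 targets Python 3.10.x specifically
    if tuple(py_version) != (3, 10):
        warnings.append(
            f"Python 3.10.x required, found {py_version[0]}.{py_version[1]}"
        )
    # A working Nvidia driver ships nvidia-smi; its absence is a red flag
    if shutil.which("nvidia-smi") is None:
        warnings.append("nvidia-smi not found; an Nvidia GPU driver is required")
    return warnings

for w in preflight():
    print("WARNING:", w)
```

VRAM and system RAM checks are deliberately left out here, since querying them portably needs third-party packages.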

## Video tutorials

- **Basic setup and getting started** - covers install, setup of base models, starting training, basic tweaking, and reading your logs
- **Multiaspect and crop jitter explainer** - a behind-the-scenes look at how the trainer handles multiaspect and crop jitter
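The multiaspect and crop-jitter ideas from the second video can be sketched in a few lines: images are matched to the resolution bucket closest to their aspect ratio, and the crop to that bucket is randomly offset each epoch so the model sees slightly different framings. This is a rough illustration of the concept, not the trainer's actual implementation:

```python
import random

def nearest_bucket(width, height, buckets):
    """Pick the (w, h) bucket whose aspect ratio best matches the image."""
    aspect = width / height
    return min(buckets, key=lambda wh: abs(wh[0] / wh[1] - aspect))

def jittered_crop(width, height, bucket_w, bucket_h, rng=None):
    """Return a (left, top, right, bottom) crop box at the bucket's
    aspect ratio, randomly offset each call (the 'crop jitter' idea)."""
    rng = rng or random.Random()
    bucket_aspect = bucket_w / bucket_h
    if width / height > bucket_aspect:
        # image is wider than the bucket: crop away width
        crop_w, crop_h = int(height * bucket_aspect), height
    else:
        # image is taller than the bucket: crop away height
        crop_w, crop_h = width, int(width / bucket_aspect)
    left = rng.randint(0, width - crop_w)
    top = rng.randint(0, height - crop_h)
    return (left, top, left + crop_w, top + crop_h)

buckets = [(512, 512), (640, 384), (384, 640)]
bw, bh = nearest_bucket(1600, 900, buckets)   # a 16:9 image lands in (640, 384)
box = jittered_crop(1600, 900, bw, bh)
```

In the real trainer the crop box would then be resized down to the bucket resolution; the jitter keeps repeated epochs from memorizing one exact crop.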

## Companion tools repo

Make sure to check out the tools repo; it has a grab bag of scripts to help with your data curation prior to training: automatic bulk BLIP captioning, a script to web-scrape images based on LAION data files, a script to rename generic pronouns to proper names or append artist tags to your captions, and more.

## Cloud/Docker

- Free tier Google Colab notebook
- RunPod / Vast instructions
- Vast.ai video tutorial
- RunPod video tutorial

## Docs

- Setup and installation
- Download and setup base models
- Data Preparation
- Training - how to start training
- Troubleshooting
- Basic Tweaking - important args to understand to get started
- Advanced Tweaking and Advanced Optimizer Tweaking
- Chaining training sessions - modify training parameters by chaining training sessions together end to end
- Shuffling Tags
- Data Balancing - includes my small treatise on model "preservation" with additional ground truth data
- Logging
- Validation - use a validation split on your data to see when you are overfitting and to tune hyperparameters
- Captioning - (beta) tools to automate captioning
- Plugins - (beta) write your own plugins to execute arbitrary code during training
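The chaining workflow mentioned above boils down to invoking `train.py` repeatedly, with each phase resuming from the checkpoint the previous phase produced. A minimal sketch of that loop; the flag names (`--config`, `--resume_ckpt`, `--project_name`) and the base checkpoint name are assumptions here, so check the chaining doc and the repo's `chain.bat` / `chain*.json` files for the real arguments:

```python
def build_chain(configs, base_ckpt):
    """Build a list of train.py command lines where each phase resumes
    from the checkpoint directory the previous phase wrote.

    Flag names are illustrative; see the chaining docs for the real CLI.
    """
    commands = []
    resume_from = base_ckpt
    for i, config in enumerate(configs):
        project = f"chain{i}"
        commands.append([
            "python", "train.py",
            "--config", config,
            "--resume_ckpt", resume_from,
            "--project_name", project,
        ])
        # the next phase picks up where this one ended
        resume_from = project
    return commands

# e.g. run with subprocess.run(cmd, check=True) for each cmd in:
cmds = build_chain(["chain0.json", "chain1.json", "chain2.json"], "base_model.ckpt")
```

Running the phases sequentially like this lets you lower resolution, learning rate, or other parameters between sessions without manual intervention.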

## Contributing

## Citations and references