Advanced fine tuning tools for vision models


EveryDream Tools

This repo contains data engineering tools for people interested in taking their fine tuning beyond the initial DreamBooth paper implementations for Stable Diffusion, and may be useful for other image projects.

If you are looking for trainers, check out EveryDream 2.0. This repo is just a toolkit for data work, but it works in concert with that trainer.

For instance, with Stable Diffusion, training quality can be improved by mixing ground-truth Laion data into the training set to replace "regularization" images, together with CLIP-interrogated captions, the original TEXT captions from Laion, or human-generated labels. These are significant steps toward full fine-tuning capability.

Captioned training together with regularization has enabled multi-subject and multi-style training at the same time, and can scale to larger training efforts.

As an example project, you can download a large-scale model for Final Fantasy 7 Remake here: and be sure to also follow the gist link at the bottom for more information, along with links to example output of a multi-model fine tune.

Join the EveryDream discord here:


Download scrapes using Laion - Scrapes images off the web using Laion data files (runs on CPU).
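Laion data files pair each image URL with its original TEXT caption, and the heart of building a scrape list is simply filtering those rows by keyword before downloading. A minimal sketch of that filtering step, not the repo's actual script; the URL/TEXT column names follow Laion's metadata layout, and the sample rows here are made up for illustration:

```python
import csv
import io

# Tiny made-up sample in the shape of a Laion metadata file (URL, TEXT columns).
laion_sample = """URL,TEXT
https://example.com/1.jpg,a photo of a castle at sunset
https://example.com/2.jpg,a man riding a bicycle
https://example.com/3.jpg,an old stone castle in scotland
"""

def filter_rows(metadata_csv: str, keyword: str):
    """Return (url, caption) pairs whose TEXT caption contains the keyword."""
    reader = csv.DictReader(io.StringIO(metadata_csv))
    return [(row["URL"], row["TEXT"])
            for row in reader
            if keyword.lower() in row["TEXT"].lower()]

matches = filter_rows(laion_sample, "castle")
print(len(matches))  # 2 of the 3 sample captions mention "castle"
```

The matching URLs would then be handed to a downloader; keeping the TEXT caption alongside each URL is what lets the original alt-text be reused as a training caption later.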

Auto Captioning - Uses BLIP interrogation to caption images for training (includes colab notebook, needs minimal GPU).

File renaming - Simple script for replacing the generic pronouns that come out of CLIP captioning in filenames with proper names (ex "a man" -> "john doe", "a person" -> "jane doe").

See clip_rename.bat for an example to chain captioning and renaming together.
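At its core the rename step is string substitution on caption-style filenames. A rough sketch of the idea, not the repo's actual script; the pronoun-to-name mapping shown is an example you would edit for your own subjects:

```python
from pathlib import Path

# Example mapping from generic captioned pronouns to proper names;
# edit this for your own subjects.
REPLACEMENTS = {
    "a man": "john doe",
    "a person": "jane doe",
}

def renamed(filename: str) -> str:
    """Apply each pronoun -> name substitution to a caption-style filename."""
    for old, new in REPLACEMENTS.items():
        filename = filename.replace(old, new)
    return filename

def rename_dir(folder: str) -> None:
    """Rename every file in a folder according to REPLACEMENTS."""
    for path in Path(folder).iterdir():
        target = path.with_name(renamed(path.name))
        if target != path:
            path.rename(target)

print(renamed("a man standing on a beach.jpg"))  # john doe standing on a beach.jpg
```

Running this immediately after auto-captioning, as clip_rename.bat does, means the downloaded filenames go straight from generic captions to subject-specific training captions in one pass.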

Compress images - Compresses images to WEBP with a given size (ex 1.5 megapixels) to reduce disk usage if you've downloaded some massive PNG data sets (ex. FFHQ) and wish to save some disk space.
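The compression step amounts to scaling each image so its pixel area fits a target megapixel budget, then re-encoding as WEBP. A sketch of that math with an illustrative Pillow save call; Pillow and the quality setting are assumptions here, not the repo's exact implementation:

```python
import math

def target_size(width: int, height: int, max_megapixels: float) -> tuple:
    """Scale (width, height) down so the area is at most max_megapixels,
    preserving aspect ratio. Returns the size unchanged if already small enough."""
    area = width * height
    budget = max_megapixels * 1_000_000
    if area <= budget:
        return (width, height)
    scale = math.sqrt(budget / area)
    return (int(width * scale), int(height * scale))

def compress_to_webp(src_path: str, dst_path: str, max_megapixels: float = 1.5) -> None:
    """Resize one image to the megapixel budget and re-encode it as WEBP."""
    from PIL import Image  # assumed dependency, for illustration only
    img = Image.open(src_path)
    img = img.resize(target_size(img.width, img.height, max_megapixels), Image.LANCZOS)
    img.save(dst_path, format="WEBP", quality=90)

print(target_size(4096, 4096, 1.5))  # a 16 MP PNG shrinks to (1224, 1224)
```

Since training pipelines downscale to well under 1.5 megapixels anyway, re-encoding multi-megapixel PNGs this way loses nothing for training while cutting disk usage dramatically.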

Training (separate repo) - Fine tuning

Image Caption GUI and Video frame extractor courtesy of MStevenson

General Tools Notebook - Collection of various tools in this codebase by Nawnie, if you prefer a notebook GUI instead of the command line.


You can use conda or venv. This was developed on Python 3.10.5 but may work on older or newer versions.

One-step venv setup: run create_venv.bat.

Don't forget to run activate_venv.bat every time you open the command prompt later.


To use conda instead of venv:

conda env create -f environment.yaml

pip install torch==1.12.1+cu113 torchvision==0.13.1+cu113 --extra-index-url

git clone scripts/BLIP

conda activate everydream

Or, if you wish to configure your own venv, container/WSL, or Linux environment:

pip install -r requirements.txt

pip install torch==1.12.1+cu113 torchvision==0.13.1+cu113 --extra-index-url

git clone scripts/BLIP

Thanks to the Salesforce team for the BLIP tool. It produces sane captions like you would expect to see in alt text.