Victor Hall 2022-12-17 22:38:39 -05:00
parent 7172dff413
commit 335248809c
3 changed files with 2 additions and 90 deletions

@@ -1,24 +0,0 @@
# Installation
## Windows
* Open a normal Windows command prompt and run `windows_setup.bat` from the command line.
*Do **not** double-click the file from Windows File Explorer*; you need the command window open.
* While that is running, download the official xformers Windows wheel from this URL:
https://github.com/facebookresearch/xformers/suites/9544395581/artifacts/454051141
* Unzip the xformers file into the EveryDream2 folder.
* Check your command line window to make sure no errors occurred. If you have errors, please post them in the Discord and ask for assistance.
* Once the command line is done with no errors, paste this command into the command prompt:
`pip install xformers-0.0.15.dev0+303e613.d20221128-cp310-cp310-win_amd64.whl`
* When you want to train in the future after closing the command line, run `activate_venv.bat` from the command line to activate the virtual environment again (hint: type `a`, press Tab, then press Enter). A consolidated sketch of the full sequence follows this list.
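Taken together, a typical first-time Windows session looks roughly like this. This is a sketch assembled from the steps above, not an additional procedure; the wheel filename is the one linked earlier:
```
:: run from a normal command prompt inside the EveryDream2 folder
windows_setup.bat
:: after unzipping the downloaded xformers wheel into this folder:
pip install xformers-0.0.15.dev0+303e613.d20221128-cp310-cp310-win_amd64.whl
:: in later sessions, re-activate the venv before training:
activate_venv.bat
```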
## Next step
Read the documentation to set up the base models you will train from.
[Base Model setup](doc/BASEMODELS.md)

@@ -17,78 +17,14 @@ Clone the repo from a normal command line, then change into the directory:
```
cd EveryDream-trainer2
```
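For reference, the clone step that precedes the `cd` above would look something like the following. The repository URL is an assumption (it is not shown in this hunk), so substitute the actual address:
```
:: repository URL assumed for illustration only
git clone https://github.com/victorchall/EveryDream2trainer.git EveryDream-trainer2
cd EveryDream-trainer2
```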
## Download models
You need some sort of base model to start training. I suggest these two:
Stable Diffusion 1.5 with improved VAE:
https://huggingface.co/panopstor/EveryDream/blob/main/sd_v1-5_vae.ckpt
SD2.1 768:
https://huggingface.co/stabilityai/stable-diffusion-2-1/blob/main/v2-1_768-nonema-pruned.ckpt
You can use SD2.0 512 as well, but typically SD1.5 is going to be better.
https://huggingface.co/stabilityai/stable-diffusion-2-base/blob/main/512-base-ema.ckpt
Place these in the root folder of EveryDream2.
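If you prefer to download from the command line, a sketch using `wget` may help. Note the `/blob/` pages linked above must be changed to `/resolve/` URLs for direct download, per Hugging Face URL convention; that rewrite is assumed here, not stated in the links themselves:
```
wget https://huggingface.co/panopstor/EveryDream/resolve/main/sd_v1-5_vae.ckpt
wget https://huggingface.co/stabilityai/stable-diffusion-2-1/resolve/main/v2-1_768-nonema-pruned.ckpt
wget https://huggingface.co/stabilityai/stable-diffusion-2-base/resolve/main/512-base-ema.ckpt
```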
Run these commands *one time* to prepare them:
For SD1.x models, use this:
```
python utils/convert_original_stable_diffusion_to_diffusers.py --scheduler_type ddim ^
--original_config_file v1-inference.yaml ^
--image_size 512 ^
--checkpoint_path sd_v1-5_vae.ckpt ^
--prediction_type epsilon ^
--upcast_attn False ^
--pipeline_type FrozenCLIPEmbedder ^
--dump_path "ckpt_cache/sd_v1-5_vae"
```
And the SD2.1 768 model:
```
python utils/convert_original_stable_diffusion_to_diffusers.py --scheduler_type ddim ^
--original_config_file v2-inference-v.yaml ^
--image_size 768 ^
--checkpoint_path v2-1_768-nonema-pruned.ckpt ^
--prediction_type v_prediction ^
--upcast_attn False ^
--pipeline_type FrozenOpenCLIPEmbedder ^
--dump_path "ckpt_cache/v2-1_768-nonema-pruned"
```
And finally the SD2.0 512 base model (generally not recommended as a base model):
```
python utils/convert_original_stable_diffusion_to_diffusers.py --scheduler_type ddim ^
--original_config_file v2-inference.yaml ^
--image_size 512 ^
--checkpoint_path 512-base-ema.ckpt ^
--prediction_type epsilon ^
--upcast_attn False ^
--pipeline_type FrozenOpenCLIPEmbedder ^
--dump_path "ckpt_cache/512-base-ema"
```
If you have other models, you need to know which base model they were trained from, and in particular you must use the matching yaml (`--original_config_file`), or the conversion will not work properly.
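For example, a checkpoint fine-tuned from SD1.5 would pair with the v1 yaml. The filename `my_finetune.ckpt` here is hypothetical, standing in for your own model:
```
python utils/convert_original_stable_diffusion_to_diffusers.py --scheduler_type ddim ^
--original_config_file v1-inference.yaml ^
--image_size 512 ^
--checkpoint_path my_finetune.ckpt ^
--prediction_type epsilon ^
--upcast_attn False ^
--pipeline_type FrozenCLIPEmbedder ^
--dump_path "ckpt_cache/my_finetune"
```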
All of the above is a one-time step. After running the conversions, pass `--resume_ckpt` just the folder name, without the `ckpt_cache/` prefix, e.g.
```
python train.py --resume_ckpt "sd_v1-5_vae" ...
python train.py --resume_ckpt "v2-1_768-nonema-pruned" ...
python train.py --resume_ckpt "512-base-ema" ...
```
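After conversion, `ckpt_cache` should hold one diffusers-format folder per model, roughly like this (an illustrative layout; the subfolders follow the standard diffusers pipeline format, and exact contents may vary):
```
ckpt_cache/
├── sd_v1-5_vae/
│   ├── model_index.json
│   ├── unet/
│   ├── text_encoder/
│   ├── vae/
│   ├── tokenizer/
│   └── scheduler/
├── v2-1_768-nonema-pruned/
└── 512-base-ema/
```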
## Windows
Run `windows_setup.bat` to create your venv and install dependencies.
-windows_setup.bat
+windows_setup.cmd
-## Linux, Linux containers, or WSL
+## Linux, Linux containers, WSL, Runpod, etc
TBD

BIN doc/ckptcache.png (new binary file, 7.7 KiB; binary file not shown)