Update README.md

This commit is contained in:
Victor Hall 2023-04-30 19:35:32 -04:00 committed by GitHub
parent 8c6e8157e9
commit 08376d0b1c
1 changed file with 3 additions and 3 deletions


@@ -1,10 +1,10 @@
 # **EveryDream 2.0 is now live** https://github.com/victorchall/EveryDream2trainer
-The updated 2.0 repo is a from-scratch rewrite to allow easier future development and will have more features long term and lower VRAM use.
+The updated 2.0 repo is a from-scratch rewrite with significant improvements across the board including greatly increased performance and feature set.
-Below is the docs for EveryDream 1.0 (this repo).
+Below is the docs for EveryDream 1.0 (this repo), but really, use the new repo above.
-# Every Dream trainer for Stable Diffusion
+## Every Dream trainer for Stable Diffusion
 This is a bit of a divergence from other fine tuning methods out there for Stable Diffusion. This is a general purpose fine-tuning codebase meant to bridge the gap from small scales (ex Texual Inversion, Dreambooth) and large scale (i.e. full fine tuning on large clusters of GPUs). It is designed to run on a local 24GB Nvidia GPU, currently the 3090, 3090 Ti, 4090, or other various Quadrios and datacenter cards (A5500, A100, etc), or on Runpod with any of those GPUs.