added detailed installation instructions

fixed bug with missing samples dir for a new install
added ctrl+c handler to immediately stop the program instead of waiting
AUTOMATIC 2022-08-31 11:04:19 +03:00
parent 765d7bc6be
commit e38ad2ee95
3 changed files with 114 additions and 47 deletions

README.md

@@ -6,50 +6,77 @@ Original script with Gradio UI was written by a kind anonymous user. This is a m
 
 ![](screenshot.png)
 
 ## Installing and running
 
-### Stable Diffusion
-
-This script assumes that you already have the main Stable Diffusion stuff installed, assumed to be in directory `/sd`.
-If you don't have it installed, follow the guide:
-
-- https://rentry.org/kretard
-
-This repository's `webgui.py` is a replacement for `kdiff.py` from the guide.
-
-Particularly, the following files must exist:
-
-- `/sd/configs/stable-diffusion/v1-inference.yaml`
-- `/sd/models/ldm/stable-diffusion-v1/model.ckpt`
-- `/sd/ldm/util.py`
-- `/sd/k_diffusion/__init__.py`
-
-### GFPGAN
-
-If you want to use GFPGAN to improve generated faces, you need to install it separately.
-Follow the instructions from https://github.com/TencentARC/GFPGAN, but when cloning it, do so into the Stable Diffusion main directory, `/sd`.
-After that, download [GFPGANv1.3.pth](https://github.com/TencentARC/GFPGAN/releases/download/v1.3.0/GFPGANv1.3.pth) and put it
-into the `/sd/GFPGAN/experiments/pretrained_models` directory. If you run into trouble with GFPGAN support, follow the instructions
-from the GFPGAN repository until the `inference_gfpgan.py` script works.
-
-The following files must exist:
-
-- `/sd/GFPGAN/inference_gfpgan.py`
-- `/sd/GFPGAN/experiments/pretrained_models/GFPGANv1.3.pth`
-
-If the GFPGAN directory does not exist, you will not get the option to use GFPGAN in the UI. If it does exist, you will either be able
-to use it, or there will be a message in the console with an error related to GFPGAN.
-
-### Web UI
-
-Run the script as:
-
-`python webui.py`
-
-When running the script, you must be in the main Stable Diffusion directory, `/sd`. If you cloned this repository into a subdirectory
-of `/sd`, say, the `stable-diffusion-webui` directory, you will run it as:
-
-`python stable-diffusion-webui/webui.py`
+You need python and git installed to run this. I tested the installation to work with Python 3.8.10;
+you may be able to run this on different versions.
+
+You need the Stable Diffusion model checkpoint, a big file containing the neural network weights. You
+can obtain it from the following places:
+- [official download](https://huggingface.co/CompVis/stable-diffusion-v-1-4-original)
+- [file storage](https://drive.yerf.org/wl/?id=EBfTrmcCCUAGaQBXVIj5lJmEhjoP1tgl)
+- [torrent](magnet:?xt=urn:btih:3a4a612d75ed088ea542acac52f9f45987488d1c&dn=sd-v1-4.ckpt&tr=udp%3a%2f%2ftracker.openbittorrent.com%3a6969%2fannounce&tr=udp%3a%2f%2ftracker.opentrackr.org%3a1337)
+
+You can optionally use GFPGAN to improve faces; for that, you'll need to download the model from [here](https://github.com/TencentARC/GFPGAN/releases/download/v1.3.0/GFPGANv1.3.pth).
+
+Instructions:
+
+```commandline
+:: create a directory somewhere for stable diffusion and open cmd in it; below, the directory is assumed to be b:\src\sd
+
+:: make sure you are in the right directory; the command must output b:\src\sd
+echo %cd%
+
+:: install torch with CUDA support. See https://pytorch.org/get-started/locally/ for more instructions if this fails.
+pip install torch --extra-index-url https://download.pytorch.org/whl/cu113
+
+:: check if torch supports GPU; this must output "True". You need CUDA 11 installed for this. You might be able to use
+:: a different version, but this is what I tested.
+python -c "import torch; print(torch.cuda.is_available())"
+
+:: clone Stable Diffusion repositories
+git clone https://github.com/CompVis/stable-diffusion.git
+git clone https://github.com/CompVis/taming-transformers
+
+:: install requirements of Stable Diffusion
+pip install transformers==4.19.2 diffusers invisible-watermark
+
+:: install k-diffusion
+pip install git+https://github.com/crowsonkb/k-diffusion.git
+
+:: (optional) install GFPGAN to fix faces
+pip install git+https://github.com/TencentARC/GFPGAN.git
+
+:: go into stable diffusion's repo directory
+cd stable-diffusion
+
+:: clone web ui
+git clone https://github.com/AUTOMATIC1111/stable-diffusion-webui.git
+
+:: install requirements of web ui
+pip install -r stable-diffusion-webui/requirements.txt
+
+:: (outside of command line) put the stable diffusion model into models/ldm/stable-diffusion-v1/model.ckpt; you'll have
+:: to create one missing directory;
+:: the command below must output something like: 1 File(s) 4,265,380,512 bytes
+dir models\ldm\stable-diffusion-v1\model.ckpt
+
+:: (outside of command line) put the GFPGAN model into the same directory as the webui script
+:: the command below must output something like: 1 File(s) 348,632,874 bytes
+dir stable-diffusion-webui\GFPGANv1.3.pth
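+
+:: (optional) instead of copying the GFPGAN model by hand, you can fetch it from the command line and
+:: then re-run the dir check above; this is a sketch assuming curl is available on your system:
+curl -L -o stable-diffusion-webui\GFPGANv1.3.pth https://github.com/TencentARC/GFPGAN/releases/download/v1.3.0/GFPGANv1.3.pth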
+```
+
+After that, the installation is finished.
+
+Run this command to start the web ui:
+
+```
+python stable-diffusion-webui/webui.py
+```
+
+If you have a 4GB video card, run the command with the `--lowvram` argument:
+
+```
+python stable-diffusion-webui/webui.py --lowvram
+```
+
+When launching, you may get a very long warning message related to some weights not being used. You may freely ignore it.
+
+After a while, you will get a message like this:
+
+```
+Running on local URL:  http://127.0.0.1:7860/
+```

requirements.txt (new file)

@@ -0,0 +1,10 @@
+basicsr
+gfpgan
+gradio
+numpy
+Pillow
+realesrgan
+torch
+transformers
+omegaconf
+pytorch_lightning

webui.py

@@ -1,8 +1,18 @@
 import argparse
 import os
 import sys
-from collections import namedtuple
-from contextlib import nullcontext
+
+script_path = os.path.dirname(os.path.realpath(__file__))
+sd_path = os.path.dirname(script_path)
+
+# add parent directory to path; this is where the Stable Diffusion repo should be
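+# each entry: (path to the repo, a file that must exist inside it, a human-readable name for the warning)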
+path_dirs = [(sd_path, 'ldm', 'Stable Diffusion'), ('../../taming-transformers', 'taming', 'Taming Transformers')]
+for d, must_exist, what in path_dirs:
+    must_exist_path = os.path.abspath(os.path.join(script_path, d, must_exist))
+    if not os.path.exists(must_exist_path):
+        print(f"Warning: {what} not found at path {must_exist_path}", file=sys.stderr)
+    else:
+        sys.path.append(os.path.join(script_path, d))
 
 import torch
 import torch.nn as nn
@@ -19,6 +29,9 @@ import html
 import time
 import json
 import traceback
+from collections import namedtuple
+from contextlib import nullcontext
+import signal
 
 import k_diffusion.sampling
 from ldm.util import instantiate_from_config
@@ -33,7 +46,6 @@ gradio.utils.get_local_ip_address = lambda: '127.0.0.1'
 mimetypes.init()
 mimetypes.add_type('application/javascript', '.js')
 
-script_path = os.path.dirname(os.path.realpath(__file__))
 
 # some of those options should not be changed at all because they would break the model, so I removed them from options.
 opt_C = 4
@@ -44,9 +56,10 @@ invalid_filename_chars = '<>:"/\\|?*\n'
 config_filename = "config.json"
 
 parser = argparse.ArgumentParser()
-parser.add_argument("--config", type=str, default="configs/stable-diffusion/v1-inference.yaml", help="path to config which constructs model",)
-parser.add_argument("--ckpt", type=str, default="models/ldm/stable-diffusion-v1/model.ckpt", help="path to checkpoint of model",)
+parser.add_argument("--config", type=str, default=os.path.join(sd_path, "configs/stable-diffusion/v1-inference.yaml"), help="path to config which constructs model",)
+parser.add_argument("--ckpt", type=str, default=os.path.join(sd_path, "models/ldm/stable-diffusion-v1/model.ckpt"), help="path to checkpoint of model",)
 parser.add_argument("--gfpgan-dir", type=str, help="GFPGAN directory", default=('./src/gfpgan' if os.path.exists('./src/gfpgan') else './GFPGAN'))
+parser.add_argument("--gfpgan-model", type=str, help="GFPGAN model file name", default='GFPGANv1.3.pth')
 parser.add_argument("--no-half", action='store_true', help="do not switch the model to 16-bit floats")
 parser.add_argument("--no-progressbar-hiding", action='store_true', help="do not hide progressbar in gradio UI (we hide it because it slows down ML if you have hardware acceleration in browser)")
 parser.add_argument("--max-batch-count", type=int, default=16, help="maximum batch count value for the UI")
@@ -122,25 +135,34 @@ sd_upscalers = {
 }
 
-have_gfpgan = False
-if os.path.exists(cmd_opts.gfpgan_dir):
-    try:
-        sys.path.append(os.path.abspath(cmd_opts.gfpgan_dir))
-        from gfpgan import GFPGANer
-        have_gfpgan = True
-    except:
-        print("Error importing GFPGAN:", file=sys.stderr)
-        print(traceback.format_exc(), file=sys.stderr)
+
+def gfpgan_model_path():
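+    # candidates: the bare model filename, next to this script, the working directory, and the
+    # GFPGAN checkout's pretrained_models directory; the first existing file wins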
+    places = [script_path, '.', os.path.join(cmd_opts.gfpgan_dir, 'experiments/pretrained_models')]
+    files = [cmd_opts.gfpgan_model] + [os.path.join(dirname, cmd_opts.gfpgan_model) for dirname in places]
+    found = [x for x in files if os.path.exists(x)]
+
+    if len(found) == 0:
+        raise Exception("GFPGAN model not found in paths: " + ", ".join(files))
+
+    return found[0]
 
 
 def gfpgan():
-    model_name = 'GFPGANv1.3'
-    model_path = os.path.join(cmd_opts.gfpgan_dir, 'experiments/pretrained_models', model_name + '.pth')
-    if not os.path.isfile(model_path):
-        raise Exception("GFPGAN model not found at path " + model_path)
-
-    return GFPGANer(model_path=model_path, upscale=1, arch='clean', channel_multiplier=2, bg_upsampler=None)
+    return GFPGANer(model_path=gfpgan_model_path(), upscale=1, arch='clean', channel_multiplier=2, bg_upsampler=None)
+
+
+have_gfpgan = False
+try:
+    model_path = gfpgan_model_path()
+
+    if os.path.exists(cmd_opts.gfpgan_dir):
+        sys.path.append(os.path.abspath(cmd_opts.gfpgan_dir))
+    from gfpgan import GFPGANer
+
+    have_gfpgan = True
+except Exception:
+    print("Error setting up GFPGAN:", file=sys.stderr)
+    print(traceback.format_exc(), file=sys.stderr)
 
 
 class Options:
@@ -865,6 +887,7 @@ def process_images(p: StableDiffusionProcessing) -> Processed:
     seed = int(random.randrange(4294967294) if p.seed == -1 else p.seed)
 
     sample_path = os.path.join(p.outpath, "samples")
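+    # create the samples directory if it doesn't exist yet; on a fresh install it is missing,
+    # and the listdir calls below would fail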
+    os.makedirs(sample_path, exist_ok=True)
     base_count = len(os.listdir(sample_path))
     grid_count = len(os.listdir(p.outpath)) - 1
@@ -1669,5 +1692,12 @@ demo = gr.TabbedInterface(
     analytics_enabled=False,
 )
 
+# make the program just exit at ctrl+c without waiting for anything
+def sigint_handler(sig, frame):
+    print('Interrupted')
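+    # os._exit ends the process immediately, skipping cleanup and any pending work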
+    os._exit(0)
+
+signal.signal(signal.SIGINT, sigint_handler)
+
 demo.queue(concurrency_count=1)
 demo.launch()