EveryDream2trainer/installers/Runpod.ipynb

{
"cells": [
{
"cell_type": "markdown",
"id": "2c831b5b-3025-4177-bef5-25aaec89573a",
"metadata": {},
"source": [
"## Every Dream v2 RunPod Installer\n",
"\n",
"[General Instructions](https://github.com/victorchall/EveryDream2trainer/blob/main/README.md)\n",
"\n",
"You can sign up for Runpod here (shameless referral link): [Runpod](https://runpod.io/?ref=oko38cd0)\n",
"\n",
"### Usage\n",
"\n",
"1. Prepare your training data before you begin (see below)\n",
"2. Spin the `RunPod Stable Diffusion v2.1` template. The `RunPod PyTorch` template does not work due to an old version of Python. \n",
"3. Open this notebook with `File > Open from URL...` pointing to `https://raw.githubusercontent.com/victorchall/EveryDream2trainer/main/installers/Runpod.ipynb`\n",
"4. Run each cell below once, noting any instructions above the cell (the first step requires a pod restart)\n",
"5. Figure out how you want to tweak the process next\n",
"6. Rinse, Repeat\n",
"\n",
"#### A note on storage\n",
"\n",
"Remember, on RunPod time is more expensive than storage. \n",
"\n",
"Which is good, because running a lot of experiments can generate a lot of data. Not having the right save points to recover quickly from inevitable mistakes will cost you a lot of time.\n",
"\n",
"When in doubt, give yourself ~125GB of Runpod **Volume** storage.\n",
"\n",
"#### Are you ready?\n",
"\n",
"You will want to have your data prepared before starting, and have a rough training plan in mind. \n",
"\n",
"**Don't waste rental fees if you're not fully prepared to start training.**"
]
},
{
"cell_type": "markdown",
"id": "9cc4250a-bd89-4623-a188-7bb9fd3b99ec",
"metadata": {},
"source": [
"## Install EveryDream"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "bb6d14b7-3c37-4ec4-8559-16b4e9b8dd18",
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"\n",
"%cd /workspace\n",
"\n",
"if not os.path.exists(\"EveryDream2trainer\"):\n",
" !git clone https://github.com/victorchall/EveryDream2trainer\n",
"\n",
"%cd EveryDream2trainer\n",
"%mkdir input\n",
"%run utils/get_yamls.py\n",
"\n",
"!echo pass > /workspace/stable-diffusion-webui/relauncher.py"
]
},
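{
"cell_type": "code",
"execution_count": null,
"id": "install-check-sketch",
"metadata": {},
"outputs": [],
"source": [
"# Optional sketch (not part of the original installer): sanity-check that the clone\n",
"# and folder setup above succeeded before moving on. Paths mirror the install cell.\n",
"import os\n",
"\n",
"repo = \"/workspace/EveryDream2trainer\"\n",
"for path in (repo, os.path.join(repo, \"input\")):\n",
"    status = \"ok\" if os.path.isdir(path) else \"MISSING\"\n",
"    print(f\"{status}: {path}\")"
]
},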
{
"cell_type": "markdown",
"id": "5123d4e6-281c-4475-99fd-328f4d5df734",
"metadata": {},
"source": [
"### Check your VRAM\n",
"If you see `22000 MB` or lower, then trash your pod and pick an A5000/3090 or better pod next time\n",
"\n",
"If you see `24576 MB` or higher you are good to go, but notice that there are `3500 MB` being taken up by Automatic 1111.\n",
"\n",
"Simply killing the web-ui won't free up that VRAM, but fortunately we added a hack to disable it above.\n",
"\n",
"Unfortunately it will require a pod restart once everything is installed."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "0902e735",
"metadata": {},
"outputs": [],
"source": [
"!grep Swap /proc/meminfo\n",
"!swapon -s\n",
"!nvidia-smi"
]
},
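{
"cell_type": "code",
"execution_count": null,
"id": "vram-check-sketch",
"metadata": {},
"outputs": [],
"source": [
"# Optional sketch (not part of the original installer): read the same numbers\n",
"# programmatically by querying nvidia-smi. Assumes a single-GPU pod.\n",
"import subprocess\n",
"\n",
"query = subprocess.run(\n",
"    [\"nvidia-smi\", \"--query-gpu=memory.total,memory.used\", \"--format=csv,noheader,nounits\"],\n",
"    capture_output=True, text=True, check=True,\n",
")\n",
"total_mb, used_mb = (int(x) for x in query.stdout.splitlines()[0].split(\",\"))\n",
"print(f\"Total VRAM: {total_mb} MiB, in use: {used_mb} MiB\")\n",
"if total_mb < 24000:\n",
"    print(\"Less than 24 GB of VRAM - consider an A5000/3090 or better pod.\")"
]
},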
{
"cell_type": "markdown",
"id": "0bf1e8cd",
"metadata": {},
"source": [
"## Upload training files\n",
"\n",
"Ues the navigation on the left to open the ** \"workspace / EveryDream2trainer / input\"** and upload your training files using the **up arrow button** above the file explorer, or by dragging and dropping the files from your local machine onto the file explorer.\n",
"\n",
"If you have many training files, or nested folders of training data, create a zip archive of your training data, upload this file to the input folder, then right click on the zip file and select \"Extract Archive\".\n",
"\n",
"### Optional - Configure sample prompts\n",
"You can set your own sample prompts by adding them, one line at a time, to sample_prompts.txt.\n",
"\n",
"Keep in mind a longer list of prompts will take longer to generate. You may also want to adjust you sample_steps in the training notebook to a different value to get samples left often. This is probably a good idea when training a larger dataset that you know will take longer to train, where more frequent samples will not help you.\n",
"\n",
"## While your training data is uploading, go ahead to install the dependencies below\n",
"**This will a few minutes. Wait until it says \"DONE\" to move on.** \n",
"You can ignore \"warnings.\""
]
},
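{
"cell_type": "code",
"execution_count": null,
"id": "unzip-input-sketch",
"metadata": {},
"outputs": [],
"source": [
"# Optional sketch (not part of the original installer): a scripted alternative to the\n",
"# right-click \"Extract Archive\" step. Unpacks any .zip uploaded to the input folder\n",
"# and counts common image files so you can confirm the upload looks complete.\n",
"import zipfile\n",
"from pathlib import Path\n",
"\n",
"input_dir = Path(\"/workspace/EveryDream2trainer/input\")\n",
"for archive in input_dir.glob(\"*.zip\"):\n",
"    print(f\"Extracting {archive.name} ...\")\n",
"    with zipfile.ZipFile(archive) as zf:\n",
"        zf.extractall(input_dir)\n",
"\n",
"image_count = sum(1 for p in input_dir.rglob(\"*\") if p.suffix.lower() in {\".jpg\", \".jpeg\", \".png\", \".webp\"})\n",
"print(f\"Found {image_count} image files under {input_dir}\")"
]
},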
{
"cell_type": "code",
"execution_count": null,
"id": "9649a02c-fb2b-44f1-842d-d1662fa5c7cd",
"metadata": {
"scrolled": true,
"tags": []
},
"outputs": [],
"source": [
"!python -m pip install --upgrade pip\n",
"\n",
"!pip install requests==2.25.1\n",
"!pip install -U -I torch==1.13.1+cu117 torchvision==0.14.1+cu117 --extra-index-url \"https://download.pytorch.org/whl/cu117\"\n",
"!pip install transformers==4.25.1\n",
"!pip install -U diffusers[torch]\n",
"\n",
"!pip install pynvml==11.4.1\n",
"!pip install bitsandbytes==0.35.0\n",
"!pip install ftfy==6.1.1\n",
"!pip install aiohttp==3.8.3\n",
"!pip install \"tensorboard>=2.11.0\"\n",
"!pip install protobuf==3.20.2\n",
"!pip install wandb==0.13.6\n",
"!pip install colorama==0.4.5\n",
"!pip install -U triton\n",
"!pip install --pre -U xformers\n",
" \n",
"print(\"DONE\")"
]
},
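{
"cell_type": "code",
"execution_count": null,
"id": "dependency-check-sketch",
"metadata": {},
"outputs": [],
"source": [
"# Optional sketch (not part of the original installer): confirm the key packages installed\n",
"# above import cleanly before you restart the pod. A failure here usually means one of the\n",
"# pip steps needs to be re-run.\n",
"import importlib\n",
"\n",
"for name in (\"torch\", \"transformers\", \"diffusers\", \"bitsandbytes\", \"xformers\", \"wandb\"):\n",
"    try:\n",
"        module = importlib.import_module(name)\n",
"        print(f\"{name}: {getattr(module, '__version__', 'unknown version')}\")\n",
"    except Exception as exc:\n",
"        print(f\"{name}: import failed ({exc})\")"
]
},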
{
"cell_type": "markdown",
"id": "0889cec2-241e-4323-8463-23bd41ece7a3",
"metadata": {},
"source": [
"## RESTART (not reset) your pod now\n",
"The A1111 web ui will no longer load, and we will free up the rest of that VRAM. \n",
"\n",
"**_After restarting, reload_** this page and head on over to [EveryDream2trainer/Train_JupyterLab.ipynb](EveryDream2trainer/Train_JupyterLab.ipynb) to start training!"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "c8ba508f-7cf4-4f41-9d4d-2cf9975e6774",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.10"
},
"vscode": {
"interpreter": {
"hash": "2e677f113ff5b533036843965d6e18980b635d0aedc1c5cebd058006c5afc92a"
}
}
},
"nbformat": 4,
"nbformat_minor": 5
}