minor colab cleanup
parent f58c305cf2
commit e4ed5ff063
@@ -1,13 +1,14 @@
 {
  "cells": [
   {
+   "attachments": {},
    "cell_type": "markdown",
    "metadata": {
     "colab_type": "text",
     "id": "view-in-github"
    },
    "source": [
-    "<a href=\"https://colab.research.google.com/github/nawnie/EveryDream2trainer/blob/main/Train_Colab.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
+    "<a href=\"https://colab.research.google.com/github/victorchall/EveryDream2trainer/blob/main/Train_Colab.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
    ]
   },
   {
@@ -61,6 +62,7 @@
"outputs": [],
|
||||
"source": [
|
||||
"#@title Optional connect Gdrive\n",
|
||||
"#@markdown # but strongly recommended\n",
|
||||
"#@markdown This will let you put all your training data and checkpoints directly on your drive. Much faster/easier to continue later, less setup time.\n",
|
||||
"\n",
|
||||
"#@markdown Creates /content/drive/MyDrive/everydreamlogs/ckpt\n",
|
||||
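For readers skimming the diff, the Gdrive cell touched here reduces to something like the sketch below. The mount call is the standard google.colab API and the directory path comes from the markdown above; the rest is illustrative, not the notebook's exact code.

    # Minimal sketch of connecting Gdrive in Colab, assuming the standard API.
    import os
    from google.colab import drive

    drive.mount('/content/drive')  # prompts for authorization on first run

    # Create the checkpoint/log folder the notebook expects on your Drive.
    log_dir = '/content/drive/MyDrive/everydreamlogs/ckpt'
    os.makedirs(log_dir, exist_ok=True)
    print(f'Logs and checkpoints will be saved under {log_dir}')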
@@ -80,6 +82,7 @@
"outputs": [],
|
||||
"source": [
|
||||
"#@markdown # Install Dependencies\n",
|
||||
"#@markdown This will take a couple minutes, be patient and watch the output for \"DONE!\"\n",
|
||||
"from IPython.display import clear_output\n",
|
||||
"from subprocess import getoutput\n",
|
||||
"s = getoutput('nvidia-smi')\n",
|
||||
|
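The `nvidia-smi` line above is where the notebook inspects the GPU Colab assigned. A minimal sketch of how such a check can branch on the output follows; the specific branches and messages are assumptions, not the cell's actual logic.

    # Hedged sketch: detect the assigned GPU from nvidia-smi output.
    from subprocess import getoutput

    s = getoutput('nvidia-smi')
    if 'T4' in s:
        print('Tesla T4 detected (typical for the Colab free tier).')
    elif 'A100' in s:
        print('A100 detected: gradient checkpointing can likely be disabled.')
    elif 'failed' in s.lower():
        print('No GPU found. Set Runtime -> Change runtime type -> GPU.')
    else:
        print(s.splitlines()[0] if s else 'Unrecognized nvidia-smi output.')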
@@ -159,7 +162,9 @@
"import os\n",
|
||||
"#@title Setup conversion\n",
|
||||
"\n",
|
||||
"#@markdown If you already did this once with Gdrive connected, you can skip this step as the cached copy is on your gdrive. If you are not sure, look in your Gdrive for `logs/ckpt` and see if you have a folder with the `save_name` below.\n",
|
||||
"#@markdown **If you already did this once with Gdrive connected, you can skip this step as the cached copy is on your gdrive.** \n",
|
||||
"# \n",
|
||||
"# If you are not sure, look in your Gdrive for `everydreamlogs/ckpt` and see if you have a folder with the `save_name` below.\n",
|
||||
"\n",
|
||||
"#@markdown Pick the `model_type` in the dropdown. This is the model type that you are converting and you downloaded above. This is important as it will determine the model architecture and the correct settings to use.\n",
|
||||
"\n",
|
||||
|
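The "skip if cached" advice above amounts to checking Gdrive for an already-converted folder before re-running the conversion. A hedged sketch of that check, using a hypothetical `save_name` value:

    # Sketch: look for a cached diffusers conversion on Gdrive before redoing it.
    import os

    save_name = 'sd_v1-5'  # hypothetical example value
    cached = f'/content/drive/MyDrive/everydreamlogs/ckpt/{save_name}'
    if os.path.isdir(cached):
        print(f'Found cached conversion at {cached}; you can skip this step.')
    else:
        print('No cached copy found; run the conversion below.')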
@@ -249,19 +254,19 @@
"#@markdown # Run Everydream 2\n",
|
||||
"#@markdown If you want to use a .json config or upload your own, skip this cell and run the cell below instead\n",
|
||||
"\n",
|
||||
"#@markdown * Save logs and output ckpts to Gdrive (strongly suggested)\n",
|
||||
"Save_to_Gdrive = True #@param{type:\"boolean\"}\n",
|
||||
"#@markdown * Use resume to contnue training you just ran\n",
|
||||
"#@markdown * Use resume to contnue training you just ran, will also find latest diffusers log in your Gdrive to continue.\n",
|
||||
"resume = False #@param{type:\"boolean\"}\n",
|
||||
"#@markdown * Checkpointing Saves Vram to allow larger batch sizes minor slow down on a single batch size but will can allow room for a higher traning resolution\n",
|
||||
"#@markdown * Checkpointing Saves Vram to allow larger batch sizes minor slow down on a single batch size but will can allow room for a higher traning resolution (suggested on Colab Free tier, turn off for A100)\n",
|
||||
"Gradient_checkpointing = True #@param{type:\"boolean\"}\n",
|
||||
"#@markdown * Xformers saves ram and offers a great speed up\n",
|
||||
"Disable_Xformers = False #@param{type:\"boolean\"}\n",
|
||||
"#@markdown * best to just read this if interested in shufflng tags /content/EveryDream2trainer/doc/SHUFFLING_TAGS.md\n",
|
||||
"Disable_Xformers = False\n",
|
||||
"#@markdown * Tag shuffling, mainly for booru training. Best to just read this if interested in shufflng tags /content/EveryDream2trainer/doc/SHUFFLING_TAGS.md\n",
|
||||
"shuffle_tags = False #@param{type:\"boolean\"}\n",
|
||||
"#@markdown * you can stop the text encoder to attempt to reduce overfitting when resuming an unfinished model\n",
|
||||
"#@markdown * You can turn off the text encoder training (generally not suggested)\n",
|
||||
"Disable_text_Encoder= False #@param{type:\"boolean\"}\n",
|
||||
"#@markdown * Name your project so you can find it in your logs\n",
|
||||
"Project_Name = \"my_project\" #@param{type: 'string'}\n",
|
||||
"Max_Epochs = 100 #@param {type:\"slider\", min:0, max:200, step:5}\n",
|
||||
"\n",
|
||||
"#@markdown * The learning rate affects how much \"training\" is done on the model per training step. It is a very careful balance to select a value that will learn your data. See Advanced Tweaking for more info. Once you have started, the learning rate is a good first knob to turn as you move into more advanced tweaking.\n",
|
||||
"\n",
|
||||
|
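Form fields like those above are typically collected into a train.py command line. The sketch below shows the general pattern; the flag names are assumptions for illustration (check `python train.py --help` in the repo for the authoritative list).

    # Sketch: turning Colab form values into a train.py invocation.
    # Form values as set in the cell above, repeated here so the sketch runs:
    Project_Name, Max_Epochs = 'my_project', 100
    resume, Gradient_checkpointing = False, True
    Disable_Xformers, shuffle_tags, Disable_text_Encoder = False, False, False

    args = ['python', 'train.py',
            '--project_name', Project_Name,
            '--max_epochs', str(Max_Epochs)]
    if resume:
        args += ['--resume_ckpt', 'findlast']      # assumed resume token
    if Gradient_checkpointing:
        args.append('--gradient_checkpointing')    # assumed flag name
    if Disable_Xformers:
        args.append('--disable_xformers')          # assumed flag name
    if shuffle_tags:
        args.append('--shuffle_tags')              # assumed flag name
    if Disable_text_Encoder:
        args.append('--disable_textenc_training')  # assumed flag name
    print(' '.join(args))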
@@ -269,12 +274,12 @@
"\n",
|
||||
"#@markdown * A learning rate scheduler can change your learning rate as training progresses.\n",
|
||||
"\n",
|
||||
"#@markdown * The constant scheduler is the default and keeps your LR set to the value you set in the command line. That's really it for constant! I recommend sticking with it until you are comfortable with general training.\n",
|
||||
"#@markdown I recommend sticking with constant until you are comfortable with general training. \n",
|
||||
"\n",
|
||||
"Schedule = \"constant\" #@param [\"constant\", \"polynomial\", \"linear\", \"cosine\"] {allow-input: true}\n",
|
||||
"\n",
|
||||
"#@markdown * Resolution to train at (recommend 512). Higher resolution will require lower batch size (below).\n",
|
||||
"Resolution = 512#@param {type:\"slider\", min:256, max:768, step:64}\n",
|
||||
"Resolution = 512 #@param {type:\"slider\", min:256, max:768, step:64}\n",
|
||||
"\n",
|
||||
"#@markdown * Batch size is also another \"hyperparamter\" of itself and there are tradeoffs. It may not always be best to use the highest batch size possible. Once of the primary reasons to change it is if you get \"CUDA out of memory\" errors where lowering the value may help.\n",
|
||||
"\n",
|
||||
|
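The Schedule dropdown selects a learning rate schedule by name. A minimal sketch using the diffusers get_scheduler helper shows how such a name maps to a live scheduler; whether EveryDream2trainer wires it up exactly this way is an assumption, and the warmup and step counts are placeholders.

    # Sketch: map a schedule name to an LR scheduler via diffusers.
    import torch
    from diffusers.optimization import get_scheduler

    params = [torch.nn.Parameter(torch.zeros(1))]
    optimizer = torch.optim.AdamW(params, lr=1e-6)
    scheduler = get_scheduler(
        'constant',               # or 'polynomial', 'linear', 'cosine'
        optimizer=optimizer,
        num_warmup_steps=0,       # placeholder
        num_training_steps=1000,  # placeholder total step count
    )
    for _ in range(3):
        optimizer.step()
        scheduler.step()
    print(scheduler.get_last_lr())  # constant: stays at 1e-6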
@@ -283,20 +288,26 @@
"Batch_Size = 4 #@param{type: 'number'}\n",
|
||||
"\n",
|
||||
"#@markdown * Gradient accumulation is sort of like a virtual batch size increase use this to increase batch size with out increasing vram usage\n",
|
||||
"#@markdown * 1 or 5 steps will take the same vram as a batch of 1\n",
|
||||
"#@markdown * in colab free teir you can expect the fastest proformance from a batch of 4 and a step of 2 giving us a total batch size of 8 at 512 resolution \n",
|
||||
"#@markdown * Increasing this will not have much impact on VRAM use.\n",
|
||||
"#@markdown * In colab free teir you can expect the fastest proformance from a batch of 4 and a gradient step of 2 giving us a total batch size of 8 at 512 resolution \n",
|
||||
"#@markdown * Due to bucketng you may need to decresse batch size to 3\n",
|
||||
"#@markdown * Remember fast doesn't always mean better\n",
|
||||
"#@markdown * Remember more gradient accumulation (or batch size) doesn't automatically mean better\n",
|
||||
"\n",
|
||||
"Gradient_steps = 1 #@param{type:\"slider\", min:1, max:10, step:1}\n",
|
||||
"Dataset_Location = \"/content/drive/MyDrive/training_samples\" #@param {type:\"string\"}\n",
|
||||
"dataset = Dataset_Location\n",
|
||||
"model = save_name\n",
|
||||
"\n",
|
||||
"#@markdown * Max Epochs to train for, this defines how many total times all your training data is used.\n",
|
||||
"\n",
|
||||
"Max_Epochs = 100 #@param {type:\"slider\", min:0, max:200, step:5}\n",
|
||||
"\n",
|
||||
"#@markdown * How often to save checkpoints.\n",
|
||||
"Save_every_N_epoch = 20 #@param{type:\"integer\"}\n",
|
||||
"\n",
|
||||
"#@markdown You can set your own sample prompts by adding them, one line at a time, to sample_prompts.txt.\n",
|
||||
"#@markdown * Test sample generation steps, how often to generate samples during training.\n",
|
||||
"\n",
|
||||
"#@markdown You can set your own sample prompts by adding them, one line at a time, to `/content/EveryDream2trainer/sample_prompts.txt`. If left empty, it will use the captions from your training images.\n",
|
||||
"\n",
|
||||
"Steps_between_samples = 300 #@param{type:\"integer\"}\n",
|
||||
"\n",
|
||||
|
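The gradient accumulation notes above describe a standard PyTorch pattern: run several micro-batches, scale each loss by the number of accumulation steps, and apply one optimizer update, so Batch_Size = 4 with Gradient_steps = 2 behaves like an effective batch of 8 while using the VRAM of a batch of 4. A generic sketch, not the trainer's actual loop:

    # Sketch: gradient accumulation as a virtual batch size increase.
    import torch

    model = torch.nn.Linear(8, 1)
    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)
    Batch_Size, Gradient_steps = 4, 2   # effective batch = 4 * 2 = 8

    optimizer.zero_grad()
    for step in range(Gradient_steps):
        x = torch.randn(Batch_Size, 8)      # one micro-batch worth of VRAM
        loss = model(x).pow(2).mean()
        (loss / Gradient_steps).backward()  # average grads over the steps
    optimizer.step()                        # one update for the whole group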