colab default tweaks and add wandb
This commit is contained in:
parent b202198e3a
commit 99900d4980
@@ -3,8 +3,8 @@
{
"cell_type": "markdown",
"metadata": {
"id": "view-in-github",
"colab_type": "text"
"colab_type": "text",
"id": "view-in-github"
},
"source": [
"<a href=\"https://colab.research.google.com/github/nawnie/EveryDream2trainer/blob/main/Train_Colab.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
@@ -291,17 +291,16 @@
"#@markdown * Resolution to train at (recommend 512). Higher resolution will require lower batch size (below).\n",
"Resolution = 512 #@param {type:\"slider\", min:256, max:768, step:64}\n",
"\n",
"#@markdown * Batch size is also another \"hyperparamter\" of itself and there are tradeoffs. It may not always be best to use the highest batch size possible. Once of the primary reasons to change it is if you get \"CUDA out of memory\" errors where lowering the value may help.\n",
"#@markdown * Batch size is also another \"hyperparameter\" in itself and there are tradeoffs. It may not always be best to use the highest batch size possible. One of the primary reasons to change it is if you get \"CUDA out of memory\" errors, where lowering the value may help.\n",
"\n",
"#@markdown * Batch size impacts VRAM use. 4 should work on SD1.x models and 3 for SD2.x models at 512 resolution. Lower this if you get CUDA out of memory errors.\n",
"#@markdown * Batch size impacts VRAM use. 8 should work on SD1.x models and 5 for SD2.x models at 512 resolution. Lower this if you get CUDA out of memory errors. You can check resources on your instance and watch the GPU RAM.\n",
"\n",
"Batch_Size = 4 #@param{type: 'number'}\n",
"Batch_Size = 6 #@param{type: 'number'}\n",
"\n",
"#@markdown * Gradient accumulation is sort of like a virtual batch size increase use this to increase batch size with out increasing vram usage\n",
"#@markdown * Increasing this will not have much impact on VRAM use.\n",
"#@markdown * In colab free teir you can expect the fastest proformance from a batch of 4 and a gradient step of 2 giving us a total batch size of 8 at 512 resolution \n",
"#@markdown * Due to bucketng you may need to decresse batch size to 3\n",
"#@markdown * Remember more gradient accumulation (or batch size) doesn't automatically mean better\n",
"#@markdown Increasing from 1 to 2 will have a minor impact on VRAM use, but more beyond that will not.\n",
"#@markdown In Colab free tier you can expect the fastest performance from a batch of 8 and a gradient step of 1.\n",
"#@markdown This is mostly for use if you are training at higher resolution on free tier and cannot increase batch size.\n",
"\n",
"Gradient_steps = 1 #@param{type:\"slider\", min:1, max:10, step:1}\n",
"\n",
@@ -310,7 +309,7 @@
"dataset = Dataset_Location\n",
"model = save_name\n",
"\n",
"#@markdown * Max Epochs to train for, this defines how many total times all your training data is used.\n",
"#@markdown * Max Epochs to train for; this defines how many total times all your training data is used. Default of 100 is a good start if you are training ~30-40 images of one subject. If you have 100 images, you can reduce this to 40-50, and so forth.\n",
"\n",
"Max_Epochs = 100 #@param {type:\"slider\", min:0, max:200, step:5}\n",
"\n",
@@ -324,8 +323,17 @@
"#@markdown Use the steps_between_samples to set how often the samples are generated.\n",
"Steps_between_samples = 300 #@param{type:\"integer\"}\n",
"\n",
"#@markdown * Weights and Biases token.\n",
"\n",
"#@markdown Paste your token here if you have an account so you can use it to track your training progress. If you don't have an account, you can create one for free at https://wandb.ai/site. Logs will use your project name from above.\n",
"wandb_token = '' #@param{type:\"string\"}\n",
"\n",
"#@markdown * That's it! Run the cell!\n",
"\n",
"if wandb_token:\n",
" !wandb login $wandb_token\n",
" wandb_settings = \"--wandb\"\n",
"\n",
"Drive=\"\"\n",
"if Save_to_Gdrive:\n",
" Drive = \"--logdir /content/drive/MyDrive/everydreamlogs --save_ckpt_dir /content/drive/MyDrive/everydreamlogs/ckpt\"\n",
@@ -360,6 +368,7 @@
" $shuffle \\\n",
" $Drive \\\n",
" $DX \\\n",
" $wandb_settings \\\n",
" --amp \\\n",
" --batch_size $Batch_Size \\\n",
" --grad_accum $Gradient_steps \\\n",
@@ -385,8 +394,8 @@
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "Iuoa_1B9jRGU",
"cellView": "form"
"cellView": "form",
"id": "Iuoa_1B9jRGU"
},
"outputs": [],
"source": [
@@ -411,9 +420,20 @@
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"cellView": "form",
"id": "8HmIWtODuE6p"
},
"outputs": [],
"source": [
"#@title Test your Diffusers\n",
"#@markdown Path to the diffusers that was trained\n",
"\n",
"#@markdown You can look in the file drawer on the left /content/drive/MyDrive/everydreamlogs and click the three dots to copy the path\n",
"\n",
"#@markdown ex. /content/drive/MyDrive/everydreamlogs/my_project_20230126-023804/ckpts/interrupted-gs86\n",
"\n",
"diffusers_path=\"\" #@param{type:\"string\"}\n",
"DF=diffusers_path\n",
"PROMPT= \"a photo of an astronaut on the moon\"#@param{type:\"string\"}\n",
@@ -430,20 +450,14 @@
" --prompt \"$PROMPT\" \\\n",
" --steps $Steps \\\n",
" --cfg_scale $cfg "
],
"metadata": {
"cellView": "form",
"id": "8HmIWtODuE6p"
},
"execution_count": null,
"outputs": []
]
}
],
"metadata": {
"accelerator": "GPU",
"colab": {
"provenance": [],
"include_colab_link": true
"include_colab_link": true,
"provenance": []
},
"gpuClass": "standard",
"kernelspec": {