{
"cells": [
{
"cell_type": "markdown",
"metadata": {
"id": "view-in-github",
"colab_type": "text"
},
"source": [
"<a href=\"https://colab.research.google.com/github/nawnie/EveryDream/blob/main/EveryDream_Tools.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "uJfwih4wAVgw"
},
"source": [
"# Please read the documentation here before you start.\n",
"\n",
"I suggest reading this doc before you connect to your runtime to avoid using credits or being charged while you figure it out.\n",
"\n",
"[Auto Captioning Readme](doc/AUTO_CAPTION.md)\n",
"\n",
"This notebook requires an Nvidia GPU instance. Any will do; you don't need anything powerful. As little as 4GB should be fine.\n",
"\n",
"Only Colab has automatic file transfers at this time. If you are using another platform, you will need to manually download your output files."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "lWGx2LuU8Q_I"
},
"outputs": [],
"source": [
"# Download the repo\n",
"!git clone https://github.com/victorchall/EveryDream.git\n",
"# Set working directory\n",
"%cd EveryDream"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "RJxfSai-8pkD"
},
"outputs": [],
"source": [
"# Install requirements\n",
"!pip install torch=='1.12.1+cu113' 'torchvision==0.13.1+cu113' --extra-index-url https://download.pytorch.org/whl/cu113\n",
"!pip install \"pandas>=1.3.5\"\n",
"!git clone https://github.com/salesforce/BLIP scripts/BLIP\n",
"!pip install timm\n",
"!pip install fairscale=='0.4.4'\n",
"!pip install transformers=='4.19.2'\n",
"!pip install aiofiles"
]
},
{
"cell_type": "markdown",
"source": [
"# Extract frames from video\n",
"\n",
"Here we will use the folder input_vid; upload your videos there the same way we upload images."
],
"metadata": {
"id": "huQSI8Y-Bboz"
}
},
{
"cell_type": "code",
"source": [
"!python scripts/extract_video_frames.py \\\n",
"--vid_dir input_vid \\\n",
"--out_dir output/vid \\\n",
"--format png \\\n",
"--interval 10"
],
"metadata": {
"id": "RDuBL4k8Avz-"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"Move the extracted frames to the input directory for captioning."
],
"metadata": {
"id": "iqcUzcRuCTLR"
}
},
{
"cell_type": "code",
"source": [
"!cp -r output/vid input"
],
"metadata": {
"id": "Uv8wAHSQAvrm"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"metadata": {
"id": "sbeUIVXJ-EVf"
},
"source": [
"# Upload your input images into the EveryDream/input folder\n",
"\n",
"![upload to input](https://github.com/victorchall/EveryDream/blob/main/demo/upload_images_caption.png?raw=1)"
]
},
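{
"cell_type": "code",
"source": [
"# Optional: a minimal sketch of uploading through code instead of the file browser.\n",
"# This assumes you are running on Colab (google.colab.files is Colab-only); it opens\n",
"# a browser file picker and moves the uploaded files into the input folder.\n",
"import os\n",
"import shutil\n",
"from google.colab import files\n",
"\n",
"os.makedirs(\"input\", exist_ok=True)\n",
"uploaded = files.upload()  # uploaded files land in the current working directory\n",
"for name in uploaded:\n",
"    shutil.move(name, os.path.join(\"input\", name))\n",
"print(f\"Moved {len(uploaded)} file(s) into input/\")"
],
"metadata": {},
"execution_count": null,
"outputs": []
},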
{
"cell_type": "markdown",
"metadata": {
"id": "bscWH13SAVgz"
},
"source": [
"## Please read the documentation for information on the parameters\n",
"\n",
"[Auto Captioning](doc/AUTO_CAPTION.md)\n",
"\n",
"*You cannot have commented lines between uncommented lines. If you uncomment a line below, move it above any other commented lines.*\n",
"\n",
"*!python must remain the first line.*\n",
"\n",
"Default params should work fairly well."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "4TAICahl-RPn"
},
"outputs": [],
"source": [
"!python scripts/auto_caption.py \\\n",
"--img_dir input \\\n",
"--out_dir output \\\n",
"#--format mrwho \\\n",
"#--min_length 34 \\\n",
"#--q_factor 1.3 \\\n",
"#--nucleus \\\n",
"\n",
"## Multiple folders can be targeted in succession\n",
"\n",
"#!python scripts/auto_caption.py \\\n",
"#--img_dir input/subfolder \\\n",
"#--out_dir output/subfolder \\\n",
"#--format mrwho \\\n",
"#--min_length 34 \\\n",
"#--q_factor 1.3 \\\n",
"#--nucleus \\"
]
},
{
"cell_type": "markdown",
"source": [
"# LAION Downloader\n",
"\n",
"* --laion_dir: directory with LAION parquet files, default is ./laion\n",
"\n",
"* --search_text: CSV of words with AND logic, ex: \"photo,man,dog\"\n",
"\n",
"* --out_dir: directory to download files to; I've defaulted this to input so the images can be captioned\n",
"\n",
"* --log_dir: directory for logs; if omitted, nothing is logged. Logs may be large!\n",
"\n",
"* --column: column to search for matches, default is 'TEXT', but you could use 'URL' if you wanted\n",
"\n",
"* --limit: max number of matching images to download; warning: may be slightly imprecise due to concurrency and HTTP errors, default is 100\n",
"\n",
"* --min_hw: min height AND width of image to download, default is 512\n",
"\n",
"* --force: forces a full download of all images, even if no search is provided, USE CAUTION!\n",
"\n",
"* --parquet_skip: skips the first n parquet files on disk, useful to resume\n",
"\n",
"* --verbose: additional logging of URL and TEXT\n",
"\n",
"* --test: skips downloading, for checking filters, use with \"--verbose\"\n"
],
"metadata": {
"id": "wY2f2LkPGSVa"
}
},
{
"cell_type": "code",
"source": [
"!python scripts/download_laion.py \\\n",
"--laion_dir ./laion \\\n",
"--search_text \"photo,man,dog\" \\\n",
"#--out_dir input \\\n",
"#--log_dir logs \\\n",
"#--column TEXT \\\n",
"#--limit 100 \\\n",
"#--min_hw 512 \\\n",
"#--force \\\n",
"#--parquet_skip 0 \\\n",
"#--verbose \\\n",
"#--test \\\n"
],
"metadata": {
"id": "cxw60TTmEy2C"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"Here we can take our now-captioned images and replace generic terms with our subject.\n",
"\n",
"* --find: the word to search for, in this case \"man\"\n",
"\n",
"* --replace: what to replace the found word with, in this case \"VictorChall\"\n",
"\n",
"* --append_only: adds a tag at the end instead of replacing (see the extra example cell below)"
],
"metadata": {
"id": "EBdLelNpDjYc"
}
},
{
"cell_type": "code",
"source": [
"!python scripts/filename_replace.py \\\n",
"--img_dir output \\\n",
"--find \"man\" \\\n",
"--replace \"VictorChall\""
],
"metadata": {
"id": "6Y1md3OHAvhw"
},
"execution_count": null,
"outputs": []
},
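{
"cell_type": "code",
"source": [
"# A hedged sketch of --append_only, described in the list above: append a tag\n",
"# instead of replacing a word. The exact value format here is an assumption\n",
"# drawn from the parameter description; see doc/AUTO_CAPTION.md before relying on it.\n",
"#!python scripts/filename_replace.py \\\n",
"#--img_dir output \\\n",
"#--append_only \"by VictorChall\""
],
"metadata": {},
"execution_count": null,
"outputs": []
},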
{
"cell_type": "markdown",
"source": [
"Now we can choose to create text files based on our file names. This is useful for images with very long descriptions or tag lists: Windows has a 256-character limit, and files with longer names will not transfer correctly to a Windows program. Moving these files inside a zip, however, is fine and causes no issues.\n"
],
"metadata": {
"id": "W0MspWmXJQuc"
}
},
{
"cell_type": "code",
"source": [
"!python scripts/createtxtfromfilename.py"
],
"metadata": {
"id": "BpvenvyQJr9b"
},
"execution_count": null,
"outputs": []
},
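{
"cell_type": "code",
"source": [
"# A minimal sketch of the idea behind scripts/createtxtfromfilename.py, not the\n",
"# script itself (the real tool may differ): for each image in output/, write the\n",
"# filename stem (the caption) into a sidecar .txt file next to the image.\n",
"from pathlib import Path\n",
"\n",
"img_dir = Path(\"output\")  # assumed location of the captioned images\n",
"for img in img_dir.iterdir():\n",
"    if img.suffix.lower() in {\".png\", \".jpg\", \".jpeg\", \".webp\"}:\n",
"        img.with_suffix(\".txt\").write_text(img.stem)"
],
"metadata": {},
"execution_count": null,
"outputs": []
},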
{
"cell_type": "markdown",
"source": [
"Compress our images"
],
"metadata": {
"id": "boVkDsiWJ_-P"
}
},
{
"cell_type": "code",
"source": [
"!python scripts/compress_img.py \\\n",
"--img_dir output \\\n",
"--out_dir output/compressed_images \\\n",
"--max_mp 1.5\n",
"#--overwrite False \\\n",
"#--Quality 95 \\\n",
"#--noresize False \\\n",
"#--delete \\"
],
"metadata": {
"id": "F6QYfylhKAII"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"metadata": {
"id": "HBrWnu1C_lN9"
},
"source": [
"## Download your dataset from EveryDream/output\n",
"\n",
"If you're on Colab, you can use the cell below to push your output to your Google Drive."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "ldW2sDLcAVgz"
},
"outputs": [],
"source": [
"from google.colab import drive\n",
"drive.mount('/content/drive')\n",
"\n",
"!mkdir -p /content/drive/MyDrive/Auto_Data_sets\n",
"!cp -r output/ /content/drive/MyDrive/Auto_Data_sets"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "B-HFqbP4AVgz"
},
"source": [
"## If not on Colab/Gdrive, the following will zip up your files for download\n",
"\n",
"You'll still need to use your runtime's own download feature to download the zip.\n",
"\n",
"![output zip](https://github.com/victorchall/EveryDream/blob/main/demo/output_zip.png?raw=1)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "SVa80mrKAVg0"
},
"outputs": [],
"source": [
"!mkdir -p output/zip\n",
"\n",
"!zip -r output/zip/output.zip output"
]
}
],
"metadata": {
"colab": {
"provenance": [],
"include_colab_link": true
},
"kernelspec": {
"display_name": "Python 3.10.5 ('.venv': venv)",
"language": "python",
"name": "python3"
},
"language_info": {
"name": "python",
"version": "3.10.5"
},
"vscode": {
"interpreter": {
"hash": "faf4a6abb601e3a9195ce3e9620411ceec233a951446de834cdf28542d2d93b4"
}
}
},
"nbformat": 4,
"nbformat_minor": 0
}