example instead of exmaple
This commit is contained in:
Colai 2023-05-16 10:40:48 +02:00 committed by Victor Hall
parent e14973a9da
commit 4a2e0bebdd
1 changed file with 1 addition and 1 deletion


@ -32,7 +32,7 @@ Using ground truth images for the general purpose of "presevation" will, instead
"Preservation" images and "training" images have no special distinction in EveryDream. All images are treated the same and the trainer does not know the difference. It is all in how you use them.
- Any preservation images still need a caption of some sort. Just "person" may be sufficient, for the sake of this particular exmaple we're just trying to *simulate* Dreambooth. This can be as easy as selecting all the images, F2 rename, type `person_` (with the underscore) and press enter. Windows will append (x) to every file to make sure the filenames are unique, and EveryDream interprets the underscore as the end of the caption when present in the filename, thus all the images will be read as having a caption of simply `person`, similar to how many people train Dreambooth.
+ Any preservation images still need a caption of some sort. Just "person" may be sufficient, for the sake of this particular example we're just trying to *simulate* Dreambooth. This can be as easy as selecting all the images, F2 rename, type `person_` (with the underscore) and press enter. Windows will append (x) to every file to make sure the filenames are unique, and EveryDream interprets the underscore as the end of the caption when present in the filename, thus all the images will be read as having a caption of simply `person`, similar to how many people train Dreambooth.
You could also generate "person" regularization images out of any Stable Diffusion inference application or download one of the premade regularization sets, *but I find this is less than ideal*. For small training, regularization or preservation is simply not needed. For longer term training you're much better off mixing in real "ground truth" images into your data instead of generated data. "Ground truth" meaning images not generated from an AI. Training back on generated data will reinforce the errors in the model, like extra limbs, weird fingers, watermarks, etc. Using real ground truth data can actually help improve the model.
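The Windows F2 rename step described in the changed line can also be done with a short script. This is a minimal sketch, not part of the EveryDream codebase: it assumes the filename convention the doc describes (text before the underscore is read as the caption) and uses a hypothetical folder path and `.jpg` extension for illustration.

```python
# Sketch: batch-rename images so EveryDream reads the caption "person"
# from each filename (everything before the underscore is the caption).
# The folder path and .jpg extension are assumptions for illustration.
from pathlib import Path

def rename_for_caption(folder: str, caption: str = "person") -> list[str]:
    """Rename every .jpg in `folder` to caption_1.jpg, caption_2.jpg, ...

    Mirrors the Windows select-all + F2 behavior described above, where a
    numeric suffix keeps filenames unique while the caption stays constant.
    """
    new_names = []
    for i, img in enumerate(sorted(Path(folder).glob("*.jpg")), start=1):
        target = img.with_name(f"{caption}_{i}{img.suffix}")
        img.rename(target)
        new_names.append(target.name)
    return new_names
```

For example, `rename_for_caption("preservation_images")` would turn `a.jpg` and `b.jpg` into `person_1.jpg` and `person_2.jpg`, all of which EveryDream would caption as simply `person`.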