Update PLUGINS.md

update plugin docs
Victor Hall 2023-12-20 16:46:44 -05:00 committed by GitHub
parent 1119c2130f
commit 7a81182220
1 changed file with 18 additions and 1 deletion

@@ -1,6 +1,12 @@
# Plugin support
This is a very early and evolving feature, but users who have a need to extend behavior can now do so with plugin loading.
This is a very early and evolving feature, but users who need to extend behavior can now do so with plugin loading, without having to edit the main training software.
This allows developers to experiment without having to manage branches, and to maintain very custom or narrow-use-case behaviors that would otherwise clutter the primary software.
Not everything is necessarily possible or convenient with this plugin system, but it should handle a substantial amount of experimental, unproven, or narrow-use-case functionality. For instance, one could invent a nearly infinite number of ways to shuffle captions, but adding dozens of arguments to the main training script for these is simply inappropriate and leads to cluttered code and user confusion. These should instead be implemented as plugins.
Plugins are also a good entry point for people who want to get their feet wet making changes. Often the context is small enough that a tool like ChatGPT or your own local LLM can write these for you if you can write reasonable requirements.
## Plugin creation
@@ -39,3 +45,14 @@ Could be useful for things like customized shuffling algorithms, word replacemen
#### transform_pil_image(self, img:Image)
Could be useful for things like color grading, gamma adjustment, HSL modifications, etc. Note that AFTER this function runs, the image is converted to numpy format and normalized (std_dev=0.5, norm=0.5) per the reference implementation in Stable Diffusion, so normalizing in your plugin is wasted compute. From prior experimentation, all adjustments to this normalization scheme degrade the model's output, so they are a waste of time and the normalization has been hardcoded. Gamma or curve adjustments are still potentially useful, as are hue and saturation changes.
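As a rough illustration, here is a minimal sketch of a plugin that applies a fixed gamma curve in `transform_pil_image`. The import path for `BasePlugin` and the gamma value are assumptions for this example; check the plugins module in this repo for the actual location.
```python
# Minimal sketch: a gamma-adjustment plugin. The BasePlugin import path is an
# assumption for this example; check the repo's plugins module.
from PIL import Image

from plugins.plugins import BasePlugin  # assumed import path


class GammaAdjustPlugin(BasePlugin):
    GAMMA = 0.9  # illustrative fixed gamma, not a recommended value

    def transform_pil_image(self, img: Image.Image) -> Image.Image:
        # Build a 256-entry lookup table for the gamma curve and apply it per
        # channel. This runs BEFORE the trainer converts the image to a
        # normalized numpy array, so no normalization is attempted here.
        lut = [round(((i / 255.0) ** self.GAMMA) * 255) for i in range(256)]
        return img.convert("RGB").point(lut * 3)
```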
## Adding hooks
Additional hooks may be added to the core trainer to allow plugins to run at certain points in training or to transform certain things during training. Running the plugins themselves is typically not a performance concern, so adding hooks by itself is not going to cause problems.
PluginRunner is the class that loads and manages all the loaded plugins and calls the hook for each of them at runtime. The `plugin_runner` instance of this class is created in the main trainer script, and you may need to inject it elsewhere depending on what context is required for your hook execution.
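For a transform-style hook, the dispatch pattern is just a loop that threads the value being transformed through each plugin. The simplified class below is a sketch of that idea, not the repo's actual PluginRunner, and the method name is illustrative.
```python
# Simplified sketch of the dispatch pattern described above; the real
# PluginRunner in this repo has more to it (loading, error handling, etc.).
from PIL import Image


class PluginRunnerSketch:
    def __init__(self, plugins: list):
        self.plugins = plugins

    def run_transform_pil_image(self, img: Image.Image) -> Image.Image:
        # Each plugin sees the output of the previous one.
        for plugin in self.plugins:
            img = plugin.transform_pil_image(img)
        return img
```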
To add a new hook (a combined sketch follows these steps):
1. Edit the BasePlugin class to add the hook function. Define the function and implement it as a no-op, either using `pass` or simply returning the thing to be transformed with no transformation.
2. Edit the PluginRunner class to add the function that will loop over all the plugins and call the hook function defined in step 1.
3. Edit the main training software to call `plugin_runner.your_runner_loop_fn(...)` as defined in step 2.
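As a combined sketch of the three steps, suppose you wanted a hypothetical hook that fires after a checkpoint is saved. The hook name, class internals, and call site below are illustrative assumptions, not the repo's actual code; in practice you would edit the existing BasePlugin and PluginRunner classes rather than redefine them.
```python
# Combined sketch of the three steps using a hypothetical on_checkpoint_saved
# hook; all names below are illustrative, not the repo's code.

# Step 1: add a no-op hook function to BasePlugin.
class BasePlugin:
    def on_checkpoint_saved(self, save_path: str):
        pass  # no-op default so existing plugins are unaffected


# Step 2: add a runner loop to PluginRunner that calls the hook on every plugin.
class PluginRunner:
    def __init__(self, plugins: list):
        self.plugins = plugins

    def run_on_checkpoint_saved(self, save_path: str):
        for plugin in self.plugins:
            plugin.on_checkpoint_saved(save_path)


# Step 3: call the runner loop from the main training script at the right point.
plugin_runner = PluginRunner(plugins=[])
plugin_runner.run_on_checkpoint_saved("ckpts/last-save")  # e.g. right after saving
```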