cafeai
b96eabfe2a
Fix
2022-12-02 01:24:00 +09:00
Anthony Mercurio
138cb7bbed
Merge pull request #54 from harubaru/extended-mode
Adding Extended Mode Functionality
2022-11-30 22:55:30 -07:00
Anthony Mercurio
55a555850d
Remove dangling barrier
2022-11-30 22:52:48 -07:00
Anthony Mercurio
7102d313ac
Synchronize ranks for DDP
2022-11-30 19:05:02 -07:00
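The two commits above concern keeping DDP ranks in lockstep. As a generic illustration (not the trainer's actual code): torch.distributed.barrier() blocks every rank until all ranks reach it, so a barrier that only some ranks execute is "dangling" and deadlocks the job.

```python
import torch.distributed as dist

# Illustrative pattern only; setup_work() is a hypothetical
# rank-0-only task (e.g. downloading a checkpoint).
if dist.get_rank() == 0:
    setup_work()

# Every rank must reach this call, or training hangs. A barrier
# placed inside the rank-0 branch above would be "dangling".
dist.barrier()
```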
cafeai
1074bd6f3b
Bug Fix
2022-12-01 05:10:21 +09:00
cafeai
fb75cbe029
Provide Tokens for Inference
2022-12-01 04:46:01 +09:00
cafeai
981c6ca41a
Cleanup
2022-12-01 04:32:10 +09:00
cafeai
ee281badcd
Extended Mode Updates
2022-12-01 04:31:30 +09:00
Anthony Mercurio
4572617ff9
use ddp for everything
2022-11-30 10:54:30 -07:00
Anthony Mercurio
b0cec788be
Get DDP to work
2022-11-29 22:06:21 -07:00
Anthony Mercurio
8decb0bc7d
Fix distributed training
2022-11-29 18:01:17 -07:00
laksjdjf
5787f7d080
Update diffusers_trainer.py
2022-11-21 13:57:26 +09:00
Carlos Chavez
f2cfe65d09
Move the model to device BEFORE creating the optimizer
> It shouldn’t matter, as the optimizer should hold the references to the parameters (even after moving them). However, the “safer” approach would be to move the model to the device first and create the optimizer afterwards.
https://discuss.pytorch.org/t/should-i-create-optimizer-after-sending-the-model-to-gpu/133418/2
https://discuss.pytorch.org/t/effect-of-calling-model-cuda-after-constructing-an-optimizer/15165
At least in my experience with hivemind, if you initialize the optimizer and move the model afterwards, it throws errors about finding some data on the CPU and other data on the GPU. I believe this shouldn't affect performance either way.
2022-11-20 00:09:35 -05:00
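A minimal sketch of the ordering argued for above, assuming a generic torch module and AdamW rather than the trainer's actual model:

```python
import torch
import torch.nn as nn

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = nn.Linear(768, 768)  # stand-in for the real model

# Move the model to the device first...
model = model.to(device)

# ...then create the optimizer, so the parameters it references
# (and any state allocated for them) already live on that device.
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)
```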
lopho
9916294de1
fix wandb init mode, don't log hf token
correct value for mode ('enabled' is invalid)
clear hf_token passed to wandb to avoid logging it
2022-11-16 22:28:16 +01:00
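A rough sketch of both fixes, assuming a typical wandb.init call; the project name and config keys here are illustrative. wandb accepts the mode values "online", "offline", and "disabled", so "enabled" is rejected, and secrets should be scrubbed from the config before it is logged.

```python
import wandb

config = {"lr": 1e-5, "hf_token": "hf_xxx"}  # illustrative trainer args

# Clear the token so it never shows up in the logged run config.
config["hf_token"] = None

# Valid modes are "online", "offline", and "disabled" -- not "enabled".
run = wandb.init(project="diffusers-trainer", mode="online", config=config)
```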
Anthony Mercurio
dc5849b235
Merge branch 'main' into inference-option
2022-11-16 16:20:57 -05:00
chavinlo
a2772fc668
fixes
2022-11-16 10:55:38 -05:00
chavinlo
fed3431f03
Revert "sync trainer with main branch"
This reverts commit 80e2422967.
2022-11-16 10:44:39 -05:00
Carlos Chavez
80e2422967
sync trainer with main branch
2022-11-16 10:39:20 -05:00
Maw-Fox
015eeae274
Documentation consistency.
2022-11-15 10:34:55 -07:00
Anthony Mercurio
29ffbd645e
Fix noise scheduler
2022-11-15 11:08:38 -05:00
Maw-Fox
6c5b2e7149
Fix of fix
2022-11-15 07:15:18 -07:00
Maw-Fox
2c18d29613
Fix from upstream merge.
2022-11-15 06:42:14 -07:00
Carlos Chavez
d600078008
Add options and local inference
Added options to:
- Disable inference (it consumes about 2 GB of VRAM even when not active)
- Disable wandb
Additionally:
- If no HF token is provided, it is filled with an empty string so the script doesn't complain
- If wandb is not enabled, save the inference outputs to a local folder along with information about them
2022-11-14 22:08:16 -05:00
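A hedged sketch of the local fallback described above; the function name, directory layout, and metadata fields are assumptions, not the trainer's actual interface. It assumes images is a list of PIL images.

```python
import json
import os
from datetime import datetime

def save_inference_locally(images, prompt, step, out_dir="inference_outputs"):
    # Fallback when wandb is disabled: write samples plus metadata locally.
    run_dir = os.path.join(out_dir, f"step_{step}")
    os.makedirs(run_dir, exist_ok=True)
    for i, image in enumerate(images):
        image.save(os.path.join(run_dir, f"sample_{i}.png"))
    # Record what produced these samples alongside the images.
    with open(os.path.join(run_dir, "info.json"), "w") as f:
        json.dump({"prompt": prompt, "step": step,
                   "time": datetime.now().isoformat()}, f, indent=2)
```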
Maw-Fox
773e65f324
Merge origin:main into remote:staging-migration
2022-11-14 19:59:45 -07:00
Maw-Fox
95b9407a3e
Add+config .gitignore (bring back git stage) and fix up documentation.
2022-11-12 18:48:16 -07:00
Maw-Fox
6bd6c6a4ef
Fixed/flipped help text.
2022-11-12 16:11:30 -07:00
Maw-Fox
189f621a1e
Here, let's fix this while we're at it.
2022-11-12 15:47:17 -07:00
lopho
cd0910e82d
Parse booleans in argument parser
true, yes, or 1 correspond to True; anything else is False.
2022-11-12 11:14:48 +01:00
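The behavior described above maps naturally onto a custom argparse type; a minimal sketch (the name bool_t and the flag name are assumptions):

```python
import argparse

def bool_t(x: str) -> bool:
    # "true", "yes", or "1" (case-insensitive) parse as True; anything else as False.
    return x.lower() in ("true", "yes", "1")

parser = argparse.ArgumentParser()
parser.add_argument("--use_ema", type=bool_t, default="False")
print(parser.parse_args(["--use_ema", "yes"]).use_ema)  # True
```

Note that argparse also runs the type converter on string defaults, so default="False" parses to False.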
Maw-Fox
d1eb3ace3f
I lied.
2022-11-11 18:17:29 -07:00
Maw-Fox
6c2d5d8066
Final cleanup.
2022-11-11 18:09:09 -07:00
Maw-Fox
de221ea42e
Derp. ImageStore.__init__ already iterates fully :)
2022-11-11 17:59:32 -07:00
Maw-Fox
925eacf374
Cleanup
2022-11-11 17:50:23 -07:00
Maw-Fox
c12cbfced3
Fixed ref typo.
2022-11-11 17:43:09 -07:00
Maw-Fox
6480336d2c
Cleanup test code.
2022-11-11 17:17:50 -07:00
Maw-Fox
120d406355
Implementation of validation/resize classes.
2022-11-11 17:14:46 -07:00
Anthony Mercurio
624f0f14af
correctly set args
2022-11-11 08:25:27 -07:00
harubaru
1f5b671b67
relicense
2022-11-10 12:59:53 -07:00