Anthony Mercurio
1a83e470a4
Merge pull request #65 from SY573M404/patch-1
Fix typo in diffusers_trainer.py
2023-02-16 13:02:49 -07:00
SY573M_404
30fea8fdfa
Fix typo in diffusers_trainer.py
One description was used for two CLI flags
2023-02-11 17:28:14 +05:00
Anthony Mercurio
27d301c5b9
Merge pull request #58 from harubaru/text-encoder-updates
FEAT: Text Encoder (CLIP) Training
2022-12-03 09:24:43 -07:00
cafeai
29e7df519b
Fix Gradient Checkpointing
2022-12-03 21:24:38 +09:00
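The "Fix Gradient Checkpointing" commit concerns trading compute for memory: activations are recomputed during the backward pass instead of being stored. A minimal sketch of the pattern using `torch.utils.checkpoint`, the generic PyTorch mechanism (diffusers' UNet wraps the same idea behind `enable_gradient_checkpointing()`); the toy block here stands in for the real model:

```python
import torch
import torch.nn as nn
from torch.utils.checkpoint import checkpoint

# A toy block; activations inside it are recomputed during backward
# instead of being stored, trading compute for memory.
block = nn.Sequential(nn.Linear(16, 64), nn.GELU(), nn.Linear(64, 16))

x = torch.randn(4, 16, requires_grad=True)
# use_reentrant=False is the recommended non-reentrant variant
y = checkpoint(block, x, use_reentrant=False)
loss = y.sum()
loss.backward()  # gradients still flow through the checkpointed block
```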
cafeai
31dd4f6433
Update Samples
2022-12-03 20:49:53 +09:00
cafeai
bf264d0ff0
Update Save Checkpoint
2022-12-03 20:06:46 +09:00
cafeai
34715bcc97
Access Underlying Model
2022-12-03 19:57:00 +09:00
cafeai
3cefb57fc6
fp32 Update
2022-12-03 19:42:50 +09:00
cafeai
18ff256be5
Implement Text Encoder Training
2022-12-03 12:47:40 +09:00
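Training the text encoder jointly means unfreezing it and handing its parameters to the optimizer alongside the UNet's. A sketch of that pattern, with small `nn.Linear` stand-ins instead of the real models (the actual trainer uses diffusers' UNet and transformers' `CLIPTextModel`; the `train_text_encoder` flag is a hypothetical name mirroring a CLI option):

```python
import torch
import torch.nn as nn

# Stand-ins for the real models.
unet = nn.Linear(8, 8)
text_encoder = nn.Linear(8, 8)

train_text_encoder = True  # hypothetical flag

# Freeze or unfreeze the text encoder depending on the flag.
text_encoder.requires_grad_(train_text_encoder)

# Optimize both parameter groups when text-encoder training is enabled.
params = list(unet.parameters())
if train_text_encoder:
    params += list(text_encoder.parameters())
optimizer = torch.optim.AdamW(params, lr=1e-5)

loss = unet(text_encoder(torch.randn(2, 8))).pow(2).mean()
loss.backward()
optimizer.step()
```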
Anthony Mercurio
c709257bec
Merge pull request #56 from harubaru/extended-mode-fix
Extended Mode Typo Fix and Seed Update
2022-12-01 20:47:18 -07:00
cafeai
c4b17e41f1
Add np seed
Just in case!
2022-12-02 01:49:12 +09:00
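The "Add np seed" commit presumably seeds NumPy's RNG alongside Python's and PyTorch's, since each library keeps its own generator; a sketch of the usual helper (the function name is illustrative, not the trainer's):

```python
import random
import numpy as np
import torch

def seed_everything(seed: int) -> None:
    # Seed every RNG the trainer might touch; forgetting numpy's
    # generator is the gap an "np seed" commit typically closes.
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)

seed_everything(42)
a = np.random.rand(3)
seed_everything(42)
b = np.random.rand(3)
assert (a == b).all()  # reseeding reproduces the same draws
```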
cafeai
d225cd57e7
Fix Seed
2022-12-02 01:41:22 +09:00
cafeai
b96eabfe2a
Fix
2022-12-02 01:24:00 +09:00
Anthony Mercurio
138cb7bbed
Merge pull request #54 from harubaru/extended-mode
Adding Extended Mode Functionality
2022-11-30 22:55:30 -07:00
Anthony Mercurio
55a555850d
Remove dangling barrier
2022-11-30 22:52:48 -07:00
Anthony Mercurio
7102d313ac
Synchronize ranks for DDP
2022-11-30 19:05:02 -07:00
cafeai
1074bd6f3b
Bug Fix
2022-12-01 05:10:21 +09:00
cafeai
fb75cbe029
Provide Tokens for Inference
2022-12-01 04:46:01 +09:00
cafeai
981c6ca41a
Cleanup
2022-12-01 04:32:10 +09:00
cafeai
ee281badcd
Extended Mode Updates
2022-12-01 04:31:30 +09:00
Anthony Mercurio
4572617ff9
use ddp for everything
2022-11-30 10:54:30 -07:00
Anthony Mercurio
b0cec788be
Get DDP to work
2022-11-29 22:06:21 -07:00
Anthony Mercurio
8decb0bc7d
Fix distributed training
2022-11-29 18:01:17 -07:00
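The DDP commits above follow the standard PyTorch recipe: initialize a process group, wrap the model in `DistributedDataParallel`, and synchronize ranks with a barrier. A single-process CPU sketch with the gloo backend (world size 1; the address and port are placeholder values, and the real trainer launches one process per GPU):

```python
import os
import torch
import torch.distributed as dist
import torch.nn as nn
from torch.nn.parallel import DistributedDataParallel as DDP

# Single-process, CPU-only sketch (world_size=1, gloo backend).
os.environ.setdefault("MASTER_ADDR", "127.0.0.1")
os.environ.setdefault("MASTER_PORT", "29500")
dist.init_process_group("gloo", rank=0, world_size=1)

model = DDP(nn.Linear(4, 4))
dist.barrier()  # synchronize ranks before proceeding

out = model(torch.randn(2, 4)).sum()
out.backward()  # DDP all-reduces gradients across ranks here

# DDP wraps the network, so the underlying module (e.g. for saving
# checkpoints) is reached through .module.
state = model.module.state_dict()

dist.destroy_process_group()
```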
Anthony Mercurio
5f0a952eff
Merge pull request #48 from laksjdjf/sub
fix --ucg not being set.
2022-11-25 15:02:41 -07:00
laksjdjf
5787f7d080
Update diffusers_trainer.py
2022-11-21 13:57:26 +09:00
Anthony Mercurio
511ee9e6d2
Merge pull request #47 from chavinlo/patch-2
Move the model to device BEFORE creating the optimizer
2022-11-20 10:06:50 -05:00
Carlos Chavez
f2cfe65d09
Move the model to device BEFORE creating the optimizer
> It shouldn't matter, as the optimizer should hold the references to the parameters (even after moving them). However, the "safer" approach would be to move the model to the device first and create the optimizer afterwards.
https://discuss.pytorch.org/t/should-i-create-optimizer-after-sending-the-model-to-gpu/133418/2
https://discuss.pytorch.org/t/effect-of-calling-model-cuda-after-constructing-an-optimizer/15165
At least in my experience with hivemind, if you initialize the optimizer and move the model afterwards, it throws errors about some data being on the CPU and other data on the GPU. This shouldn't affect performance or anything, I believe.
2022-11-20 00:09:35 -05:00
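The ordering the PR recommends can be sketched as follows: move the model to its device first, then construct the optimizer, so the optimizer's parameter references and any state it creates live on the same device (device choice here is a generic fallback, not the trainer's exact code):

```python
import torch
import torch.nn as nn

device = "cuda" if torch.cuda.is_available() else "cpu"

model = nn.Linear(10, 10)
# Move the model to its device first...
model.to(device)
# ...and only then build the optimizer, so its parameter references
# (and the state lazily created on the first step) match that device.
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

out = model(torch.randn(2, 10, device=device)).sum()
out.backward()
optimizer.step()

# The optimizer's running averages sit on the same device as the params.
param_device = next(model.parameters()).device
assert all(s["exp_avg"].device == param_device for s in optimizer.state.values())
```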
Anthony Mercurio
1d1f4022d2
Merge pull request #45 from lopho/patch-2
fix wandb init mode, don't log hf token
2022-11-16 16:32:50 -05:00
lopho
9916294de1
fix wandb init mode, don't log hf token
correct value for mode ('enabled' is invalid)
clear hf_token passed to wandb to avoid logging it
2022-11-16 22:28:16 +01:00
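A hypothetical reconstruction of the two fixes in this commit: `wandb.init` accepts `"online"`, `"offline"`, or `"disabled"` for `mode` ("enabled" is not a valid value), and the HF token is blanked before the args are handed to wandb as the run config (the arg names here are illustrative):

```python
# Illustrative args dict standing in for the parsed CLI arguments.
args = {"lr": 1e-5, "hf_token": "hf_secret", "enable_wandb": True}

# Valid wandb modes are "online", "offline", "disabled".
mode = "online" if args["enable_wandb"] else "disabled"

# Blank the token so it never reaches the run's logged config.
safe_config = dict(args)
safe_config["hf_token"] = ""

# wandb.init(project="waifu-diffusion", mode=mode, config=safe_config)
```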
Anthony Mercurio
c8eeaaf353
Merge pull request #42 from chavinlo/inference-option
Add options and local inference
2022-11-16 16:21:15 -05:00
Anthony Mercurio
dc5849b235
Merge branch 'main' into inference-option
2022-11-16 16:20:57 -05:00
chavinlo
a2772fc668
fixes
2022-11-16 10:55:38 -05:00
chavinlo
fed3431f03
Revert "sync trainer with main branch"
This reverts commit 80e2422967.
2022-11-16 10:44:39 -05:00
Carlos Chavez
80e2422967
sync trainer with main branch
2022-11-16 10:39:20 -05:00
Anthony Mercurio
da4f4b93ab
Merge pull request #43 from Maw-Fox/staging-docfix
Minor fix: Documentation Consistency
2022-11-15 13:30:27 -05:00
Maw-Fox
015eeae274
Documentation consistency.
2022-11-15 10:34:55 -07:00
Anthony Mercurio
29ffbd645e
Fix noise scheduler
2022-11-15 11:08:38 -05:00
Anthony Mercurio
5be5a487b2
Merge pull request #39 from Maw-Fox/staging-migration
Implementation of validation/resize classes
2022-11-15 10:33:47 -05:00
Maw-Fox
6c5b2e7149
Fix of fix
2022-11-15 07:15:18 -07:00
Maw-Fox
2c18d29613
Fix from upstream merge.
2022-11-15 06:42:14 -07:00
Maw-Fox
b3b5523d85
Merge branch 'staging-migration' of https://github.com/maw-fox/waifu-diffusion into staging-migration
2022-11-14 20:15:15 -07:00
Carlos Chavez
d600078008
Add options and local inference
Added options to:
- Disable inference (it consumes about 2 GB of VRAM even when not active)
- Disable wandb
and:
- If no HF token is provided, it is filled with an empty string so the script doesn't complain
- If wandb is not enabled, save the inference outputs to a local folder along with information about them
2022-11-14 22:08:16 -05:00
Maw-Fox
773e65f324
Merge origin:main into remote:staging-migration
2022-11-14 19:59:45 -07:00
Anthony Mercurio
5c205524e5
Merge branch 'main' into staging-migration
2022-11-14 12:31:05 -05:00
Anthony Mercurio
ae561d19f7
Merge pull request #40 from lopho/patch-1
Parse booleans in argument parser
2022-11-14 12:29:31 -05:00
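The pitfall this PR addresses: `argparse` with `type=bool` treats any non-empty string as `True`, so `--flag False` silently enables the flag. The usual fix is a small parsing helper (exact implementation in the PR may differ; the helper and flag names here are illustrative):

```python
import argparse

def bool_t(x: str) -> bool:
    # type=bool would treat any non-empty string as True;
    # parse common spellings explicitly instead.
    if x.lower() in ("true", "yes", "1"):
        return True
    if x.lower() in ("false", "no", "0"):
        return False
    raise argparse.ArgumentTypeError(f"{x!r} is not a boolean")

parser = argparse.ArgumentParser()
parser.add_argument("--use_ema", type=bool_t, default=False)

print(parser.parse_args(["--use_ema", "False"]).use_ema)  # False
print(bool("False"))  # True -- the bug plain type=bool would cause
```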
Maw-Fox
978dd45072
Fix.
2022-11-13 08:24:40 -07:00
Maw-Fox
4943d978c1
Fix redundancies.
2022-11-13 08:22:44 -07:00
Maw-Fox
95b9407a3e
Add+config .gitignore (bring back git stage) and fix up documentation.
2022-11-12 18:48:16 -07:00
Maw-Fox
6bd6c6a4ef
Fixed/flipped help text.
2022-11-12 16:11:30 -07:00
Maw-Fox
189f621a1e
Here, let's fix this while we're at it.
2022-11-12 15:47:17 -07:00