AUTOMATIC1111
9bcfb92a00
rename logging from textual inversion to not confuse it with global logging module
2024-04-21 07:41:28 +03:00
wangshuai09
ba66cf8d69
update
2024-02-22 20:17:10 +08:00
AUTOMATIC1111
4c4d7dd01f
fix whitespace for #13084
2023-09-09 09:15:09 +03:00
AngelBottomless
de5bb4ca88
Fix #13080 - Hypernetwork/TI preview generation
Fixes sampler name reference
Same patch will be done for TI.
2023-09-05 22:35:17 +09:00
AUTOMATIC1111
f0c1063a70
resolve some of the circular import issues for kohaku
2023-08-04 09:13:46 +03:00
AUTOMATIC1111
ac4ccfa136
get attention optimizations to work
2023-07-13 09:30:33 +03:00
Aarni Koskela
44c27ebc73
Use closing() with processing classes everywhere
Follows up on #11569
2023-07-10 20:08:23 +03:00
Aarni Koskela
ba70a220e3
Remove a bunch of unused/vestigial code
As found by Vulture and some eyes
2023-06-05 22:43:57 +03:00
AUTOMATIC
05933840f0
rename print_error to report, use it together with package name
2023-05-31 19:56:37 +03:00
Aarni Koskela
00dfe27f59
Add & use modules.errors.print_error where currently printing exception info by hand
2023-05-29 09:17:30 +03:00
Aarni Koskela
49a55b410b
Autofix Ruff W (not W605) (mostly whitespace)
2023-05-11 20:29:11 +03:00
AUTOMATIC
a5121e7a06
fixes for B007
2023-05-10 11:37:18 +03:00
AUTOMATIC
028d3f6425
ruff auto fixes
2023-05-10 11:05:02 +03:00
AUTOMATIC
f741a98bac
imports cleanup for ruff
2023-05-10 08:43:42 +03:00
AUTOMATIC
1b63afbedc
sort hypernetworks and checkpoints by name
2023-03-28 20:03:57 +03:00
AUTOMATIC1111
dfb3b8f398
Merge branch 'master' into weighted-learning
2023-02-19 12:41:29 +03:00
Shondoit
edb10092de
Add ability to choose whether or not to use weighted loss
2023-02-15 10:03:59 +01:00
Shondoit
bc50936745
Call weighted_forward during training
2023-02-15 10:03:59 +01:00
brkirch
4738486d8f
Support for hypernetworks with --upcast-sampling
2023-02-06 18:10:55 -05:00
AUTOMATIC
81823407d9
add --no-hashing
2023-02-04 11:38:56 +03:00
AUTOMATIC
78f59a4e01
enable compact view for train tab
prevent previews from ruining hypernetwork training
2023-01-22 00:02:51 +03:00
AUTOMATIC
40ff6db532
extra networks UI
rework of hypernets: rather than via settings, hypernets are added directly to the prompt as <hypernet:name:weight>
2023-01-21 08:36:07 +03:00
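A minimal illustration of the prompt syntax introduced by the commit above; the hypernetwork name and weight shown here are hypothetical, not ones shipped with the repository:
    a portrait of a cat, highly detailed <hypernet:anime_style:0.65>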
AUTOMATIC
924e222004
add option to show/hide warnings
removed hiding warnings from LDSR
fixed/reworked a few places that produced warnings
2023-01-18 23:04:24 +03:00
aria1th
13445738d9
Fix tensorboard related functions
2023-01-16 03:02:54 +09:00
aria1th
598f7fcd84
Fix loss_dict problem
2023-01-16 02:46:21 +09:00
AngelBottomless
16f410893e
fix missing 'mean loss' for tensorboard integration
2023-01-16 02:08:47 +09:00
AUTOMATIC
d8b90ac121
big rework of progressbar/preview system to allow multiple users to run prompts at the same time without seeing each other's previews
2023-01-15 18:51:04 +03:00
AUTOMATIC
f9ac3352cb
change hypernets to use sha256 hashes
2023-01-14 10:25:37 +03:00
AUTOMATIC
a95f135308
change hash to sha256
2023-01-14 09:56:59 +03:00
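The two commits above switch hash computation to sha256. A minimal, self-contained sketch of hashing a checkpoint file in chunks (illustrative only, not the repository's implementation; the usage path is hypothetical):
    import hashlib

    def sha256_of_file(path, chunk_size=1 << 20):
        # Hash the file in 1 MiB chunks so large checkpoints are not read into memory at once.
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(chunk_size), b""):
                h.update(chunk)
        return h.hexdigest()

    # hypothetical usage:
    # print(sha256_of_file("models/hypernetworks/example.pt"))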
AUTOMATIC1111
9cd7716753
Merge branch 'master' into tensorboard
2023-01-13 14:57:38 +03:00
Vladimir Mandic
3f43d8a966
set descriptions
2023-01-11 10:28:55 -05:00
aria1th
a4a5475cfa
Variable dropout rate
Implements variable dropout rate from #4549
Fixes hypernetwork multiplier being able to be modified during training; also fixes user errors by setting the multiplier to lower values for training.
Changes function name to match the torch.nn.Module standard
Fixes RNG reset issue when generating previews by restoring RNG state
2023-01-10 14:56:57 +09:00
AUTOMATIC
1fbb6f9ebe
make a dropdown for prompt template selection
2023-01-09 23:35:40 +03:00
dan
72497895b9
Move batchsize check
2023-01-08 02:57:36 +08:00
dan
669fb18d52
Add checkbox for variable training dims
2023-01-08 02:31:40 +08:00
AUTOMATIC
683287d87f
rework saving training params to file #6372
2023-01-06 08:52:06 +03:00
timntorres
b6bab2f052
Include model in log file. Exclude directory.
2023-01-05 09:14:56 -08:00
timntorres
b85c2b5cf4
Clean up ti, add same behavior to hypernetwork.
2023-01-05 08:14:38 -08:00
AUTOMATIC1111
eeb1de4388
Merge branch 'master' into gradient-clipping
2023-01-04 19:56:35 +03:00
Vladimir Mandic
192ddc04d6
add job info to modules
2023-01-03 10:34:51 -05:00
AUTOMATIC1111
b12de850ae
Merge pull request #5992 from yuvalabou/F541
Fix F541: f-string without any placeholders
2022-12-25 09:16:08 +03:00
Vladimir Mandic
5f1dfbbc95
implement train api
2022-12-24 18:02:22 -05:00
Yuval Aboulafia
3bf5591efe
fix F541 f-string without any placeholders
2022-12-24 21:35:29 +02:00
AUTOMATIC1111
c9a2cfdf2a
Merge branch 'master' into racecond_fix
2022-12-03 10:19:51 +03:00
brkirch
4d5f1691dd
Use devices.autocast instead of torch.autocast
2022-11-30 10:33:42 -05:00
flamelaw
1bd57cc979
last_layer_dropout default to False
2022-11-23 20:21:52 +09:00
flamelaw
d2c97fc3fe
fix dropout, implement train/eval mode
2022-11-23 20:00:00 +09:00
flamelaw
89d8ecff09
small fixes
2022-11-23 02:49:01 +09:00
flamelaw
5b57f61ba4
fix pin_memory with different latent sampling method
2022-11-21 10:15:46 +09:00
flamelaw
bd68e35de3
Gradient accumulation, autocast fix, new latent sampling method, etc.
2022-11-20 12:35:26 +09:00
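The commit above adds gradient accumulation to hypernetwork training. A generic, self-contained sketch of the pattern (toy model, data, and step count are assumed; this is not the repository's actual training loop):
    import torch
    from torch import nn

    model = nn.Linear(8, 1)
    optimizer = torch.optim.SGD(model.parameters(), lr=1e-2)
    loss_fn = nn.MSELoss()
    batches = [(torch.randn(4, 8), torch.randn(4, 1)) for _ in range(8)]
    accumulation_steps = 4  # assumed: step the optimizer once every 4 micro-batches
    optimizer.zero_grad()
    for i, (inputs, targets) in enumerate(batches):
        loss = loss_fn(model(inputs), targets)
        (loss / accumulation_steps).backward()  # scale so accumulated grads average over the window
        if (i + 1) % accumulation_steps == 0:
            optimizer.step()
            optimizer.zero_grad()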