Victor Hall
4c3de29d43
kosmos update and docs for loss_scale.txt
2023-11-03 13:50:49 -04:00
Victor Hall
30b063dfec
Merge pull request #235 from damian0815/feat_negative_loss
...
add loss_scale.txt
2023-11-03 13:36:36 -04:00
Victor Hall
164f635c6f
dtype added for kosmos-2 captions
2023-11-02 23:24:00 -04:00
Victor Hall
24a692801a
update kosmos2 caption to use cuda by default
2023-11-02 22:53:10 -04:00
Victor Hall
4fb64fed66
updating reqs
2023-11-02 21:54:33 -04:00
Victor Hall
3150f7d299
add kosmos-2 caption script and update diffusers and transformers to latest
2023-11-02 21:54:33 -04:00
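For context, a minimal sketch of Kosmos-2 captioning with the transformers API that the entries above build on, run on CUDA with an explicit dtype. This is an illustration under assumptions (model id, prompt, and image path are placeholders), not the repository's caption script.

    import torch
    from PIL import Image
    from transformers import AutoProcessor, AutoModelForVision2Seq

    # illustration only: load Kosmos-2 on CUDA with an explicit dtype
    device, dtype = "cuda", torch.float16
    model_id = "microsoft/kosmos-2-patch14-224"
    model = AutoModelForVision2Seq.from_pretrained(model_id, torch_dtype=dtype).to(device)
    processor = AutoProcessor.from_pretrained(model_id)

    image = Image.open("example.jpg")  # placeholder image path
    inputs = processor(text="<grounding>An image of", images=image, return_tensors="pt").to(device)
    inputs["pixel_values"] = inputs["pixel_values"].to(dtype)  # match the model dtype

    generated_ids = model.generate(
        pixel_values=inputs["pixel_values"],
        input_ids=inputs["input_ids"],
        attention_mask=inputs["attention_mask"],
        image_embeds=None,
        image_embeds_position_mask=inputs["image_embeds_position_mask"],
        max_new_tokens=128,
    )
    text = processor.batch_decode(generated_ids, skip_special_tokens=True)[0]
    caption, _entities = processor.post_process_generation(text)
    print(caption)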
Damian Stewart
86aaf1c4d7
fix bug when loss_scale.txt contains 0
2023-11-01 19:00:08 +01:00
Damian Stewart
c485d4ea60
fix device mismatch with loss_scale
2023-11-01 09:29:41 +01:00
Damian Stewart
a7343ad190
fix scale batch calculation
2023-11-01 08:11:42 +01:00
Damian Stewart
da731268b2
put a file loss_scale.txt containing a float in a training folder to apply a loss scale (e.g. -1 for negative examples)
2023-10-31 10:06:21 +01:00
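A minimal sketch of the usage this commit describes; the folder path below is a placeholder.

    # mark every image in this training folder as a negative example:
    # a loss_scale.txt containing -1 multiplies the loss for that folder by -1
    from pathlib import Path
    Path("/data/train/negatives/loss_scale.txt").write_text("-1")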
Victor Hall
2f2dd4c1f2
Merge pull request #229 from luisgabrielroldan/fix-undersize-warning
...
Fix Inaccuracies in Image Size Validation and Warning Display
2023-10-07 15:25:25 -04:00
Victor Hall
387616083c
Merge pull request #228 from luisgabrielroldan/fix-save-ckpt-dir
...
Fix save_ckpt_dir not being honored when saving model
2023-10-07 14:34:52 -04:00
Victor Hall
81807b6cab
Merge pull request #227 from timlawrenz/patch-1
...
Update CAPTION.md
2023-10-07 14:32:16 -04:00
Gabriel Roldán
60e10867bc
Fix undersize warning
2023-10-05 01:48:09 -03:00
Gabriel Roldán
f301677881
Fix save_ckpt_dir not being honored when saving model
2023-10-01 00:09:24 -03:00
Tim Lawrenz
8f9d23cf3b
Update CAPTION.md
...
minor typo
2023-09-27 08:17:01 -04:00
Victor Hall
e8e4f0c2ea
Merge pull request #214 from luisgabrielroldan/keep_tags
...
Add --keep_tags to keep first N tags fixed on shuffle
2023-09-25 13:10:21 -04:00
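An illustrative sketch of the shuffle behaviour --keep_tags N describes (the first N comma-separated tags stay fixed, the rest are shuffled); this is a hypothetical helper, not the repository's implementation.

    import random

    def shuffle_tags(caption: str, keep_tags: int = 0) -> str:
        tags = [t.strip() for t in caption.split(",")]
        keep_tags = max(keep_tags, 0)  # negative values are ignored (see commit 99a0431d0f below)
        fixed, rest = tags[:keep_tags], tags[keep_tags:]
        random.shuffle(rest)
        return ", ".join(fixed + rest)

    # "portrait of a woman" always stays first; the remaining tags are shuffled
    print(shuffle_tags("portrait of a woman, red hair, smiling, outdoors", keep_tags=1))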
Victor Hall
85b5bcc6e4
Update Train_Colab.ipynb
2023-09-25 12:34:13 -04:00
Victor Hall
3a2485eac5
fix save_every_n_epoch bug in colab
2023-09-25 12:15:29 -04:00
Victor Hall
8064c29066
add safetensors to colab req
2023-09-25 12:01:35 -04:00
Victor Hall
9e820c8cce
doc for lion8bit and prodigy
2023-09-22 16:02:22 -04:00
Victor Hall
78a4c13c4c
add prodigyopt requirements
2023-09-22 13:46:25 -04:00
Victor Hall
5444abfd7b
finalize lion8bit and add prodigy
2023-09-22 12:15:32 -04:00
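For reference, a hedged sketch of instantiating the two optimizers these entries add, using the bitsandbytes and prodigyopt packages directly; the model and hyperparameter values are illustrative stand-ins, not the trainer's defaults.

    import torch
    import bitsandbytes as bnb
    from prodigyopt import Prodigy

    model = torch.nn.Linear(8, 8)  # stand-in for the trainable parameters

    # 8-bit Lion from bitsandbytes (keeps optimizer states in 8-bit to save VRAM)
    lion8bit = bnb.optim.Lion8bit(model.parameters(), lr=1e-6, betas=(0.9, 0.99), weight_decay=1e-2)

    # Prodigy adapts its own step size; lr is typically left at 1.0
    prodigy = Prodigy(model.parameters(), lr=1.0, weight_decay=1e-2)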
Victor Hall
a9c98f5866
bugged flag
2023-09-22 10:16:56 -04:00
Victor Hall
166c2e74e1
off by one on last epoch save
2023-09-21 21:29:36 -04:00
Victor Hall
ada6037463
add bnb lion8bit support
2023-09-21 13:48:40 -04:00
Victor Hall
a47d65799f
early work on pinned image tensor
2023-09-21 13:48:40 -04:00
Gabriel Roldán
f2d5a40f72
Add --keep_tags docs and fix some typos
2023-09-20 19:57:04 -03:00
Gabriel Roldán
99a0431d0f
Ignore negative keep_tags values
2023-09-20 19:53:31 -03:00
Gabriel Roldán
43984f2ad3
Add --keep_tags to keep first N tags fixed on shuffle
2023-09-20 19:53:30 -03:00
Victor Hall
6c8d15daab
Merge pull request #209 from damian0815/feat_rolling_save
...
Feature: Rolling save ckpt
2023-09-20 16:32:48 -04:00
Victor Hall
09aa13c3dd
Merge branch 'main' into feat_rolling_save
2023-09-20 16:32:37 -04:00
Victor Hall
2dff3aa8d1
ema update
2023-09-18 16:13:22 -04:00
Victor Hall
303c8312e3
update ema sample args again
2023-09-18 16:12:51 -04:00
Victor Hall
29bab698a3
minor update to ema docs
2023-09-18 15:07:39 -04:00
Victor Hall
2f52832209
fix trainSD21.json and advanced tweaking ema param names
2023-09-18 14:54:43 -04:00
Victor Hall
e4d93225f7
fix train21.json ema param names
2023-09-18 14:53:17 -04:00
Damian Stewart
a68ebe3658
fix typo
2023-09-17 20:17:54 +02:00
Damian Stewart
3579bec540
cleanup
2023-09-17 20:17:33 +02:00
Damian Stewart
74c21602fc
default every epoch not 3x per epoch
2023-09-17 20:16:29 +02:00
Damian Stewart
3fddef3698
put back make_save_path and fix error in plugin runner
2023-09-17 20:16:26 +02:00
Victor Hall
fa5b38e26b
some minor updates to ema
2023-09-12 21:37:27 -04:00
Victor Hall
49de395df1
Merge pull request #224 from a-l-e-x-d-s-9/main
...
EMA decay, min-SNR-gamma, settings loading every epoch, zero terminal SNR separation from ZFN.
2023-09-12 15:36:27 -04:00
alexds9
7259ce873b
1. Samples format changed to make sure the global step appears before the "ema" indication.
2023-09-11 00:13:26 +03:00
alexds9
39b3082bf4
1. Making sure to release VRAM in samples.
2023-09-10 22:42:01 +03:00
alexds9
d2d493c911
1. New parameters added to train.json and trainSD21.json - disabled by default.
...
2. Description added to ADVANCED_TWEAKING.md
2023-09-10 20:06:50 +03:00
alexds9
5b1760fff2
1. Added an argument ema_decay_resume_model to load an EMA model; it is loaded alongside the main model instead of being copied from it. It is optional: without a loaded EMA model, the regular model is copied to become the first EMA model, just like before.
...
2. Fixed the findlast option so that regular models do not load EMA models by default.
3. findlast can also be used to load an EMA model when combined with ema_decay_resume_model.
4. Added an ema_device variable to store the device as a torch type.
5. Cleaned up prints and comments.
2023-09-07 19:53:20 +03:00
alexds9
cf4a082e11
1. Fix for EMA samples arguments not respecting a False value.
2023-09-06 23:04:12 +03:00
alexds9
5bcf9407f0
1. Improved EMA support: sample generation with EMA/non-EMA arguments, saving checkpoints and diffusers for both, and ema_decay_target implemented.
...
2. enable_zero_terminal_snr separated from zero_frequency_noise_ratio.
2023-09-06 22:37:10 +03:00
alexds9
23df727a1f
Added support for:
...
1. EMA decay. An EMA copy of the model is updated every ema_decay_interval steps by (1 - ema_decay_rate); it can be stored on the CPU to save VRAM. Only the EMA model is saved now.
2. min_snr_gamma - improves convergence speed; more info: https://arxiv.org/abs/2303.09556
3. load_settings_every_epoch - reloads 'train.json' at the start of every epoch.
2023-09-06 13:38:52 +03:00
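A minimal sketch of the EMA update rule this entry describes: every ema_decay_interval steps the EMA copy moves toward the live weights by (1 - ema_decay_rate), and the copy may live on the CPU to save VRAM. Names follow the commit message, not necessarily the final train.json keys, which later commits renamed; this is not the repository's code.

    import torch

    @torch.no_grad()
    def ema_update(ema_model, model, ema_decay_rate: float = 0.9999):
        # ema = decay * ema + (1 - decay) * current; call every ema_decay_interval steps
        for ema_p, p in zip(ema_model.parameters(), model.parameters()):
            ema_p.mul_(ema_decay_rate).add_(p.to(ema_p.device, ema_p.dtype), alpha=1.0 - ema_decay_rate)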