| Name | Last commit message | Last commit date |
| --- | --- | --- |
| custom_modeling | fix(server): llama v2 GPTQ (#648) | 2023-07-20 15:02:54 +02:00 |
| __init__.py | feat(server): flash attention v2 (#624) | 2023-07-18 16:21:18 +02:00 |
| bloom.py | feat: Add the option to force another dtype than `f16`. (#513) | 2023-06-30 20:30:09 +02:00 |
| causal_lm.py | feat: Add the option to force another dtype than `f16`. (#513) | 2023-06-30 20:30:09 +02:00 |
| flash_causal_lm.py | feat(server): auto max_batch_total_tokens for flash att models (#630) | 2023-07-19 09:31:25 +02:00 |
| flash_llama.py | fix(server): fix llamav2 config (#635) | 2023-07-18 18:49:42 +02:00 |
| flash_neox.py | feat: Add the option to force another dtype than `f16`. (#513) | 2023-06-30 20:30:09 +02:00 |
| flash_rw.py | fix(server): Fixing RW code (it's remote code so the Arch checking doesn't work to see which weights to keep). (#579) | 2023-07-12 09:51:34 +02:00 |
| flash_santacoder.py | feat: Add the option to force another dtype than `f16`. (#513) | 2023-06-30 20:30:09 +02:00 |
| galactica.py | feat: Add the option to force another dtype than `f16`. (#513) | 2023-06-30 20:30:09 +02:00 |
| gpt_neox.py | feat: Add the option to force another dtype than `f16`. (#513) | 2023-06-30 20:30:09 +02:00 |
| model.py | feat(server): auto max_batch_total_tokens for flash att models (#630) | 2023-07-19 09:31:25 +02:00 |
| mpt.py | feat(server): use latest flash attention commit (#543) | 2023-07-04 20:23:55 +02:00 |
| opt.py | feat: Add the option to force another dtype than `f16`. (#513) | 2023-06-30 20:30:09 +02:00 |
| rw.py | feat: Add the option to force another dtype than `f16`. (#513) | 2023-06-30 20:30:09 +02:00 |
| santacoder.py | feat: Add the option to force another dtype than `f16`. (#513) | 2023-06-30 20:30:09 +02:00 |
| seq2seq_lm.py | feat: Add the option to force another dtype than `f16`. (#513) | 2023-06-30 20:30:09 +02:00 |
| t5.py | fix(server): T5 weights names. (#582) | 2023-07-12 10:01:42 +02:00 |
| types.py | feat(server): support vectorized warpers in flash causal lm (#317) | 2023-05-26 12:30:27 +02:00 |