| File | Latest commit | Date |
| --- | --- | --- |
| awq/quantize | feat: format code (#1070) | 2023-09-27 12:22:09 +02:00 |
| gptq | fix: fix gpt-q with groupsize = -1 (#1358) | 2023-12-18 16:07:05 +01:00 |
| convert.py | fit for baichuan models (#981) | 2023-09-08 16:51:34 +02:00 |
| dist.py | feat: add cuda memory fraction (#659) | 2023-07-24 11:43:58 +02:00 |
| hub.py | fix: fix offline (#1341) (#1347) | 2023-12-18 10:20:08 +01:00 |
| import_utils.py | Add RoCm support (#1243) | 2023-11-27 14:08:12 +01:00 |
| layers.py | feat: add quant to mixtral (#1337) | 2023-12-12 17:55:03 +01:00 |
| medusa.py | chore: formatting | 2023-12-11 14:49:52 +01:00 |
| paged_attention.py | chore: formatting | 2023-12-11 14:49:52 +01:00 |
| peft.py | chore: formatting | 2023-12-11 14:49:52 +01:00 |
| speculate.py | chore: formatting | 2023-12-11 14:49:52 +01:00 |
| watermark.py | Fixing watermark. (#851) | 2023-08-16 07:17:26 +02:00 |
| weights.py | fix: fix gpt-q with groupsize = -1 (#1358) | 2023-12-18 16:07:05 +01:00 |