| Name | Last commit | Date |
|------|-------------|------|
| custom_kernels | chore: add pre-commit (#1569) | 2024-02-16 11:58:58 +01:00 |
| exllama_kernels | MI300 compatibility (#1764) | 2024-05-17 15:30:47 +02:00 |
| exllamav2_kernels | chore: add pre-commit (#1569) | 2024-02-16 11:58:58 +01:00 |
| marlin | Add support for repacking AWQ weights for GPTQ-Marlin (#2278) | 2024-07-23 13:08:20 +02:00 |
| tests | Improve the handling of quantized weights (#2250) | 2024-07-19 09:37:39 +02:00 |
| text_generation_server | Add support for Llama 3 rotary embeddings (#2286) | 2024-07-23 17:18:54 +02:00 |
| .gitignore | Impl simple mamba model (#1480) | 2024-02-08 10:19:45 +01:00 |
| Makefile | feat(fp8): use fbgemm kernels and load fp8 weights directly (#2248) | 2024-07-20 19:02:04 +02:00 |
| Makefile-awq | chore: add pre-commit (#1569) | 2024-02-16 11:58:58 +01:00 |
| Makefile-eetq | Upgrade EETQ (Fixes the cuda graphs). (#1729) | 2024-04-12 08:15:28 +02:00 |
| Makefile-fbgemm | feat(fp8): use fbgemm kernels and load fp8 weights directly (#2248) | 2024-07-20 19:02:04 +02:00 |
| Makefile-flash-att | Hotfixing `make install`. (#2008) | 2024-06-04 23:34:03 +02:00 |
| Makefile-flash-att-v2 | Softcapping for gemma2. (#2273) | 2024-07-22 18:27:10 +02:00 |
| Makefile-lorax-punica | Enable multiple LoRa adapters (#2010) | 2024-06-25 14:46:27 -04:00 |
| Makefile-selective-scan | chore: add pre-commit (#1569) | 2024-02-16 11:58:58 +01:00 |
| Makefile-vllm | Add support for Deepseek V2 (#2224) | 2024-07-19 17:23:20 +02:00 |
| README.md | chore: add pre-commit (#1569) | 2024-02-16 11:58:58 +01:00 |
| fbgemm_remove_unused.patch | feat(fp8): use fbgemm kernels and load fp8 weights directly (#2248) | 2024-07-20 19:02:04 +02:00 |
| fix_torch90a.sh | feat(fp8): use fbgemm kernels and load fp8 weights directly (#2248) | 2024-07-20 19:02:04 +02:00 |
| poetry.lock | Add support for Llama 3 rotary embeddings (#2286) | 2024-07-23 17:18:54 +02:00 |
| pyproject.toml | Add support for Llama 3 rotary embeddings (#2286) | 2024-07-23 17:18:54 +02:00 |
| requirements_cuda.txt | Add support for Llama 3 rotary embeddings (#2286) | 2024-07-23 17:18:54 +02:00 |
| requirements_intel.txt | Add support for Llama 3 rotary embeddings (#2286) | 2024-07-23 17:18:54 +02:00 |
| requirements_rocm.txt | Add support for Llama 3 rotary embeddings (#2286) | 2024-07-23 17:18:54 +02:00 |