Name | Last commit message | Last commit date
---- | ------------------- | ----------------
custom_kernels | All integration tests back everywhere (too many failed CI). (#2428) | 2024-08-16 21:19:46 +02:00
exllama_kernels | Update ROCM libs and improvements (#2579) | 2024-09-30 10:54:32 +02:00
exllamav2_kernels | Update ROCM libs and improvements (#2579) | 2024-09-30 10:54:32 +02:00
tests | feat: prefill chunking (#2600) | 2024-10-16 12:49:33 +02:00
text_generation_server | fix: run pre commit lints | 2024-11-01 12:11:57 -04:00
.gitignore | Impl simple mamba model (#1480) | 2024-02-08 10:19:45 +01:00
Makefile | Switch from fbgemm-gpu w8a8 scaled matmul to vLLM/marlin-kernels (#2688) | 2024-10-25 16:40:47 +02:00
Makefile-awq | chore: add pre-commit (#1569) | 2024-02-16 11:58:58 +01:00
Makefile-eetq | Upgrade EETQ (Fixes the cuda graphs). (#1729) | 2024-04-12 08:15:28 +02:00
Makefile-exllamav2 | Upgrading exl2. (#2415) | 2024-08-14 11:58:08 +02:00
Makefile-flash-att | Hotfixing `make install`. (#2008) | 2024-06-04 23:34:03 +02:00
Makefile-flash-att-v2 | Update ROCM libs and improvements (#2579) | 2024-09-30 10:54:32 +02:00
Makefile-flashinfer | Prefix test - Different kind of load test to trigger prefix test bugs. (#2490) | 2024-09-11 18:10:40 +02:00
Makefile-lorax-punica | Enable multiple LoRa adapters (#2010) | 2024-06-25 14:46:27 -04:00
Makefile-selective-scan | chore: add pre-commit (#1569) | 2024-02-16 11:58:58 +01:00
Makefile-vllm | Update ROCM libs and improvements (#2579) | 2024-09-30 10:54:32 +02:00
README.md | chore: add pre-commit (#1569) | 2024-02-16 11:58:58 +01:00
poetry.lock | Update poetry lock. (#2698) | 2024-10-28 06:11:33 +01:00
pyproject.toml | Switch from fbgemm-gpu w8a8 scaled matmul to vLLM/marlin-kernels (#2688) | 2024-10-25 16:40:47 +02:00
requirements_cuda.txt | feat: natively support Granite models (#2682) | 2024-10-23 10:04:05 +00:00
requirements_intel.txt | feat: natively support Granite models (#2682) | 2024-10-23 10:04:05 +00:00
requirements_rocm.txt | feat: natively support Granite models (#2682) | 2024-10-23 10:04:05 +00:00