| File | Last commit | Date |
|------|-------------|------|
| `__init__.py` | Add support for FP8 KV cache scales (#2628) | 2024-10-24 16:36:18 +02:00 |
| `common.py` | feat: prefill chunking (#2600) | 2024-10-16 12:49:33 +02:00 |
| `cuda.py` | Remove vLLM dependency for CUDA (#2751) | 2024-11-17 17:34:50 +01:00 |
| `flash_attn_triton.py` | feat: prefill chunking (#2600) | 2024-10-16 12:49:33 +02:00 |
| `ipex.py` | Add support for FP8 KV cache scales (#2628) | 2024-10-24 16:36:18 +02:00 |
| `kv_cache.py` | Update vllm kernels for ROCM (#2826) | 2024-12-18 12:44:42 +01:00 |
| `rocm.py` | Update vllm kernels for ROCM (#2826) | 2024-12-18 12:44:42 +01:00 |