hf_text-generation-inference/server/text_generation_server/layers/attention
Latest commit 8f66d323d0 by Mohit Sharma, 2024-12-18 12:44:42 +01:00: Update vllm kernels for ROCM (#2826)
* (vllm) updated vllm rocm kernels
* revert silu
* update partition size
* remove grouped_topk
* (nit) remove log
* update moe-kernels commit
| File | Last commit | Date |
|------|-------------|------|
| __init__.py | Add support for FP8 KV cache scales (#2628) | 2024-10-24 16:36:18 +02:00 |
| common.py | feat: prefill chunking (#2600) | 2024-10-16 12:49:33 +02:00 |
| cuda.py | Remove vLLM dependency for CUDA (#2751) | 2024-11-17 17:34:50 +01:00 |
| flash_attn_triton.py | feat: prefill chunking (#2600) | 2024-10-16 12:49:33 +02:00 |
| flashinfer.py | Add support for FP8 KV cache scales (#2628) | 2024-10-24 16:36:18 +02:00 |
| ipex.py | Add support for FP8 KV cache scales (#2628) | 2024-10-24 16:36:18 +02:00 |
| kv_cache.py | Update vllm kernels for ROCM (#2826) | 2024-12-18 12:44:42 +01:00 |
| rocm.py | Update vllm kernels for ROCM (#2826) | 2024-12-18 12:44:42 +01:00 |
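
The module names suggest one attention implementation per hardware backend (cuda.py, rocm.py, ipex.py, flashinfer.py, flash_attn_triton.py), with shared pieces such as common.py and kv_cache.py. As an illustration only, a dispatcher over such per-backend modules might look like the minimal sketch below; the function name `select_attention_backend` and the `TGI_ATTENTION_BACKEND` environment variable are assumptions for the example, not the repository's actual API.

```python
# Hypothetical sketch: dispatch to a per-backend attention module.
# The module paths mirror the files in this directory; the selection
# logic, function name, and TGI_ATTENTION_BACKEND variable are
# assumptions for illustration, not text-generation-inference's
# actual implementation.
import importlib
import os

_BACKEND_MODULES = {
    "cuda": "text_generation_server.layers.attention.cuda",
    "rocm": "text_generation_server.layers.attention.rocm",
    "ipex": "text_generation_server.layers.attention.ipex",
    "flashinfer": "text_generation_server.layers.attention.flashinfer",
}


def select_attention_backend(name=None):
    """Return the attention module for the requested backend.

    Falls back to the TGI_ATTENTION_BACKEND environment variable,
    then to "cuda" if nothing is set.
    """
    backend = (name or os.environ.get("TGI_ATTENTION_BACKEND", "cuda")).lower()
    if backend not in _BACKEND_MODULES:
        raise ValueError(f"Unknown attention backend: {backend!r}")
    return importlib.import_module(_BACKEND_MODULES[backend])
```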