# Text Generation Inference Python gRPC Server

A Python gRPC server for Text Generation Inference.

## Install

```shell
make install
```

## Run

```shell
make run-dev
```
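The two make targets above cover the basic development loop. A minimal sketch of running them in sequence — assuming GNU make is available and that the repository's `server/` directory is the current working directory (the comments describe the targets' documented purpose, not their exact recipes):

```shell
# Sketch of the documented workflow, not a definitive setup script.
set -e            # stop on the first failing step

make install      # install the server and its Python dependencies
make run-dev      # start the gRPC server for local development
```

`run-dev` keeps the server in the foreground, so it is typically run in its own terminal while tests or a client run elsewhere.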