huggingface/text-generation-inference/server
Latest commit: Add support for exl2 quantization
Daniël de Kok · 36dd16017c · 2024-05-30 11:28:05 +02:00

Mostly straightforward changes to existing code:

* Wrap the quantizer parameters in a small wrapper class to avoid
  passing around untyped tuples and repacking them as a dict
  (see the first sketch below).
* Move scratch-space computation to warmup, because we need the
  maximum input sequence length to avoid allocating huge scratch
  buffers that cause OOMs (see the second sketch below).
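
To make the first bullet concrete, here is a minimal sketch of such a
wrapper. It is illustrative only, not the actual TGI code: the class
name QuantizerParams and the field names (qweight, qzeros, scales,
g_idx, bits, groupsize) are assumptions based on common GPTQ/exl2
checkpoint layouts.

    # Minimal sketch of a typed wrapper for quantizer parameters.
    # All identifiers below are illustrative, not real TGI names.
    from dataclasses import dataclass
    from typing import Optional

    import torch

    @dataclass
    class QuantizerParams:
        """Replaces an untyped (qweight, qzeros, scales, ...) tuple."""

        qweight: torch.Tensor
        qzeros: torch.Tensor
        scales: torch.Tensor
        g_idx: Optional[torch.Tensor] = None
        bits: int = 4
        groupsize: int = -1

Call sites can then pass a single object through the layer stack and
read fields by name, instead of unpacking a tuple and rebuilding a
dict at every boundary.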
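
For the second bullet, a hedged sketch of sizing scratch space during
warmup, once the maximum input length is actually known. The function
signature, the sizing formula, and the exl2_scratch attribute are all
assumptions made for illustration:

    # Hypothetical sketch: allocate exl2 scratch space at warmup,
    # when max_input_length is known, instead of guessing at load
    # time with a pessimistic default (which is what could OOM).
    import torch

    def warmup(model, max_batch_size: int, max_input_length: int) -> None:
        # Upper bound on tokens in flight across a batch.
        max_tokens = max_batch_size * max_input_length
        elem_size = 2  # fp16
        scratch_bytes = max_tokens * model.config.hidden_size * elem_size
        model.exl2_scratch = torch.empty(
            scratch_bytes, dtype=torch.uint8, device="cuda"
        )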
Name                     Last commit                         Date
custom_kernels/
exllama_kernels/
exllamav2_kernels/
tests/
text_generation_server/  Add support for exl2 quantization   2024-05-30 11:28:05 +02:00
.gitignore
Makefile
Makefile-awq
Makefile-eetq
Makefile-flash-att
Makefile-flash-att-v2
Makefile-selective-scan
Makefile-vllm
README.md
poetry.lock              Fix seeded output. (#1949)          2024-05-24 15:36:13 +02:00
pyproject.toml           Fix seeded output. (#1949)          2024-05-24 15:36:13 +02:00
requirements_cuda.txt    Fix seeded output. (#1949)          2024-05-24 15:36:13 +02:00
requirements_rocm.txt    Fix seeded output. (#1949)          2024-05-24 15:36:13 +02:00

README.md

Text Generation Inference Python gRPC Server

A Python gRPC server for Text Generation Inference

Install

    make install

Run

    make run-dev