hf_text-generation-inference/server
Latest commit 53ec0b790b by OlivierDehaene, 2024-07-20 19:02:04 +02:00

feat(fp8): use fbgemm kernels and load fp8 weights directly (#2248)

* feat(fp8): add support for fbgemm
* allow loading fp8 weights directly
* update outlines
* fix makefile
* build fbgemm
* avoid circular import and fix dockerfile
* add default dtype
* refactored weights loader
* fix auto conversion
* fix quantization config parsing
* force new nccl on install
* missing get_weights implementation
* increase timeout
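The change pairs rowwise fp8 quantization with fbgemm's fp8 GEMM kernels. Below is a minimal sketch of that path, assuming fbgemm-gpu with its experimental `gen_ai` ops is installed and the GPU supports fp8 (e.g. compute capability >= 8.9); the tensor shapes and the `use_fast_accum` flag are illustrative, not this repo's exact wiring.

```python
# Sketch: rowwise fp8 matmul via fbgemm-gpu's gen_ai ops (illustrative).
import torch
import fbgemm_gpu.experimental.gen_ai  # registers torch.ops.fbgemm.* fp8 ops

# Activations [M, K] and weights [N, K] in bf16 (shapes are arbitrary here).
x = torch.randn(16, 4096, dtype=torch.bfloat16, device="cuda")
w = torch.randn(11008, 4096, dtype=torch.bfloat16, device="cuda")

# Quantize each row to fp8, keeping one scale per row.
xq, x_scale = torch.ops.fbgemm.quantize_fp8_per_row(x)
wq, w_scale = torch.ops.fbgemm.quantize_fp8_per_row(w)

# fp8 x fp8 -> bf16 GEMM with rowwise rescaling.
y = torch.ops.fbgemm.f8f8bf16_rowwise(
    xq, wq, x_scale, w_scale, use_fast_accum=True
)
print(y.shape)  # torch.Size([16, 11008])
```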
| Name | Last commit | Date |
|---|---|---|
| custom_kernels | chore: add pre-commit (#1569) | 2024-02-16 11:58:58 +01:00 |
| exllama_kernels | MI300 compatibility (#1764) | 2024-05-17 15:30:47 +02:00 |
| exllamav2_kernels | chore: add pre-commit (#1569) | 2024-02-16 11:58:58 +01:00 |
| marlin | Add support for FP8 on compute capability >=8.0, <8.9 (#2213) | 2024-07-11 16:03:26 +02:00 |
| tests | Improve the handling of quantized weights (#2250) | 2024-07-19 09:37:39 +02:00 |
| text_generation_server | feat(fp8): use fbgemm kernels and load fp8 weights directly (#2248) | 2024-07-20 19:02:04 +02:00 |
| .gitignore | Impl simple mamba model (#1480) | 2024-02-08 10:19:45 +01:00 |
| Makefile | feat(fp8): use fbgemm kernels and load fp8 weights directly (#2248) | 2024-07-20 19:02:04 +02:00 |
| Makefile-awq | chore: add pre-commit (#1569) | 2024-02-16 11:58:58 +01:00 |
| Makefile-eetq | Upgrade EETQ (Fixes the cuda graphs). (#1729) | 2024-04-12 08:15:28 +02:00 |
| Makefile-fbgemm | feat(fp8): use fbgemm kernels and load fp8 weights directly (#2248) | 2024-07-20 19:02:04 +02:00 |
| Makefile-flash-att | Hotfixing `make install`. (#2008) | 2024-06-04 23:34:03 +02:00 |
| Makefile-flash-att-v2 | Hotfixing `make install`. (#2008) | 2024-06-04 23:34:03 +02:00 |
| Makefile-lorax-punica | Enable multiple LoRa adapters (#2010) | 2024-06-25 14:46:27 -04:00 |
| Makefile-selective-scan | chore: add pre-commit (#1569) | 2024-02-16 11:58:58 +01:00 |
| Makefile-vllm | Add support for Deepseek V2 (#2224) | 2024-07-19 17:23:20 +02:00 |
| README.md | chore: add pre-commit (#1569) | 2024-02-16 11:58:58 +01:00 |
| fbgemm_remove_unused.patch | feat(fp8): use fbgemm kernels and load fp8 weights directly (#2248) | 2024-07-20 19:02:04 +02:00 |
| fix_torch90a.sh | feat(fp8): use fbgemm kernels and load fp8 weights directly (#2248) | 2024-07-20 19:02:04 +02:00 |
| poetry.lock | Making `make install` work better by default. (#2004) | 2024-06-04 19:38:46 +02:00 |
| pyproject.toml | Making `make install` work better by default. (#2004) | 2024-06-04 19:38:46 +02:00 |
| requirements_cuda.txt | Fix seeded output. (#1949) | 2024-05-24 15:36:13 +02:00 |
| requirements_intel.txt | reable xpu, broken by gptq and setuptool upgrade (#1988) | 2024-06-03 16:07:50 +02:00 |
| requirements_rocm.txt | Fix seeded output. (#1949) | 2024-05-24 15:36:13 +02:00 |

README.md

# Text Generation Inference Python gRPC Server

A Python gRPC server for Text Generation Inference

## Install

    make install

## Run

    make run-dev
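The `run-dev` target wraps the package's typer CLI (`text_generation_server/cli.py`, exposed as the `text-generation-server` console script). A hedged, roughly equivalent direct invocation follows; the model id is an illustrative assumption, not the dev target's exact arguments.

```python
# Sketch: drive the server through its console script instead of make.
# The model id below is an illustrative assumption.
import subprocess

# Fetch (and convert, if needed) the model weights before serving.
subprocess.run(
    ["text-generation-server", "download-weights", "bigscience/bloom-560m"],
    check=True,
)

# Start the gRPC server; by default it listens on a unix socket that the
# router process connects to.
subprocess.run(
    ["text-generation-server", "serve", "bigscience/bloom-560m"],
    check=True,
)
```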