# text-generation-inference/server
Latest commit: `3961e32390` by shaltielshmid, 2024-07-23 15:00:07 +02:00

[WIP] Add support for Mistral-Nemo by supporting head_dim through config (#2254)

* Support passing head_dim through config
* Using `head_dim` as a fallback is necessary since it's a non-standard key in MistralConfig (as defined in transformers).
* Shorter diff.

Co-authored-by: Nicolas Patry <patry.nicolas@protonmail.com>
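
The fallback the commit describes reduces to a few lines. Below is a minimal sketch, assuming a transformers-style config object; `get_head_dim` is an illustrative helper name, not the actual TGI code path.

```python
# Minimal sketch of the head_dim fallback described in the commit above.
# Assumes a transformers-style config object; illustrative, not TGI's code.
def get_head_dim(config) -> int:
    # Mistral-Nemo ships an explicit head_dim (128) that differs from
    # hidden_size // num_attention_heads (5120 // 32 == 160), so an
    # explicit config value must take precedence when present.
    head_dim = getattr(config, "head_dim", None)
    if head_dim is not None:
        return head_dim
    # Standard derivation for configs without an explicit head_dim,
    # e.g. older Mistral checkpoints where the key does not exist.
    return config.hidden_size // config.num_attention_heads
```

Because `head_dim` is not a standard `MistralConfig` key, probing with `getattr` keeps existing Mistral checkpoints working unchanged.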
| Name | Last commit | Date |
| --- | --- | --- |
| custom_kernels | chore: add pre-commit (#1569) | 2024-02-16 11:58:58 +01:00 |
| exllama_kernels | MI300 compatibility (#1764) | 2024-05-17 15:30:47 +02:00 |
| exllamav2_kernels | chore: add pre-commit (#1569) | 2024-02-16 11:58:58 +01:00 |
| marlin | Add support for repacking AWQ weights for GPTQ-Marlin (#2278) | 2024-07-23 13:08:20 +02:00 |
| tests | Improve the handling of quantized weights (#2250) | 2024-07-19 09:37:39 +02:00 |
| text_generation_server | [WIP] Add support for Mistral-Nemo by supporting head_dim through config (#2254) | 2024-07-23 15:00:07 +02:00 |
| .gitignore | Impl simple mamba model (#1480) | 2024-02-08 10:19:45 +01:00 |
| Makefile | feat(fp8): use fbgemm kernels and load fp8 weights directly (#2248) | 2024-07-20 19:02:04 +02:00 |
| Makefile-awq | chore: add pre-commit (#1569) | 2024-02-16 11:58:58 +01:00 |
| Makefile-eetq | Upgrade EETQ (Fixes the cuda graphs). (#1729) | 2024-04-12 08:15:28 +02:00 |
| Makefile-fbgemm | feat(fp8): use fbgemm kernels and load fp8 weights directly (#2248) | 2024-07-20 19:02:04 +02:00 |
| Makefile-flash-att | Hotfixing `make install`. (#2008) | 2024-06-04 23:34:03 +02:00 |
| Makefile-flash-att-v2 | Softcapping for gemma2. (#2273) | 2024-07-22 18:27:10 +02:00 |
| Makefile-lorax-punica | Enable multiple LoRa adapters (#2010) | 2024-06-25 14:46:27 -04:00 |
| Makefile-selective-scan | chore: add pre-commit (#1569) | 2024-02-16 11:58:58 +01:00 |
| Makefile-vllm | Add support for Deepseek V2 (#2224) | 2024-07-19 17:23:20 +02:00 |
| README.md | chore: add pre-commit (#1569) | 2024-02-16 11:58:58 +01:00 |
| fbgemm_remove_unused.patch | feat(fp8): use fbgemm kernels and load fp8 weights directly (#2248) | 2024-07-20 19:02:04 +02:00 |
| fix_torch90a.sh | feat(fp8): use fbgemm kernels and load fp8 weights directly (#2248) | 2024-07-20 19:02:04 +02:00 |
| poetry.lock | Softcapping for gemma2. (#2273) | 2024-07-22 18:27:10 +02:00 |
| pyproject.toml | Softcapping for gemma2. (#2273) | 2024-07-22 18:27:10 +02:00 |
| requirements_cuda.txt | Softcapping for gemma2. (#2273) | 2024-07-22 18:27:10 +02:00 |
| requirements_intel.txt | Softcapping for gemma2. (#2273) | 2024-07-22 18:27:10 +02:00 |
| requirements_rocm.txt | Softcapping for gemma2. (#2273) | 2024-07-22 18:27:10 +02:00 |

README.md

## Text Generation Inference Python gRPC Server

A Python gRPC server for Text Generation Inference.

### Install

`make install`

### Run

`make run-dev`
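
Once running, the server speaks gRPC (typically over a unix domain socket) rather than HTTP. The sketch below is a hypothetical smoke test, not part of this repo: the socket path and the generated-stub names (`generate_pb2`, `generate_pb2_grpc`, `TextGenerationServiceStub`, `HealthRequest`) are assumptions and should be verified against the project's `.proto` definitions, with stubs regenerated via `grpcio-tools`.

```python
# Hypothetical smoke test for a running server. The stub module names,
# service/stub class, request message, and socket path are assumptions;
# verify them against the repo's .proto files before use.
import grpc

import generate_pb2        # assumed generated module
import generate_pb2_grpc   # assumed generated module


def check_health(socket_path: str = "/tmp/text-generation-server-0") -> None:
    # gRPC accepts unix-domain-socket targets via the unix:// scheme.
    with grpc.insecure_channel(f"unix://{socket_path}") as channel:
        stub = generate_pb2_grpc.TextGenerationServiceStub(channel)
        response = stub.Health(generate_pb2.HealthRequest())
        print("server responded:", response)


if __name__ == "__main__":
    check_health()
```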