huggingface/text-generation-inference/server

fix(server): llama v2 GPTQ (#648)
fxmarty · 362883f259 · 2023-07-20 15:02:54 +02:00
As per title: fixes GPTQ loading for Llama v2, as reported in:

https://github.com/huggingface/text-generation-inference/issues/601#issuecomment-1641435956
https://huggingface.co/TheBloke/Llama-2-70B-chat-GPTQ/discussions/5

Test it:

```
GPTQ_BITS=4 GPTQ_GROUPSIZE=1 text-generation-launcher --model-id TheBloke/Llama-2-70B-chat-GPTQ --port 8080 --num-shard 4 --quantize gptq
```
and then query it:
```
curl 127.0.0.1:8080/generate \
    -X POST \
    -d '{"inputs":"hey llama","parameters":{"max_new_tokens":256}}' \
    -H 'Content-Type: application/json'
```
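For token-by-token output, the same request can be sent to the streaming route; a minimal variant of the call above, assuming the standard TGI `/generate_stream` endpoint (responses arrive as server-sent events):

```
curl 127.0.0.1:8080/generate_stream \
    -X POST \
    -d '{"inputs":"hey llama","parameters":{"max_new_tokens":256}}' \
    -H 'Content-Type: application/json'
```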
| Path | Last commit | Date |
| --- | --- | --- |
| custom_kernels | feat(server): Rework model loading (#344) | 2023-06-08 |
| tests | fix(server): harden the weights choice to save on disk. (#561) | 2023-07-07 |
| text_generation_server | fix(server): llama v2 GPTQ (#648) | 2023-07-20 |
| .gitignore | feat(clients): Python client (#103) | 2023-03-07 |
| Makefile | feat(server): flash attention v2 (#624) | 2023-07-18 |
| Makefile-flash-att | feat(server): use latest flash attention commit (#543) | 2023-07-04 |
| Makefile-flash-att-v2 | feat(server): flash attention v2 (#624) | 2023-07-18 |
| Makefile-vllm | feat(server): add paged attention to flash models (#516) | 2023-06-30 |
| README.md | feat(router): refactor API and add openAPI schemas (#53) | 2023-02-03 |
| poetry.lock | feat(server): use latest flash attention commit (#543) | 2023-07-04 |
| pyproject.toml | v0.9.3 (#634) | 2023-07-18 |
| requirements.txt | feat(server): use latest flash attention commit (#543) | 2023-07-04 |

README.md

Text Generation Inference Python gRPC Server

A Python gRPC server for Text Generation Inference

Install

```
make install
```

Run

```
make run-dev
```
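For debugging, a single shard of this gRPC server can also be started without the Rust launcher, using the `text-generation-server` console script that `make install` provides. A minimal sketch, assuming the `download-weights` and `serve` subcommands from this repo's `text_generation_server/cli.py` (the model id is an arbitrary example; flags vary by version):

```
# Fetch (or reuse cached) weights for the model.
text-generation-server download-weights bigscience/bloom-560m

# Start one unsharded gRPC shard, listening on a local unix socket.
text-generation-server serve bigscience/bloom-560m
```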