hf_text-generation-inference/server/text_generation_server
fxmarty 362883f259
fix(server): llama v2 GPTQ (#648)
As per title, and as reported in:
https://github.com/huggingface/text-generation-inference/issues/601#issuecomment-1641435956
https://huggingface.co/TheBloke/Llama-2-70B-chat-GPTQ/discussions/5

Test it:

```
GPTQ_BITS=4 GPTQ_GROUPSIZE=1 text-generation-launcher --model-id TheBloke/Llama-2-70B-chat-GPTQ --port 8080 --num-shard 4 --quantize gptq
```
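For context, `GPTQ_BITS` and `GPTQ_GROUPSIZE` override the quantization parameters when the checkpoint does not declare them itself. A minimal Python sketch of that fallback; the helper name and config keys are illustrative assumptions, not the actual `text_generation_server` code:

```python
import os

# Illustrative sketch (assumed helper name and config keys): resolve
# GPTQ parameters from the checkpoint's quantize config if present,
# otherwise fall back to the env-var overrides set on the launcher
# command line above.
def resolve_gptq_params(config: dict) -> tuple[int, int]:
    bits = int(config.get("bits") or os.environ["GPTQ_BITS"])
    groupsize = int(config.get("group_size") or os.environ["GPTQ_GROUPSIZE"])
    return bits, groupsize


# Example: the checkpoint ships no quantize config, so the env vars win.
print(resolve_gptq_params({}))  # (4, 1) with the command above
```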
Then query it:
```
curl 127.0.0.1:8080/generate \
    -X POST \
    -d '{"inputs":"hey llama","parameters":{"max_new_tokens":256}}' \
    -H 'Content-Type: application/json'
```
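Equivalently, the same request can be sent with the repo's Python client (added in #103; see the listing below). A minimal sketch, assuming the server started by the launcher command above is listening on port 8080:

```python
from text_generation import Client

# Point the client at the locally running text-generation-inference server.
client = Client("http://127.0.0.1:8080")

# Same request as the curl call above: prompt "hey llama", up to 256 new tokens.
response = client.generate("hey llama", max_new_tokens=256)
print(response.generated_text)
```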
2023-07-20 15:02:54 +02:00
| Path | Last commit | Date |
|------|-------------|------|
| `models` | fix(server): llama v2 GPTQ (#648) | 2023-07-20 15:02:54 +02:00 |
| `pb` | feat(server): clear cache on error (#143) | 2023-03-28 11:29:35 +02:00 |
| `utils` | Add trust_remote_code to quantize script (#647) | 2023-07-20 13:53:08 +02:00 |
| `__init__.py` | feat(clients): Python client (#103) | 2023-03-07 18:52:22 +01:00 |
| `cache.py` | fix(server): decrease memory fragmentation (#557) | 2023-07-06 14:28:33 +02:00 |
| `cli.py` | feat(server): Reworking the quantization script so it's still universal (not llama specific) (#587) | 2023-07-18 12:19:05 +02:00 |
| `interceptor.py` | feat(server): empty cache on errors | 2023-07-12 17:06:19 +02:00 |
| `server.py` | feat(server): auto max_batch_total_tokens for flash att models (#630) | 2023-07-19 09:31:25 +02:00 |
| `tracing.py` | feat(clients): Python client (#103) | 2023-03-07 18:52:22 +01:00 |