hf_text-generation-inference/server/text_generation_server/models/custom_modeling
fxmarty 362883f259
fix(server): llama v2 GPTQ (#648)
As per the title, and as reported in:
https://github.com/huggingface/text-generation-inference/issues/601#issuecomment-1641435956
https://huggingface.co/TheBloke/Llama-2-70B-chat-GPTQ/discussions/5

Test it:

```
GPTQ_BITS=4 GPTQ_GROUPSIZE=1 text-generation-launcher --model-id TheBloke/Llama-2-70B-chat-GPTQ --port 8080 --num-shard 4 --quantize gptq
```
and then:
```
curl 127.0.0.1:8080/generate \
    -X POST \
    -d '{"inputs":"hey llama","parameters":{"max_new_tokens":256}}' \
    -H 'Content-Type: application/json'
```
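
For scripted checks, the same request can be sent from Python. This is a minimal sketch, assuming only the endpoint and payload from the curl example above plus the third-party `requests` library; it is not part of this repo's test suite:

```python
# Smoke test for the /generate endpoint shown above.
# Assumes text-generation-launcher is already serving on 127.0.0.1:8080;
# the prompt and parameters mirror the curl example.
import requests

response = requests.post(
    "http://127.0.0.1:8080/generate",
    json={
        "inputs": "hey llama",
        "parameters": {"max_new_tokens": 256},
    },
    headers={"Content-Type": "application/json"},
    timeout=300,  # a sharded 70B GPTQ model can be slow on the first request
)
response.raise_for_status()
print(response.json()["generated_text"])
```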
2023-07-20 15:02:54 +02:00
| File | Last commit | Date |
|------|-------------|------|
| __init__.py | feat(server): flash santacoder (#153) | 2023-04-03 19:06:42 +02:00 |
| bloom_modeling.py | feat: better errors for warmup and TP (#575) | 2023-07-10 14:47:15 +02:00 |
| flash_llama_modeling.py | fix(server): llama v2 GPTQ (#648) | 2023-07-20 15:02:54 +02:00 |
| flash_neox_modeling.py | feat(server): flash attention v2 (#624) | 2023-07-18 16:21:18 +02:00 |
| flash_rw_modeling.py | feat(server): flash attention v2 (#624) | 2023-07-18 16:21:18 +02:00 |
| flash_santacoder_modeling.py | feat(server): flash attention v2 (#624) | 2023-07-18 16:21:18 +02:00 |
| mpt_modeling.py | feat: better errors for warmup and TP (#575) | 2023-07-10 14:47:15 +02:00 |
| neox_modeling.py | feat: better errors for warmup and TP (#575) | 2023-07-10 14:47:15 +02:00 |
| opt_modeling.py | feat: better errors for warmup and TP (#575) | 2023-07-10 14:47:15 +02:00 |
| t5_modeling.py | fix(server): Adding logger import to t5_modeling.py (#585) | 2023-07-12 10:40:32 +02:00 |