text-generation-inference/server/text_generation_server
fxmarty 650fea1834
GPTQ support on ROCm (#1489)
Tested with:
```
CUDA_VISIBLE_DEVICES=0 text-generation-launcher --model-id TheBloke/Llama-2-7B-Chat-GPTQ --quantize gptq
EXLLAMA_VERSION=1 CUDA_VISIBLE_DEVICES=0 text-generation-launcher --model-id TheBloke/Llama-2-7B-Chat-GPTQ --quantize gptq
CUDA_VISIBLE_DEVICES="0,1" text-generation-launcher --model-id TheBloke/Llama-2-7B-Chat-GPTQ --quantize gptq
```

All three configurations produce good and identical results on an AMD Instinct MI210.
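
For anyone reproducing this: once one of the launchers above is up, you can sanity-check the quantized model over TGI's standard `/generate` REST route. A minimal sketch in Python; the port assumes the launcher default and the prompt is arbitrary, so adjust both to your setup:

```python
# Minimal smoke test against a running text-generation-launcher instance.
# Assumes the launcher's default port (3000) on localhost; pass a different
# URL if you launched with --port or on another host.
import requests

resp = requests.post(
    "http://127.0.0.1:3000/generate",
    json={
        "inputs": "What is the capital of France?",
        "parameters": {"max_new_tokens": 32},
    },
    timeout=60,
)
resp.raise_for_status()
# TGI's /generate route returns a JSON body with a "generated_text" field.
print(resp.json()["generated_text"])
```

Running the same request against each of the three launch configurations is a quick way to confirm the "identical results" claim above.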

---------

Co-authored-by: Felix Marty <felix@hf.co>
Co-authored-by: OlivierDehaene <olivier@huggingface.co>
Co-authored-by: OlivierDehaene <23298448+OlivierDehaene@users.noreply.github.com>
2024-01-26 16:27:44 +01:00
| Name | Last commit | Date |
|------|-------------|------|
| `models` | Add sealion mpt support (#1477) | 2024-01-26 14:05:02 +01:00 |
| `pb` | feat(server): clear cache on error (#143) | 2023-03-28 11:29:35 +02:00 |
| `utils` | GPTQ support on ROCm (#1489) | 2024-01-26 16:27:44 +01:00 |
| `__init__.py` | feat(clients): Python client (#103) | 2023-03-07 18:52:22 +01:00 |
| `cache.py` | fix(server): decrease memory fragmentation (#557) | 2023-07-06 14:28:33 +02:00 |
| `cli.py` | Fix local load for Medusa (#1420) | 2024-01-10 18:36:20 +01:00 |
| `interceptor.py` | feat(server): empty cache on errors | 2023-07-12 17:06:19 +02:00 |
| `server.py` | fix: fix gpt-q with groupsize = -1 (#1358) | 2023-12-18 16:07:05 +01:00 |
| `tracing.py` | feat(clients): Python client (#103) | 2023-03-07 18:52:22 +01:00 |