hf_text-generation-inference/server/text_generation_server/layers/gptq
Latest commit bab02ff2bc by drbh (2024-07-26 10:29:09 -04:00)

feat: add ruff and resolve issue (#2262)
* feat: add ruff and resolve issue

* fix: update client exports and adjust after rebase

* fix: adjust syntax to avoid circular import

* fix: adjust client ruff settings

* fix: lint and refactor import check and avoid model enum as global names

* fix: improve fbgemm_gpu check and lints

* fix: update lints

* fix: prefer comparing model enum over str

* fix: adjust lints and ignore specific rules

* fix: avoid unneeded quantize check
| File | Last commit | Date |
| --- | --- | --- |
| __init__.py | feat: add ruff and resolve issue (#2262) | 2024-07-26 10:29:09 -04:00 |
| custom_autotune.py | Some small fixes for the Torch 2.4.0 update (#2304) | 2024-07-25 13:34:44 +02:00 |
| exllama.py | Fix GPTQWeight import (#2020) | 2024-06-05 14:49:15 +02:00 |
| exllamav2.py | feat(fp8): use fbgemm kernels and load fp8 weights directly (#2248) | 2024-07-20 19:02:04 +02:00 |
| quant_linear.py | feat: add ruff and resolve issue (#2262) | 2024-07-26 10:29:09 -04:00 |
| quantize.py | feat: add ruff and resolve issue (#2262) | 2024-07-26 10:29:09 -04:00 |
| utils.py | feat: add ruff and resolve issue (#2262) | 2024-07-26 10:29:09 -04:00 |