Add initial support for compressed-tensors checkpoints (#2732)

compressed-tensors is a safetensors extension for sparse, quantized
tensors. The format is more powerful than the earlier AWQ/GPTQ/FP8
quantization formats, because:

- Different quantizer configurations can be used for different targets.
- The format can specify input/output quantizers in addition to weight
  quantizers.
- Exclusions from quantization are configurable (see the example
  configuration sketched below).
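
To make those points concrete, here is a hand-written sketch of the
`quantization_config` section such a checkpoint might carry in its
`config.json`. Field names follow the compressed-tensors schema; the
concrete values are invented for illustration and written as a Python
literal:

```python
# Illustrative compressed-tensors `quantization_config` (assumed values,
# not copied from a real checkpoint).
quantization_config = {
    "quant_method": "compressed-tensors",
    "config_groups": {
        # Different quantizer configurations can be used for different
        # targets; this group applies to all Linear layers.
        "group_0": {
            "targets": ["Linear"],
            # Weight quantizer: 8-bit float (FP8), one scale per tensor.
            "weights": {"num_bits": 8, "type": "float", "strategy": "tensor"},
            # Input/output quantizers can be given in addition to weight
            # quantizers; here inputs are quantized too (W8A8 FP).
            "input_activations": {"num_bits": 8, "type": "float"},
            "output_activations": None,
        },
    },
    # Configurable exclusions: modules listed here stay unquantized.
    "ignore": ["lm_head"],
}
```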

This change adds a dependency on the `compressed-tensors` package for
its configuration parsing and layer matching functionality.
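
As a rough sketch of how that functionality is used (hypothetical glue
code, not the loader from this PR; it assumes `QuantizationConfig`, the
package's pydantic model for the schema above, is importable from the
package's top level):

```python
import json

from compressed_tensors import QuantizationConfig

# Parse the `quantization_config` section of a checkpoint's config.json
# into the model that compressed-tensors provides.
with open("config.json") as f:
    hf_config = json.load(f)
quant_config = QuantizationConfig.model_validate(hf_config["quantization_config"])

# Each config group maps target layer types to quantizer settings;
# `ignore` lists modules that must be loaded unquantized.
for name, group in quant_config.config_groups.items():
    print(name, group.targets, group.weights, group.input_activations)
print("unquantized:", quant_config.ignore)
```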

The following types of quantization are supported in this PR:

- W8A16 and W4A16 INT using GPTQ-Marlin kernels.
- W8A8 and W8A16 FP using FP8-Marlin and cutlass kernels.

Support for other quantization types will be added in subsequent PRs.
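
To make the kernel mapping above concrete, the dispatch boils down to
something like the sketch below, where `select_kernel` is a made-up
helper over one parsed config group (the real loader in this PR handles
more cases and performs more checks):

```python
def select_kernel(weights, input_activations):
    """Pick a matmul kernel for one config group (illustrative only)."""
    if weights.type == "int" and input_activations is None:
        # Weight-only INT quantization: W4A16 and W8A16 use GPTQ-Marlin.
        return "gptq-marlin"
    if weights.type == "float" and weights.num_bits == 8:
        if input_activations is not None:
            # W8A8 FP: weights and activations in FP8, cutlass kernels.
            return "cutlass-fp8"
        # W8A16 FP: weight-only FP8 via FP8-Marlin.
        return "fp8-marlin"
    raise NotImplementedError("quantization scheme not supported yet")
```
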
Author: Daniël de Kok, 2024-11-10 13:54:07 +01:00 (committed by GitHub)
Commit: a785000842 (parent: 97f7a22f0b)
28 changed files with 2052 additions and 78 deletions

@@ -247,7 +247,7 @@ COPY server/Makefile server/Makefile
 RUN cd server && \
     make gen-server && \
     pip install -r requirements_cuda.txt && \
-    pip install ".[bnb, accelerate, marlin, moe, quantize, peft, outlines]" --no-cache-dir && \
+    pip install ".[bnb, accelerate, compressed-tensors, marlin, moe, quantize, peft, outlines]" --no-cache-dir && \
     pip install nvidia-nccl-cu12==2.22.3

 ENV LD_PRELOAD=/opt/conda/lib/python3.11/site-packages/nvidia/nccl/lib/libnccl.so.2

@@ -296,7 +296,7 @@ COPY server/Makefile server/Makefile
 RUN cd server && \
     make gen-server && \
     pip install -r requirements_rocm.txt && \
-    pip install ".[accelerate, peft, outlines]" --no-cache-dir
+    pip install ".[accelerate, compressed-tensors, peft, outlines]" --no-cache-dir

 # Install benchmarker
 COPY --from=builder /usr/src/target/release-opt/text-generation-benchmark /usr/local/bin/text-generation-benchmark

@@ -102,7 +102,7 @@ COPY server/Makefile server/Makefile
 RUN cd server && \
     make gen-server && \
     pip install -r requirements_intel.txt && \
-    pip install ".[accelerate, peft, outlines]" --no-cache-dir
+    pip install ".[accelerate, compressed-tensors, peft, outlines]" --no-cache-dir

 ENV CCL_ROOT=/opt/intel/oneapi/ccl/latest
 ENV I_MPI_ROOT=/opt/intel/oneapi/mpi/latest

@@ -62,15 +62,16 @@ Options:
   [env: QUANTIZE=]

   Possible values:
-  - awq: 4 bit quantization. Requires a specific AWQ quantized model: <https://hf.co/models?search=awq>. Should replace GPTQ models wherever possible because of the better latency
-  - eetq: 8 bit quantization, doesn't require a specific model. Should be a drop-in replacement to bitsandbytes with much better performance. Kernels are from <https://github.com/NetEase-FuXi/EETQ.git>
-  - exl2: Variable bit quantization. Requires a specific EXL2 quantized model: <https://hf.co/models?search=exl2>. Requires exllama2 kernels and does not support tensor parallelism (num_shard > 1)
-  - gptq: 4 bit quantization. Requires a specific GPTQ quantized model: <https://hf.co/models?search=gptq>. text-generation-inference will use exllama (faster) kernels wherever possible, and use the triton kernel (wider support) when it's not. AWQ has faster kernels
-  - marlin: 4 bit quantization. Requires a specific Marlin quantized model: <https://hf.co/models?search=marlin>
-  - bitsandbytes: Bitsandbytes 8bit. Can be applied on any model, will cut the memory requirement in half, but it is known that the model will be much slower to run than the native f16
-  - bitsandbytes-nf4: Bitsandbytes 4bit. Can be applied on any model, will cut the memory requirement by 4x, but it is known that the model will be much slower to run than the native f16
-  - bitsandbytes-fp4: Bitsandbytes 4bit. nf4 should be preferred in most cases but maybe this one has better perplexity performance for your model
-  - fp8: [FP8](https://developer.nvidia.com/blog/nvidia-arm-and-intel-publish-fp8-specification-for-standardization-as-an-interchange-format-for-ai/) (e4m3) works on H100 and above. This dtype has native ops and should be the fastest if available. This is currently not the fastest because of local unpacking + padding to satisfy matrix multiplication limitations
+  - awq: 4 bit quantization. Requires a specific AWQ quantized model: <https://hf.co/models?search=awq>. Should replace GPTQ models wherever possible because of the better latency
+  - compressed-tensors: Compressed tensors, which can be a mixture of different quantization methods
+  - eetq: 8 bit quantization, doesn't require a specific model. Should be a drop-in replacement to bitsandbytes with much better performance. Kernels are from <https://github.com/NetEase-FuXi/EETQ.git>
+  - exl2: Variable bit quantization. Requires a specific EXL2 quantized model: <https://hf.co/models?search=exl2>. Requires exllama2 kernels and does not support tensor parallelism (num_shard > 1)
+  - gptq: 4 bit quantization. Requires a specific GPTQ quantized model: <https://hf.co/models?search=gptq>. text-generation-inference will use exllama (faster) kernels wherever possible, and use the triton kernel (wider support) when it's not. AWQ has faster kernels
+  - marlin: 4 bit quantization. Requires a specific Marlin quantized model: <https://hf.co/models?search=marlin>
+  - bitsandbytes: Bitsandbytes 8bit. Can be applied on any model, will cut the memory requirement in half, but it is known that the model will be much slower to run than the native f16
+  - bitsandbytes-nf4: Bitsandbytes 4bit. Can be applied on any model, will cut the memory requirement by 4x, but it is known that the model will be much slower to run than the native f16
+  - bitsandbytes-fp4: Bitsandbytes 4bit. nf4 should be preferred in most cases but maybe this one has better perplexity performance for your model
+  - fp8: [FP8](https://developer.nvidia.com/blog/nvidia-arm-and-intel-publish-fp8-specification-for-standardization-as-an-interchange-format-for-ai/) (e4m3) works on H100 and above. This dtype has native ops and should be the fastest if available. This is currently not the fastest because of local unpacking + padding to satisfy matrix multiplication limitations
 ```
 ## SPECULATE

@@ -978,15 +978,16 @@
       "nixpkgs": "nixpkgs_6"
     },
     "locked": {
-      "lastModified": 1730724647,
-      "narHash": "sha256-SVv+50CGaCoU4zZwsg6ZAaOi/D5QJBL1P2SIB+3CEf4=",
+      "lastModified": 1730795478,
+      "narHash": "sha256-xpkXDKnkhXO4F6Ea3reHmqwXXRzQe2PsxdRQFPCViWs=",
       "owner": "huggingface",
       "repo": "text-generation-inference-nix",
-      "rev": "1512898a1e5ad9eff025205fa9c4d33a44506cf3",
+      "rev": "b7f6c07867d94d6e55f5352573a6b3dad1c88e56",
       "type": "github"
     },
     "original": {
       "owner": "huggingface",
+      "ref": "compressed-tensors-0.7.1",
       "repo": "text-generation-inference-nix",
       "type": "github"
     }

@@ -5,7 +5,7 @@
       inputs.nixpkgs.follows = "tgi-nix/nixpkgs";
     };
     nix-filter.url = "github:numtide/nix-filter";
-    tgi-nix.url = "github:huggingface/text-generation-inference-nix";
+    tgi-nix.url = "github:huggingface/text-generation-inference-nix/compressed-tensors-0.7.1";
     nixpkgs.follows = "tgi-nix/nixpkgs";
     flake-utils.url = "github:numtide/flake-utils";
     rust-overlay = {

@@ -0,0 +1,104 @@
{
"details": {
"best_of_sequences": null,
"finish_reason": "length",
"generated_tokens": 10,
"prefill": [
{
"id": 128000,
"logprob": null,
"text": "<|begin_of_text|>"
},
{
"id": 3923,
"logprob": -7.609375,
"text": "What"
},
{
"id": 374,
"logprob": -0.92529297,
"text": " is"
},
{
"id": 5655,
"logprob": -10.0,
"text": " deep"
},
{
"id": 6975,
"logprob": -0.94628906,
"text": " learning"
},
{
"id": 30,
"logprob": -2.9042969,
"text": "?"
}
],
"seed": null,
"tokens": [
{
"id": 18682,
"logprob": -0.8769531,
"special": false,
"text": " Deep"
},
{
"id": 6975,
"logprob": -0.0076942444,
"special": false,
"text": " learning"
},
{
"id": 374,
"logprob": -0.25073242,
"special": false,
"text": " is"
},
{
"id": 264,
"logprob": -0.097595215,
"special": false,
"text": " a"
},
{
"id": 955,
"logprob": -0.921875,
"special": false,
"text": " type"
},
{
"id": 315,
"logprob": -0.00027918816,
"special": false,
"text": " of"
},
{
"id": 21075,
"logprob": -0.5527344,
"special": false,
"text": " artificial"
},
{
"id": 11478,
"logprob": -0.042541504,
"special": false,
"text": " intelligence"
},
{
"id": 320,
"logprob": -0.38891602,
"special": false,
"text": " ("
},
{
"id": 15836,
"logprob": -0.0011043549,
"special": false,
"text": "AI"
}
],
"top_tokens": null
},
"generated_text": " Deep learning is a type of artificial intelligence (AI"
}

@@ -0,0 +1,99 @@
{
"details": {
"best_of_sequences": null,
"finish_reason": "length",
"generated_tokens": 10,
"prefill": [
{
"id": 128000,
"logprob": null,
"text": "<|begin_of_text|>"
},
{
"id": 3923,
"logprob": -7.609375,
"text": "What"
},
{
"id": 374,
"logprob": -0.92529297,
"text": " is"
},
{
"id": 5655,
"logprob": -10.0,
"text": " deep"
},
{
"id": 6975,
"logprob": -0.94628906,
"text": " learning"
}
],
"seed": 0,
"tokens": [
{
"id": 5380,
"logprob": -0.23840332,
"special": false,
"text": "?\n"
},
{
"id": 34564,
"logprob": 0.0,
"special": false,
"text": "Deep"
},
{
"id": 6975,
"logprob": 0.0,
"special": false,
"text": " learning"
},
{
"id": 11,
"logprob": 0.0,
"special": false,
"text": ","
},
{
"id": 1101,
"logprob": -1.2011719,
"special": false,
"text": " also"
},
{
"id": 3967,
"logprob": 0.0,
"special": false,
"text": " known"
},
{
"id": 439,
"logprob": 0.0,
"special": false,
"text": " as"
},
{
"id": 30828,
"logprob": 0.0,
"special": false,
"text": " neural"
},
{
"id": 4009,
"logprob": -0.6777344,
"special": false,
"text": " network"
},
{
"id": 477,
"logprob": 0.0,
"special": false,
"text": " or"
}
],
"top_tokens": null
},
"generated_text": "What is deep learning?\nDeep learning, also known as neural network or"
}

@@ -0,0 +1,418 @@
[
{
"details": {
"best_of_sequences": null,
"finish_reason": "length",
"generated_tokens": 10,
"prefill": [
{
"id": 128000,
"logprob": null,
"text": "<|begin_of_text|>"
},
{
"id": 3923,
"logprob": -7.609375,
"text": "What"
},
{
"id": 374,
"logprob": -0.92529297,
"text": " is"
},
{
"id": 5655,
"logprob": -10.0,
"text": " deep"
},
{
"id": 6975,
"logprob": -0.94628906,
"text": " learning"
},
{
"id": 30,
"logprob": -2.9042969,
"text": "?"
}
],
"seed": null,
"tokens": [
{
"id": 18682,
"logprob": -0.8769531,
"special": false,
"text": " Deep"
},
{
"id": 6975,
"logprob": -0.0076942444,
"special": false,
"text": " learning"
},
{
"id": 374,
"logprob": -0.25146484,
"special": false,
"text": " is"
},
{
"id": 264,
"logprob": -0.097595215,
"special": false,
"text": " a"
},
{
"id": 955,
"logprob": -0.9248047,
"special": false,
"text": " type"
},
{
"id": 315,
"logprob": -0.00027513504,
"special": false,
"text": " of"
},
{
"id": 21075,
"logprob": -0.5527344,
"special": false,
"text": " artificial"
},
{
"id": 11478,
"logprob": -0.043151855,
"special": false,
"text": " intelligence"
},
{
"id": 320,
"logprob": -0.3840332,
"special": false,
"text": " ("
},
{
"id": 15836,
"logprob": -0.0011043549,
"special": false,
"text": "AI"
}
],
"top_tokens": null
},
"generated_text": " Deep learning is a type of artificial intelligence (AI"
},
{
"details": {
"best_of_sequences": null,
"finish_reason": "length",
"generated_tokens": 10,
"prefill": [
{
"id": 128000,
"logprob": null,
"text": "<|begin_of_text|>"
},
{
"id": 3923,
"logprob": -7.6054688,
"text": "What"
},
{
"id": 374,
"logprob": -0.92089844,
"text": " is"
},
{
"id": 5655,
"logprob": -10.0,
"text": " deep"
},
{
"id": 6975,
"logprob": -0.94433594,
"text": " learning"
},
{
"id": 30,
"logprob": -2.90625,
"text": "?"
}
],
"seed": null,
"tokens": [
{
"id": 18682,
"logprob": -0.875,
"special": false,
"text": " Deep"
},
{
"id": 6975,
"logprob": -0.007698059,
"special": false,
"text": " learning"
},
{
"id": 374,
"logprob": -0.25268555,
"special": false,
"text": " is"
},
{
"id": 264,
"logprob": -0.09753418,
"special": false,
"text": " a"
},
{
"id": 955,
"logprob": -0.92529297,
"special": false,
"text": " type"
},
{
"id": 315,
"logprob": -0.00027942657,
"special": false,
"text": " of"
},
{
"id": 21075,
"logprob": -0.5527344,
"special": false,
"text": " artificial"
},
{
"id": 11478,
"logprob": -0.042541504,
"special": false,
"text": " intelligence"
},
{
"id": 320,
"logprob": -0.3840332,
"special": false,
"text": " ("
},
{
"id": 15836,
"logprob": -0.0011053085,
"special": false,
"text": "AI"
}
],
"top_tokens": null
},
"generated_text": " Deep learning is a type of artificial intelligence (AI"
},
{
"details": {
"best_of_sequences": null,
"finish_reason": "length",
"generated_tokens": 10,
"prefill": [
{
"id": 128000,
"logprob": null,
"text": "<|begin_of_text|>"
},
{
"id": 3923,
"logprob": -7.6054688,
"text": "What"
},
{
"id": 374,
"logprob": -0.92089844,
"text": " is"
},
{
"id": 5655,
"logprob": -10.0,
"text": " deep"
},
{
"id": 6975,
"logprob": -0.94433594,
"text": " learning"
},
{
"id": 30,
"logprob": -2.90625,
"text": "?"
}
],
"seed": null,
"tokens": [
{
"id": 18682,
"logprob": -0.875,
"special": false,
"text": " Deep"
},
{
"id": 6975,
"logprob": -0.007698059,
"special": false,
"text": " learning"
},
{
"id": 374,
"logprob": -0.25268555,
"special": false,
"text": " is"
},
{
"id": 264,
"logprob": -0.09753418,
"special": false,
"text": " a"
},
{
"id": 955,
"logprob": -0.92529297,
"special": false,
"text": " type"
},
{
"id": 315,
"logprob": -0.00027942657,
"special": false,
"text": " of"
},
{
"id": 21075,
"logprob": -0.5527344,
"special": false,
"text": " artificial"
},
{
"id": 11478,
"logprob": -0.042541504,
"special": false,
"text": " intelligence"
},
{
"id": 320,
"logprob": -0.3840332,
"special": false,
"text": " ("
},
{
"id": 15836,
"logprob": -0.0011053085,
"special": false,
"text": "AI"
}
],
"top_tokens": null
},
"generated_text": " Deep learning is a type of artificial intelligence (AI"
},
{
"details": {
"best_of_sequences": null,
"finish_reason": "length",
"generated_tokens": 10,
"prefill": [
{
"id": 128000,
"logprob": null,
"text": "<|begin_of_text|>"
},
{
"id": 3923,
"logprob": -7.6054688,
"text": "What"
},
{
"id": 374,
"logprob": -0.92089844,
"text": " is"
},
{
"id": 5655,
"logprob": -10.0,
"text": " deep"
},
{
"id": 6975,
"logprob": -0.94433594,
"text": " learning"
},
{
"id": 30,
"logprob": -2.90625,
"text": "?"
}
],
"seed": null,
"tokens": [
{
"id": 18682,
"logprob": -0.875,
"special": false,
"text": " Deep"
},
{
"id": 6975,
"logprob": -0.007698059,
"special": false,
"text": " learning"
},
{
"id": 374,
"logprob": -0.25268555,
"special": false,
"text": " is"
},
{
"id": 264,
"logprob": -0.09753418,
"special": false,
"text": " a"
},
{
"id": 955,
"logprob": -0.92529297,
"special": false,
"text": " type"
},
{
"id": 315,
"logprob": -0.00027942657,
"special": false,
"text": " of"
},
{
"id": 21075,
"logprob": -0.5527344,
"special": false,
"text": " artificial"
},
{
"id": 11478,
"logprob": -0.042541504,
"special": false,
"text": " intelligence"
},
{
"id": 320,
"logprob": -0.3840332,
"special": false,
"text": " ("
},
{
"id": 15836,
"logprob": -0.0011053085,
"special": false,
"text": "AI"
}
],
"top_tokens": null
},
"generated_text": " Deep learning is a type of artificial intelligence (AI"
}
]

@@ -0,0 +1,104 @@
{
"details": {
"best_of_sequences": null,
"finish_reason": "length",
"generated_tokens": 10,
"prefill": [
{
"id": 2,
"logprob": null,
"text": "<bos>"
},
{
"id": 1841,
"logprob": -5.46875,
"text": "What"
},
{
"id": 603,
"logprob": -0.69140625,
"text": " is"
},
{
"id": 5271,
"logprob": -12.0,
"text": " deep"
},
{
"id": 6044,
"logprob": -0.32226562,
"text": " learning"
},
{
"id": 235336,
"logprob": -0.33203125,
"text": "?"
}
],
"seed": null,
"tokens": [
{
"id": 109,
"logprob": -0.24707031,
"special": false,
"text": "\n\n"
},
{
"id": 26843,
"logprob": -0.14550781,
"special": false,
"text": "Deep"
},
{
"id": 6044,
"logprob": -0.038330078,
"special": false,
"text": " learning"
},
{
"id": 603,
"logprob": -0.029907227,
"special": false,
"text": " is"
},
{
"id": 476,
"logprob": -0.020996094,
"special": false,
"text": " a"
},
{
"id": 38397,
"logprob": -0.828125,
"special": false,
"text": " subset"
},
{
"id": 576,
"logprob": -0.00049209595,
"special": false,
"text": " of"
},
{
"id": 6479,
"logprob": -0.057373047,
"special": false,
"text": " machine"
},
{
"id": 6044,
"logprob": -0.000207901,
"special": false,
"text": " learning"
},
{
"id": 674,
"logprob": -0.15429688,
"special": false,
"text": " that"
}
],
"top_tokens": null
},
"generated_text": "\n\nDeep learning is a subset of machine learning that"
}

@@ -0,0 +1,99 @@
{
"details": {
"best_of_sequences": null,
"finish_reason": "length",
"generated_tokens": 10,
"prefill": [
{
"id": 2,
"logprob": null,
"text": "<bos>"
},
{
"id": 1841,
"logprob": -5.46875,
"text": "What"
},
{
"id": 603,
"logprob": -0.69140625,
"text": " is"
},
{
"id": 5271,
"logprob": -12.0,
"text": " deep"
},
{
"id": 6044,
"logprob": -0.32226562,
"text": " learning"
}
],
"seed": 0,
"tokens": [
{
"id": 235336,
"logprob": 0.0,
"special": false,
"text": "?"
},
{
"id": 109,
"logprob": 0.0,
"special": false,
"text": "\n\n"
},
{
"id": 26843,
"logprob": 0.0,
"special": false,
"text": "Deep"
},
{
"id": 14715,
"logprob": -0.38671875,
"special": false,
"text": " Learning"
},
{
"id": 603,
"logprob": 0.0,
"special": false,
"text": " is"
},
{
"id": 476,
"logprob": 0.0,
"special": false,
"text": " a"
},
{
"id": 38397,
"logprob": -0.12695312,
"special": false,
"text": " subset"
},
{
"id": 576,
"logprob": 0.0,
"special": false,
"text": " of"
},
{
"id": 6479,
"logprob": 0.0,
"special": false,
"text": " machine"
},
{
"id": 6044,
"logprob": 0.0,
"special": false,
"text": " learning"
}
],
"top_tokens": null
},
"generated_text": "What is deep learning?\n\nDeep Learning is a subset of machine learning"
}

@@ -0,0 +1,418 @@
[
{
"details": {
"best_of_sequences": null,
"finish_reason": "length",
"generated_tokens": 10,
"prefill": [
{
"id": 2,
"logprob": null,
"text": "<bos>"
},
{
"id": 1841,
"logprob": -5.46875,
"text": "What"
},
{
"id": 603,
"logprob": -0.69140625,
"text": " is"
},
{
"id": 5271,
"logprob": -12.0,
"text": " deep"
},
{
"id": 6044,
"logprob": -0.32226562,
"text": " learning"
},
{
"id": 235336,
"logprob": -0.33203125,
"text": "?"
}
],
"seed": null,
"tokens": [
{
"id": 109,
"logprob": -0.24707031,
"special": false,
"text": "\n\n"
},
{
"id": 26843,
"logprob": -0.14550781,
"special": false,
"text": "Deep"
},
{
"id": 6044,
"logprob": -0.03857422,
"special": false,
"text": " learning"
},
{
"id": 603,
"logprob": -0.030883789,
"special": false,
"text": " is"
},
{
"id": 476,
"logprob": -0.020996094,
"special": false,
"text": " a"
},
{
"id": 38397,
"logprob": -0.828125,
"special": false,
"text": " subset"
},
{
"id": 576,
"logprob": -0.00051498413,
"special": false,
"text": " of"
},
{
"id": 6479,
"logprob": -0.05883789,
"special": false,
"text": " machine"
},
{
"id": 6044,
"logprob": -0.00020694733,
"special": false,
"text": " learning"
},
{
"id": 674,
"logprob": -0.15820312,
"special": false,
"text": " that"
}
],
"top_tokens": null
},
"generated_text": "\n\nDeep learning is a subset of machine learning that"
},
{
"details": {
"best_of_sequences": null,
"finish_reason": "length",
"generated_tokens": 10,
"prefill": [
{
"id": 2,
"logprob": null,
"text": "<bos>"
},
{
"id": 1841,
"logprob": -5.46875,
"text": "What"
},
{
"id": 603,
"logprob": -0.71484375,
"text": " is"
},
{
"id": 5271,
"logprob": -12.0,
"text": " deep"
},
{
"id": 6044,
"logprob": -0.30859375,
"text": " learning"
},
{
"id": 235336,
"logprob": -0.3359375,
"text": "?"
}
],
"seed": null,
"tokens": [
{
"id": 109,
"logprob": -0.23828125,
"special": false,
"text": "\n\n"
},
{
"id": 26843,
"logprob": -0.14550781,
"special": false,
"text": "Deep"
},
{
"id": 6044,
"logprob": -0.038330078,
"special": false,
"text": " learning"
},
{
"id": 603,
"logprob": -0.030883789,
"special": false,
"text": " is"
},
{
"id": 476,
"logprob": -0.020996094,
"special": false,
"text": " a"
},
{
"id": 38397,
"logprob": -0.80859375,
"special": false,
"text": " subset"
},
{
"id": 576,
"logprob": -0.0005455017,
"special": false,
"text": " of"
},
{
"id": 6479,
"logprob": -0.05908203,
"special": false,
"text": " machine"
},
{
"id": 6044,
"logprob": -0.00020599365,
"special": false,
"text": " learning"
},
{
"id": 674,
"logprob": -0.17285156,
"special": false,
"text": " that"
}
],
"top_tokens": null
},
"generated_text": "\n\nDeep learning is a subset of machine learning that"
},
{
"details": {
"best_of_sequences": null,
"finish_reason": "length",
"generated_tokens": 10,
"prefill": [
{
"id": 2,
"logprob": null,
"text": "<bos>"
},
{
"id": 1841,
"logprob": -5.46875,
"text": "What"
},
{
"id": 603,
"logprob": -0.71484375,
"text": " is"
},
{
"id": 5271,
"logprob": -12.0,
"text": " deep"
},
{
"id": 6044,
"logprob": -0.30859375,
"text": " learning"
},
{
"id": 235336,
"logprob": -0.3359375,
"text": "?"
}
],
"seed": null,
"tokens": [
{
"id": 109,
"logprob": -0.23828125,
"special": false,
"text": "\n\n"
},
{
"id": 26843,
"logprob": -0.14550781,
"special": false,
"text": "Deep"
},
{
"id": 6044,
"logprob": -0.038330078,
"special": false,
"text": " learning"
},
{
"id": 603,
"logprob": -0.030883789,
"special": false,
"text": " is"
},
{
"id": 476,
"logprob": -0.020996094,
"special": false,
"text": " a"
},
{
"id": 38397,
"logprob": -0.80859375,
"special": false,
"text": " subset"
},
{
"id": 576,
"logprob": -0.0005455017,
"special": false,
"text": " of"
},
{
"id": 6479,
"logprob": -0.05908203,
"special": false,
"text": " machine"
},
{
"id": 6044,
"logprob": -0.00020599365,
"special": false,
"text": " learning"
},
{
"id": 674,
"logprob": -0.17285156,
"special": false,
"text": " that"
}
],
"top_tokens": null
},
"generated_text": "\n\nDeep learning is a subset of machine learning that"
},
{
"details": {
"best_of_sequences": null,
"finish_reason": "length",
"generated_tokens": 10,
"prefill": [
{
"id": 2,
"logprob": null,
"text": "<bos>"
},
{
"id": 1841,
"logprob": -5.46875,
"text": "What"
},
{
"id": 603,
"logprob": -0.71484375,
"text": " is"
},
{
"id": 5271,
"logprob": -12.0,
"text": " deep"
},
{
"id": 6044,
"logprob": -0.30859375,
"text": " learning"
},
{
"id": 235336,
"logprob": -0.3359375,
"text": "?"
}
],
"seed": null,
"tokens": [
{
"id": 109,
"logprob": -0.23828125,
"special": false,
"text": "\n\n"
},
{
"id": 26843,
"logprob": -0.14550781,
"special": false,
"text": "Deep"
},
{
"id": 6044,
"logprob": -0.038330078,
"special": false,
"text": " learning"
},
{
"id": 603,
"logprob": -0.030883789,
"special": false,
"text": " is"
},
{
"id": 476,
"logprob": -0.020996094,
"special": false,
"text": " a"
},
{
"id": 38397,
"logprob": -0.80859375,
"special": false,
"text": " subset"
},
{
"id": 576,
"logprob": -0.0005455017,
"special": false,
"text": " of"
},
{
"id": 6479,
"logprob": -0.05908203,
"special": false,
"text": " machine"
},
{
"id": 6044,
"logprob": -0.00020599365,
"special": false,
"text": " learning"
},
{
"id": 674,
"logprob": -0.17285156,
"special": false,
"text": " that"
}
],
"top_tokens": null
},
"generated_text": "\n\nDeep learning is a subset of machine learning that"
}
]

@@ -0,0 +1,86 @@
import pytest


@pytest.fixture(scope="module")
def compressed_tensors_w8an_handle(launcher):
    with launcher(
        "neuralmagic/Llama-3.2-1B-Instruct-FP8",
        num_shard=2,
        quantize="compressed-tensors",
    ) as handle:
        yield handle


@pytest.fixture(scope="module")
async def compressed_tensors_w8an(compressed_tensors_w8an_handle):
    await compressed_tensors_w8an_handle.health(300)
    return compressed_tensors_w8an_handle.client


@pytest.mark.release
@pytest.mark.asyncio
@pytest.mark.private
async def test_compressed_tensors_w8an(compressed_tensors_w8an, response_snapshot):
    response = await compressed_tensors_w8an.generate(
        "What is deep learning?",
        max_new_tokens=10,
        decoder_input_details=True,
    )

    assert (
        response.generated_text
        == " Deep learning is a type of artificial intelligence (AI"
    )
    assert response.details.generated_tokens == 10
    assert response == response_snapshot


@pytest.mark.asyncio
async def test_compressed_tensors_w8an_all_params(
    compressed_tensors_w8an, response_snapshot
):
    response = await compressed_tensors_w8an.generate(
        "What is deep learning",
        max_new_tokens=10,
        repetition_penalty=1.2,
        return_full_text=True,
        stop_sequences=["test"],
        temperature=0.5,
        top_p=0.9,
        top_k=10,
        truncate=5,
        typical_p=0.9,
        watermark=True,
        decoder_input_details=True,
        seed=0,
    )

    assert response.details.generated_tokens == 10
    assert (
        response.generated_text
        == "What is deep learning?\nDeep learning, also known as neural network or"
    )
    assert response == response_snapshot


@pytest.mark.release
@pytest.mark.asyncio
@pytest.mark.private
async def test_compressed_tensors_w8an_load(
    compressed_tensors_w8an, generate_load, response_snapshot
):
    responses = await generate_load(
        compressed_tensors_w8an,
        "What is deep learning?",
        max_new_tokens=10,
        n=4,
    )

    assert (
        responses[0].generated_text
        == " Deep learning is a type of artificial intelligence (AI"
    )
    assert len(responses) == 4
    assert all([r.generated_text == responses[0].generated_text for r in responses])

    assert responses == response_snapshot

@@ -0,0 +1,86 @@
import pytest


@pytest.fixture(scope="module")
def compressed_tensors_wna16_handle(launcher):
    with launcher(
        "neuralmagic/gemma-2-2b-it-quantized.w4a16",
        num_shard=2,
        quantize="compressed-tensors",
    ) as handle:
        yield handle


@pytest.fixture(scope="module")
async def compressed_tensors_wna16(compressed_tensors_wna16_handle):
    await compressed_tensors_wna16_handle.health(300)
    return compressed_tensors_wna16_handle.client


@pytest.mark.release
@pytest.mark.asyncio
@pytest.mark.private
async def test_compressed_tensors_wna16(compressed_tensors_wna16, response_snapshot):
    response = await compressed_tensors_wna16.generate(
        "What is deep learning?",
        max_new_tokens=10,
        decoder_input_details=True,
    )

    assert (
        response.generated_text
        == "\n\nDeep learning is a subset of machine learning that"
    )
    assert response.details.generated_tokens == 10
    assert response == response_snapshot


@pytest.mark.asyncio
async def test_compressed_tensors_wna16_all_params(
    compressed_tensors_wna16, response_snapshot
):
    response = await compressed_tensors_wna16.generate(
        "What is deep learning",
        max_new_tokens=10,
        repetition_penalty=1.2,
        return_full_text=True,
        stop_sequences=["test"],
        temperature=0.5,
        top_p=0.9,
        top_k=10,
        truncate=5,
        typical_p=0.9,
        watermark=True,
        decoder_input_details=True,
        seed=0,
    )

    assert response.details.generated_tokens == 10
    assert (
        response.generated_text
        == "What is deep learning?\n\nDeep Learning is a subset of machine learning"
    )
    assert response == response_snapshot


@pytest.mark.release
@pytest.mark.asyncio
@pytest.mark.private
async def test_compressed_tensors_wna16_load(
    compressed_tensors_wna16, generate_load, response_snapshot
):
    responses = await generate_load(
        compressed_tensors_wna16,
        "What is deep learning?",
        max_new_tokens=10,
        n=4,
    )

    assert (
        responses[0].generated_text
        == "\n\nDeep learning is a subset of machine learning that"
    )
    assert len(responses) == 4
    assert all([r.generated_text == responses[0].generated_text for r in responses])

    assert responses == response_snapshot

@@ -212,6 +212,8 @@ enum Quantization {
     /// <https://hf.co/models?search=awq>.
     /// Should replace GPTQ models wherever possible because of the better latency
     Awq,
+    /// Compressed tensors, which can be a mixture of different quantization methods.
+    CompressedTensors,
     /// 8 bit quantization, doesn't require specific model.
     /// Should be a drop-in replacement to bitsandbytes with much better performance.
     /// Kernels are from <https://github.com/NetEase-FuXi/EETQ.git>
@@ -274,6 +276,9 @@ impl std::fmt::Display for Quantization {
             Quantization::Awq => {
                 write!(f, "awq")
             }
+            Quantization::CompressedTensors => {
+                write!(f, "compressed-tensors")
+            }
             Quantization::Eetq => {
                 write!(f, "eetq")
             }

@@ -5,6 +5,7 @@
   mypy-protobuf,
   awq-inference-engine,
   causal-conv1d,
+  compressed-tensors,
   eetq,
   einops,
   exllamav2,
@@ -74,6 +75,7 @@ buildPythonPackage {
     awq-inference-engine
     eetq
     causal-conv1d
+    compressed-tensors
     einops
     exllamav2
     flashinfer