text-generation-inference/server
SeongBeomLEE 66914f7b19
fix: LlamaTokenizerFast to AutoTokenizer at flash_mistral.py (#1637)
# What does this PR do?

There are a few cases where a model uses the Mistral or Mixtral architecture but does not ship a Llama tokenizer, so fall back to `AutoTokenizer` in the exception handler instead of assuming `LlamaTokenizerFast`.

Similar PR #619

@Narsil
2024-03-22 17:13:13 +01:00
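For context, a minimal sketch of the fallback the PR describes. The helper name `load_tokenizer` and its argument list are illustrative, not the exact signature in `flash_mistral.py`; the pattern is simply "try the fast Llama tokenizer, fall back to `AutoTokenizer` on failure":

```python
from typing import Optional

from transformers import AutoTokenizer, LlamaTokenizerFast


def load_tokenizer(
    model_id: str,
    revision: Optional[str] = None,
    trust_remote_code: bool = False,
):
    """Prefer the fast Llama tokenizer, falling back to AutoTokenizer for
    Mistral/Mixtral-architecture checkpoints that ship a different tokenizer."""
    try:
        return LlamaTokenizerFast.from_pretrained(
            model_id,
            revision=revision,
            padding_side="left",
            truncation_side="left",
            trust_remote_code=trust_remote_code,
        )
    except Exception:
        # The checkpoint's tokenizer is not Llama-compatible; let
        # AutoTokenizer resolve the correct tokenizer class instead.
        return AutoTokenizer.from_pretrained(
            model_id,
            revision=revision,
            padding_side="left",
            truncation_side="left",
            trust_remote_code=trust_remote_code,
        )
```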
| Name | Last commit | Date |
| --- | --- | --- |
| custom_kernels | chore: add pre-commit (#1569) | 2024-02-16 11:58:58 +01:00 |
| exllama_kernels | chore: add pre-commit (#1569) | 2024-02-16 11:58:58 +01:00 |
| exllamav2_kernels | chore: add pre-commit (#1569) | 2024-02-16 11:58:58 +01:00 |
| tests | feat(server): add frequency penalty (#1541) | 2024-02-08 18:41:25 +01:00 |
| text_generation_server | fix: LlamaTokenizerFast to AutoTokenizer at flash_mistral.py (#1637) | 2024-03-22 17:13:13 +01:00 |
| .gitignore | Impl simple mamba model (#1480) | 2024-02-08 10:19:45 +01:00 |
| Makefile | v1.4.1 (#1568) | 2024-02-16 17:50:57 +01:00 |
| Makefile-awq | chore: add pre-commit (#1569) | 2024-02-16 11:58:58 +01:00 |
| Makefile-eetq | feat: eetq gemv optimization when batch_size <= 4 (#1502) | 2024-01-31 12:05:49 +01:00 |
| Makefile-flash-att | chore: add pre-commit (#1569) | 2024-02-16 11:58:58 +01:00 |
| Makefile-flash-att-v2 | `make install-flash-attn-v2-cuda` should work like `make install-flash-attn-v2` used to work. (#1294) | 2023-11-28 16:28:40 +01:00 |
| Makefile-selective-scan | chore: add pre-commit (#1569) | 2024-02-16 11:58:58 +01:00 |
| Makefile-vllm | Speculative (#1308) | 2023-12-11 12:46:30 +01:00 |
| README.md | chore: add pre-commit (#1569) | 2024-02-16 11:58:58 +01:00 |
| poetry.lock | v1.4.1 (#1568) | 2024-02-16 17:50:57 +01:00 |
| pyproject.toml | fix: improve tool type, bump pydantic and outlines (#1650) | 2024-03-21 12:45:56 -04:00 |
| requirements_common.txt | Add RoCm support (#1243) | 2023-11-27 14:08:12 +01:00 |
| requirements_cuda.txt | v1.4.1 (#1568) | 2024-02-16 17:50:57 +01:00 |
| requirements_rocm.txt | v1.4.1 (#1568) | 2024-02-16 17:50:57 +01:00 |

README.md

# Text Generation Inference Python gRPC Server

A Python gRPC server for Text Generation Inference

## Install

```shell
make install
```

## Run

```shell
make run-dev
```
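For reference, `make install` also registers a `text-generation-server` console script, so the server can be launched directly instead of through the Makefile; the model id below is only an example:

```shell
# Fetch weights for an example model, then start the gRPC server.
text-generation-server download-weights mistralai/Mistral-7B-v0.1
text-generation-server serve mistralai/Mistral-7B-v0.1
```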