hf_text-generation-inference/server/text_generation_server
SeongBeomLEE 66914f7b19
fix: LlamaTokenizerFast to AutoTokenizer at flash_mistral.py (#1637)
# What does this PR do?

There are a few cases where a model uses the Mistral or Mixtral architecture but not a Llama tokenizer, so instead of failing on `LlamaTokenizerFast`, fall back to `AutoTokenizer` in the exception handler.

Similar to PR #619.

@Narsil
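
The fix boils down to wrapping the tokenizer load in a try/except. Here is a minimal sketch of that fallback pattern; the `model_id`, `revision`, and padding/truncation arguments are illustrative assumptions, not copied verbatim from `flash_mistral.py`:

```python
from transformers import AutoTokenizer, LlamaTokenizerFast

# Hypothetical values standing in for the server's configuration.
model_id = "mistralai/Mistral-7B-v0.1"
revision = None
trust_remote_code = False

try:
    # Try the fast Llama tokenizer first, as before.
    tokenizer = LlamaTokenizerFast.from_pretrained(
        model_id,
        revision=revision,
        padding_side="left",
        truncation_side="left",
        trust_remote_code=trust_remote_code,
    )
except Exception:
    # Mistral/Mixtral checkpoints that ship a non-Llama tokenizer land here;
    # AutoTokenizer resolves the correct tokenizer class from the config.
    tokenizer = AutoTokenizer.from_pretrained(
        model_id,
        revision=revision,
        padding_side="left",
        truncation_side="left",
        trust_remote_code=trust_remote_code,
    )
```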
2024-03-22 17:13:13 +01:00
| Name | Last commit | Date |
|------|-------------|------|
| `models` | fix: LlamaTokenizerFast to AutoTokenizer at flash_mistral.py (#1637) | 2024-03-22 17:13:13 +01:00 |
| `pb` | chore: add pre-commit (#1569) | 2024-02-16 11:58:58 +01:00 |
| `utils` | fix: improve tool type, bump pydantic and outlines (#1650) | 2024-03-21 12:45:56 -04:00 |
| `__init__.py` | feat(clients): Python client (#103) | 2023-03-07 18:52:22 +01:00 |
| `cache.py` | fix(server): decrease memory fragmentation (#557) | 2023-07-06 14:28:33 +02:00 |
| `cli.py` | Revamp medusa implementation so that every model can benefit. (#1588) | 2024-02-26 19:49:28 +01:00 |
| `interceptor.py` | feat(server): empty cache on errors | 2023-07-12 17:06:19 +02:00 |
| `server.py` | fix: fix gpt-q with groupsize = -1 (#1358) | 2023-12-18 16:07:05 +01:00 |
| `tracing.py` | feat(clients): Python client (#103) | 2023-03-07 18:52:22 +01:00 |