Fixing x-compute-time. (#1606)
# What does this PR do?

It was meant to be in seconds (float).

Fixes # (issue)

## Before submitting

- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section?
- [ ] Was this discussed/approved via a GitHub issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?

## Who can review?

Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR.

@OlivierDehaene OR @Narsil
This commit is contained in:
parent 9b6db5f793
commit 910d0a9062
@@ -52,6 +52,8 @@ Text Generation Inference (TGI) is a toolkit for deploying and serving Large Lan
- Logits warper (temperature scaling, top-p, top-k, repetition penalty; for more details see [transformers.LogitsProcessor](https://huggingface.co/docs/transformers/internal/generation_utils#transformers.LogitsProcessor))
- Stop sequences
- Log probabilities
+ - [Speculation](https://huggingface.co/docs/text-generation-inference/conceptual/speculation) ~2x latency
+ - [Guidance/JSON](https://huggingface.co/docs/text-generation-inference/conceptual/guidance). Specify output format to speed up inference and make sure the output is valid according to some specs.
- Custom Prompt Generation: Easily generate text by providing custom prompts to guide the model's output
- Fine-tuning Support: Utilize fine-tuned models for specific tasks to achieve higher accuracy and performance
@@ -39,4 +39,8 @@
    title: Safetensors
  - local: conceptual/flash_attention
    title: Flash Attention
+ - local: conceptual/speculation
+   title: Speculation (Medusa, ngram)
+ - local: conceptual/guidance
+   title: Guidance, JSON, tools (using outlines)
  title: Conceptual Guides
@@ -0,0 +1 @@
## Guidance
@@ -0,0 +1,48 @@
## Speculation
Speculative decoding, assisted generation, Medusa, and others are a few different names for the same idea: generate tokens *before* the large model actually runs, and only *check* whether those tokens were valid.

So you are doing *more* computation on your LLM, but if your guesses are correct you produce 1, 2, 3, etc. tokens in a single LLM pass. Since LLMs are usually memory bound (and not compute bound), provided your guesses are right often enough, this gives roughly 2-3x faster inference (it can be much more for code-oriented tasks, for instance).

You can read a more [detailed explanation](https://huggingface.co/blog/assisted-generation).
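To make the guess-then-check flow concrete, here is a minimal, hypothetical sketch of a single speculative step. The `draft` and `verify` closures are assumptions standing in for the cheap guesser (Medusa heads, an n-gram lookup, a small draft model, ...) and for one forward pass of the large model; this is not TGI's actual implementation.

```rust
// `draft` proposes up to k tokens; `verify` runs the large model once over a sequence
// and returns, for each position i, the token the model would generate right after i.
fn speculative_step(
    mut sequence: Vec<u32>,
    draft: impl Fn(&[u32]) -> Vec<u32>,
    verify: impl Fn(&[u32]) -> Vec<u32>,
) -> Vec<u32> {
    let n = sequence.len(); // assumed non-empty
    let guesses = draft(&sequence);

    // Single large-model pass over the current sequence plus all guesses.
    let mut candidate = sequence.clone();
    candidate.extend_from_slice(&guesses);
    let predicted = verify(&candidate);

    // Accept guesses only as long as they match what the large model itself produces.
    let mut accepted = 0;
    while accepted < guesses.len() && predicted[n - 1 + accepted] == guesses[accepted] {
        accepted += 1;
    }

    // Even if every guess is wrong we still gain one token (the model's own prediction
    // after the last accepted guess), so one pass yields between 1 and k + 1 new tokens.
    let bonus = predicted[n - 1 + accepted];
    sequence.extend_from_slice(&guesses[..accepted]);
    sequence.push(bonus);
    sequence
}
```

The key point is that all guesses are checked in one pass of the large model, and a complete miss still produces one valid token, so quality is unchanged and only throughput varies with guess accuracy.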
Text Generation Inference supports two main speculative methods:

- Medusa
- N-gram
### Medusa

Medusa is a [simple method](https://arxiv.org/abs/2401.10774) to create many tokens in a single pass, using fine-tuned LM heads added on top of your existing model.
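As a rough, hypothetical illustration of that idea (plain linear heads over a single hidden-state vector; not TGI's or the paper's actual implementation), each extra head reads the base model's last hidden state and proposes one token further ahead, and the base model then verifies those proposals in a single pass, as in the sketch earlier on this page:

```rust
// Greedy pick over a logits vector.
fn argmax(logits: &[f32]) -> u32 {
    logits
        .iter()
        .enumerate()
        .max_by(|a, b| a.1.partial_cmp(b.1).unwrap())
        .map(|(i, _)| i as u32)
        .unwrap()
}

// One linear "head": logits[v] = dot(hidden, weights[v]) for every vocabulary entry v.
fn linear_head(hidden: &[f32], weights: &[Vec<f32>]) -> Vec<f32> {
    weights
        .iter()
        .map(|row| row.iter().zip(hidden).map(|(w, h)| w * h).sum())
        .collect()
}

// Head k proposes the token k + 1 positions ahead, all from the same hidden state.
fn medusa_propose(hidden: &[f32], heads: &[Vec<Vec<f32>>]) -> Vec<u32> {
    heads.iter().map(|w| argmax(&linear_head(hidden, w))).collect()
}
```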
You can check a few existing fine-tunes for popular models:

- [text-generation-inference/gemma-7b-it-medusa](https://huggingface.co/text-generation-inference/gemma-7b-it-medusa)
- [text-generation-inference/Mixtral-8x7B-Instruct-v0.1-medusa](https://huggingface.co/text-generation-inference/Mixtral-8x7B-Instruct-v0.1-medusa)
- [text-generation-inference/Mistral-7B-Instruct-v0.2-medusa](https://huggingface.co/text-generation-inference/Mistral-7B-Instruct-v0.2-medusa)
In order to create your own Medusa heads for your own fine-tune, you should check out the original Medusa repo: [https://github.com/FasterDecoding/Medusa](https://github.com/FasterDecoding/Medusa).

In order to use Medusa models in TGI, simply point to a Medusa-enabled model, and everything will load automatically.
### N-gram

If you don't have a Medusa model, or don't have the resources to fine-tune one, you can try `n-gram` speculation. N-gram speculation works by searching the previous sequence for matching tokens and using what followed them as the speculation.

This is an extremely simple method that works best for code or highly repetitive text. It might not be beneficial if the speculation misses too often.
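As a rough sketch of that matching idea, assuming we simply take the most recent earlier occurrence of the last few tokens and propose whatever followed it (TGI's actual matching logic may differ):

```rust
// Propose up to `speculate` tokens by copying what followed the most recent
// earlier occurrence of the last `n` tokens. Illustrative only.
fn ngram_speculate(tokens: &[u32], n: usize, speculate: usize) -> Vec<u32> {
    if tokens.len() <= n {
        return Vec::new();
    }
    let suffix = &tokens[tokens.len() - n..];

    // Scan backwards so the most recent match wins.
    for start in (0..tokens.len() - n).rev() {
        if &tokens[start..start + n] == suffix {
            let follow = start + n;
            let end = (follow + speculate).min(tokens.len());
            return tokens[follow..end].to_vec();
        }
    }
    Vec::new() // no match: nothing to speculate, fall back to normal decoding
}
```

The number of tokens proposed per step is capped by the `speculate` value described next.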
In order to enable n-gram speculation, simply pass `--speculate 2` in your flags.

[Details about the flag](https://huggingface.co/docs/text-generation-inference/basic_tutorials/launcher#speculate)
@@ -242,7 +242,7 @@ async fn generate(
     headers.insert("x-compute-type", compute_type.parse().unwrap());
     headers.insert(
         "x-compute-time",
-        total_time.as_millis().to_string().parse().unwrap(),
+        total_time.as_secs_f64().to_string().parse().unwrap(),
     );
     headers.insert(
         "x-compute-characters",
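For context on why this one-line change matters: `Duration::as_millis` returns an integer number of milliseconds, while `Duration::as_secs_f64` returns seconds as a float, which is what the `x-compute-time` header was meant to carry. A standalone illustration (not part of the router code):

```rust
use std::time::Duration;

fn main() {
    // Hypothetical request duration, standing in for `total_time` in the router.
    let total_time = Duration::from_millis(1234);

    // Old header value: integer milliseconds, e.g. "1234".
    println!("x-compute-time (old) = {}", total_time.as_millis());

    // New header value: seconds as a float, e.g. "1.234".
    println!("x-compute-time (new) = {}", total_time.as_secs_f64());
}
```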