hf_text-generation-inference/server/text_generation_server/utils
Martin Iglesias Goyanes 9192de57cc
Fixing frequency penalty (#1811)
Thank you so much for the work you are doing; this is my little
contribution to this great thing you have built. I hope it is useful and
helpful, and please don't hesitate to discuss any matters that are not
clear!

I am basing my implementation of frequency penalty on OpenAI's
implementation:
https://platform.openai.com/docs/guides/text-generation/parameter-details

The problem I see with TGI's current implementation is that it does not
take into account the frequency of tokens which have already been
sampled in the current generation stream. Also, the scaling of the
adjusted token logits is done differently for positive and negative
logits. In OpenAI's implementation, by contrast, token frequency is taken
into account and the scaling is always done with a subtraction (if the
penalty is positive) or an addition (if the penalty is negative).
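For illustration, here is a minimal sketch of that OpenAI-style scheme in PyTorch. The function name and the use of raw occurrence counts are assumptions for this example, not the exact code from this change:

```python
import torch

def apply_frequency_penalty(
    scores: torch.Tensor,     # (batch, vocab_size) next-token logits
    input_ids: torch.Tensor,  # (batch, seq_len) tokens generated so far
    penalty: float,
) -> torch.Tensor:
    # Hypothetical helper, not the exact TGI implementation.
    batch_size, vocab_size = scores.shape
    # Count how often each vocabulary token has already appeared.
    token_freq = torch.zeros(batch_size, vocab_size, device=scores.device)
    token_freq.scatter_add_(
        1, input_ids, torch.ones_like(input_ids, dtype=token_freq.dtype)
    )
    # Always subtract penalty * frequency: a positive penalty lowers the
    # logit and a negative penalty raises it, regardless of the logit's sign.
    return scores - penalty * token_freq
```

Unseen tokens have a count of zero and are left untouched, so the penalty only shifts the logits of tokens that have actually been generated.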

This leads to corrupt generations, as I mentioned in issue #1810.
Moreover, after my tests, other issues are also gone, like the one where
some requests with ``frequency_penalty = 1.0`` overrule other requests
(with ``frequency_penalty = 0.0``) in the same batch and thereby corrupt
all generations in the batch. Basically, padding does not affect this
implementation, so I believe the ``score *= input_ids.ne(0)`` line is
not needed anymore.



Frequency penalty | -1.0 | 0.0 | 1.0
-- | -- | -- | --
Before my change | https://paste.mozilla.org/JxqGJkWY | https://paste.mozilla.org/hrztJ56h | https://paste.mozilla.org/pBSEH2zw
After my change | https://paste.mozilla.org/7gXCi7zo | https://paste.mozilla.org/ZR9rJ92g | https://paste.mozilla.org/gHaD2YnC

---------

Co-authored-by: martini <martin.iglesiasgoyanes@adyen.com>
2024-04-30 12:13:23 +02:00
File | Last commit | Date
-- | -- | --
awq | ROCm AWQ support (#1514) | 2024-02-09 10:45:16 +01:00
gptq | chore: add pre-commit (#1569) | 2024-02-16 11:58:58 +01:00
__init__.py | feat(server): Add native support for PEFT Lora models (#762) | 2023-08-03 17:22:45 +02:00
convert.py | Force weights_only (before fully breaking pickle files anyway). (#1710) | 2024-04-05 19:23:57 +02:00
dist.py | add intel xpu support for TGI (#1475) | 2024-04-26 15:48:58 +02:00
flash_attn.py | add intel xpu support for TGI (#1475) | 2024-04-26 15:48:58 +02:00
hub.py | Revamp medusa implementation so that every model can benefit. (#1588) | 2024-02-26 19:49:28 +01:00
import_utils.py | Dummy CI run. (#1817) | 2024-04-26 19:19:55 +02:00
layers.py | fix: use get_speculate to the number of layers (#1737) | 2024-04-30 11:45:26 +02:00
log.py | v1.3.4 | 2023-12-22 15:46:04 +01:00
logits_process.py | Fixing frequency penalty (#1811) | 2024-04-30 12:13:23 +02:00
paged_attention.py | add intel xpu support for TGI (#1475) | 2024-04-26 15:48:58 +02:00
peft.py | fix: fix local loading for .bin models (#1419) | 2024-01-09 15:21:00 +01:00
speculate.py | chore: formatting | 2023-12-11 14:49:52 +01:00
tokens.py | Use the generation config. (#1808) | 2024-04-25 19:41:50 +02:00
watermark.py | Fixing watermark. (#851) | 2023-08-16 07:17:26 +02:00
weights.py | Phi3 support (#1797) | 2024-04-23 18:40:05 +02:00