hf_text-generation-inference/server/text_generation_server/utils
Nicolas Patry 49b4b33e80
feat(server): Update convert logic. (#483)
Should be more robust to shared tensors (OK when using
      `from_pretrained`), but it forces us to add new checks in our
      loading code, since the chosen key to keep might differ from the
      one `transformers` keeps.

---------

Co-authored-by: Ubuntu <ubuntu@ip-172-31-41-161.ec2.internal>
2023-06-23 12:40:46 +02:00
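The convert change above concerns tied weights: `from_pretrained` tolerates tensors that share storage, but a safetensors file stores each tensor only once, so the converter has to keep a single key per shared group and the loader has to know which dropped keys alias the kept one. The sketch below illustrates that idea only; the helper name `dedup_shared_tensors`, the alphabetical tie-break, and the alias map are assumptions for illustration, not the actual `convert.py`/`weights.py` code.

```python
from collections import defaultdict

import torch
from safetensors.torch import save_file


def dedup_shared_tensors(state_dict):
    """Group tensors that share the same storage, keep one key per group,
    and return an alias map from each dropped key to the kept key."""
    groups = defaultdict(list)
    for name, tensor in state_dict.items():
        groups[tensor.untyped_storage().data_ptr()].append(name)

    kept, aliases = {}, {}
    for names in groups.values():
        keep = sorted(names)[0]        # illustrative tie-break; the key actually kept
        kept[keep] = state_dict[keep]  # may differ from the one `transformers` keeps
        for other in names:
            if other != keep:
                aliases[other] = keep
    return kept, aliases


# Tied embedding / LM head: two state-dict keys, one underlying tensor.
state_dict = {"lm_head.weight": torch.zeros(4, 4)}
state_dict["transformer.wte.weight"] = state_dict["lm_head.weight"]

tensors, aliases = dedup_shared_tensors(state_dict)
save_file(tensors, "model.safetensors")  # would fail on the raw, shared state_dict
# At load time, a key lookup has to fall back through `aliases`
# when the requested name was dropped during conversion.
```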
__init__.py feat(server): Rework model loading (#344) 2023-06-08 14:51:52 +02:00
convert.py feat(server): Update convert logic. (#483) 2023-06-23 12:40:46 +02:00
dist.py feat(server): Rework model loading (#344) 2023-06-08 14:51:52 +02:00
hub.py feat(server): improve flash attention import errors (#465) 2023-06-19 09:53:45 +02:00
layers.py feat(server): optimize dist ops (#434) 2023-06-09 11:55:29 +02:00
logits_process.py fix(server): fix warpers on CPU (#472) 2023-06-20 11:06:10 +02:00
tokens.py feat(server): support vectorized warpers in flash causal lm (#317) 2023-05-26 12:30:27 +02:00
watermark.py fix(server): fix flash-neox scores warping (#137) 2023-03-24 18:21:41 +01:00
weights.py feat(server): Update convert logic. (#483) 2023-06-23 12:40:46 +02:00