hf_text-generation-inference/server
zhangsibo1129 eba6ab1c5d
fix discard_names bug in safetensors conversion (#1052)
# What does this PR do?


The model class attributes `_tied_weights_keys` and
`_keys_to_ignore_on_load_missing` can only be `None` or a list.
`getattr(class_, "_keys_to_ignore_on_load_missing", [])` returns `None`
when the attribute exists but is set to `None` (the `[]` default is only
used when the attribute is missing entirely), so
`discard_names.extend(None)` raises a `TypeError`, even though
`_tied_weights_keys` exists.
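
The pitfall and the defensive pattern can be sketched as follows (a minimal, self-contained illustration; the `Model` class and the `None` check are stand-ins, not the exact PR diff):

```python
class Model:
    # The attribute exists but is explicitly set to None,
    # which is a legal value for this class attribute.
    _keys_to_ignore_on_load_missing = None

# getattr's default is only used when the attribute is *missing*,
# not when it is present and set to None:
value = getattr(Model, "_keys_to_ignore_on_load_missing", [])
assert value is None  # the [] default was never applied

discard_names = []
# The buggy pattern: extending a list with None raises TypeError.
try:
    discard_names.extend(value)
except TypeError:
    pass  # this is the failure the PR fixes

# A defensive sketch of the fix: guard against None before extending.
keys = getattr(Model, "_keys_to_ignore_on_load_missing", None)
if keys is not None:
    discard_names.extend(keys)
```

The key design point is that `getattr`'s default cannot distinguish "attribute absent" from "attribute present but `None`", so the `None` case must be handled explicitly.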

## Who can review?

@OlivierDehaene  @Narsil

---------

Co-authored-by: Nicolas Patry <patry.nicolas@protonmail.com>
2023-09-26 15:05:40 +02:00
custom_kernels feat(server): Rework model loading (#344) 2023-06-08 14:51:52 +02:00
exllama_kernels feat: add cuda memory fraction (#659) 2023-07-24 11:43:58 +02:00
tests Rebased #617 (#868) 2023-08-28 11:43:47 +02:00
text_generation_server fix discard_names bug in safetensors conversion (#1052) 2023-09-26 15:05:40 +02:00
.gitignore Add AWQ quantization inference support (#1019) (#1054) 2023-09-25 15:31:27 +02:00
Makefile Add AWQ quantization inference support (#1019) (#1054) 2023-09-25 15:31:27 +02:00
Makefile-awq Add AWQ quantization inference support (#1019) (#1054) 2023-09-25 15:31:27 +02:00
Makefile-flash-att feat(server): use latest flash attention commit (#543) 2023-07-04 20:23:55 +02:00
Makefile-flash-att-v2 feat(server): flash attention v2 (#624) 2023-07-18 16:21:18 +02:00
Makefile-vllm Backport https://github.com/vllm-project/vllm/pull/936 (#977) 2023-09-04 15:00:19 +02:00
README.md feat(router): refactor API and add openAPI schemas (#53) 2023-02-03 12:43:37 +01:00
poetry.lock Add AWQ quantization inference support (#1019) (#1054) 2023-09-25 15:31:27 +02:00
pyproject.toml Add AWQ quantization inference support (#1019) (#1054) 2023-09-25 15:31:27 +02:00
requirements.txt New release. (#941) 2023-08-29 14:28:22 +02:00

README.md

Text Generation Inference Python gRPC Server

A Python gRPC server for Text Generation Inference

Install

make install

Run

make run-dev