hf_text-generation-inference/server/text_generation_server
xiaobin 4cce84301b
fit for baichuan models (#981)
As more and more people begin to use Baichuan's open-source models, their
influence is growing, especially in China. Many community members are
interested in adding support for Baichuan models to TGI. Meanwhile, Baichuan
is a very open company that plans to open-source more and more models in the
future. Taking all this into consideration, we would like to add support for
Baichuan models to TGI. To do this, we need to make some changes, which we
hope can be merged into the main branch of TGI. Going forward, we would be
happy to help maintain support for Baichuan models in TGI. We sincerely hope
that our pull request can be accepted. Thank you.

By the way, this round of changes is mainly for supporting Baichuan-7B.

---------

Co-authored-by: xiaoyuze <xiaoyuze@baichuan.com>
Co-authored-by: Nicolas Patry <patry.nicolas@protonmail.com>
2023-09-08 16:51:34 +02:00
models            fit for baichuan models (#981)                      2023-09-08 16:51:34 +02:00
pb                feat(server): clear cache on error (#143)           2023-03-28 11:29:35 +02:00
utils             fit for baichuan models (#981)                      2023-09-08 16:51:34 +02:00
__init__.py       feat(clients): Python client (#103)                 2023-03-07 18:52:22 +01:00
cache.py          fix(server): decrease memory fragmentation (#557)   2023-07-06 14:28:33 +02:00
cli.py            Fixing the lora adaptation on docker. (#935)        2023-08-28 11:13:24 +02:00
interceptor.py    feat(server): empty cache on errors                 2023-07-12 17:06:19 +02:00
server.py         Adding Idefics multi modal model. (#842)            2023-08-17 14:38:49 +02:00
tracing.py        feat(clients): Python client (#103)                 2023-03-07 18:52:22 +01:00