zhangsibo1129 1e3ec3c91f
Complete FastLinear.load parameters in OPTDecoder initialization (#1060)
# What does this PR do?


`FastLinear.load` takes 4 parameters, but in the calls below only 3 are
given; the `weights` argument is missing. This PR fixes this.

```python
# server/text_generation_server/models/custom_modeling/opt_modeling.py
        if config.word_embed_proj_dim != config.hidden_size:
            self.project_out = FastLinear.load(
                config, prefix="model.decoder.project_out", bias=False
            )
        else:
            self.project_out = None

        if config.word_embed_proj_dim != config.hidden_size:
            self.project_in = FastLinear.load(
                config, prefix="model.decoder.project_in", bias=False
            )
```
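A minimal, self-contained sketch of the problem and the fix, using hypothetical stand-ins (this `FastLinear` and the dict-backed `weights` are illustrative, not the real TGI classes): a loader with the 4-parameter signature fails with a `TypeError` when `weights` is omitted, and succeeds once it is passed through.

```python
from types import SimpleNamespace


class FastLinear:
    """Toy stand-in mirroring the 4-parameter load signature."""

    def __init__(self, weight, bias):
        self.weight = weight
        self.bias = bias

    @classmethod
    def load(cls, config, prefix, weights, bias):
        # The real loader reads `{prefix}.weight` (and `{prefix}.bias` when
        # bias=True) from the checkpoint via `weights`; here we just use a dict.
        weight = weights[f"{prefix}.weight"]
        b = weights[f"{prefix}.bias"] if bias else None
        return cls(weight, b)


config = SimpleNamespace(word_embed_proj_dim=512, hidden_size=768)
weights = {"model.decoder.project_out.weight": "W_out"}

# Before the fix, the 3-argument call raises:
# TypeError: load() missing 1 required positional argument: 'weights'

# After the fix, `weights` is passed through:
project_out = FastLinear.load(
    config, prefix="model.decoder.project_out", weights=weights, bias=False
)
print(project_out.weight)  # W_out
```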

## Who can review?

Anyone in the community is free to review the PR once the tests have passed.
Feel free to tag members/contributors who may be interested in your PR.

2023-09-27 12:25:59 +02:00