hf_text-generation-inference/server/text_generation_server/models
drbh 04e1af94d7
Enable multiple LoRA adapters (#2010)
* feat: first draft of loading multiple LoRA adapters

* feat: load weights within layer and refactor lora pass

* fix: refactor and reduce lora math

* feat: baseline impl single request multi lora support

* feat: prefer lorax implementation and port loading logic

* fix: prefer adapter_data and refactors

* feat: prefer lorax's custom punica kernels and add MLP LoRAs

* fix: adjust batch for bgmv

* fix: adjust adapter_segments logic when in batch

* fix: refactor and move changes to v3 proto

* fix: pass model_id for all flash causal lms

* fix: pass model_id for all causal and seq2seq lms

* fix: add model_id to model test

* feat: add lora support to mistral and refactors

* feat: prefer model id in request

* fix: include rust code for adapter id

* feat: bump launcher and add new lora docs

* feat: support base model generation and refactors

* fix: rename doc to retry ci build

* feat: support VLM models

* fix: add adapter_data param and avoid missing layers

* fix: add adapter_data param to phi and neox

* fix: update all models forwards to include adapter_data

* fix: add model_id to IdeficsCausalLM

* Update lora.md

Fixed a typo

* Update lora.md

Fixing spam image

* fix: add lora kernel to dockerfile, support running without kernels and refactors

* fix: avoid dockerfile conflict

* fix: refactors and adjust flash llama lora logic

* fix: skip llama test due to CI issue (temp)

* fix: skip llama test CI (temp) 2

* fix: revert skips and prefer updated ci token for tests

* fix: refactors and helpful comments

* fix: add noop in TensorParallelAdapterRowLinear too

* fix: refactor and move shard_lora_weights logic

* fix: exit early if no adapter_data

---------

Co-authored-by: Derek <datavistics@gmail.com>
2024-06-25 14:46:27 -04:00
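Taken together, the commits above add per-request adapter selection on top of a single deployment: several LoRA adapters are loaded at startup, and each request can either route through one of them or fall back to the base model. Below is a minimal client-side sketch of that flow, assuming the server was launched with the adapters preloaded (via the launcher option this PR documents in lora.md) and that a request selects an adapter through an `adapter_id` generation parameter; the URL, adapter names, and helper function are illustrative placeholders, not part of this repository.

```python
# Minimal client-side sketch of the multi-LoRA flow this PR enables.
# Assumptions: the server was started with several adapters preloaded,
# and a request picks one through the `adapter_id` generation parameter.
# The URL and adapter names are placeholders.
from typing import Optional

import requests

TGI_URL = "http://localhost:3000/generate"


def generate(prompt: str, adapter_id: Optional[str] = None) -> str:
    """Send a generate request, optionally routed through a LoRA adapter."""
    parameters = {"max_new_tokens": 64}
    if adapter_id is not None:
        # When omitted, the base model generates (the "support base model
        # generation" commit above covers this path).
        parameters["adapter_id"] = adapter_id
    resp = requests.post(TGI_URL, json={"inputs": prompt, "parameters": parameters})
    resp.raise_for_status()
    return resp.json()["generated_text"]


if __name__ == "__main__":
    # Two requests against the same deployment, each using a different adapter.
    print(generate("Classify this ticket: my order never arrived.", "org/support-lora"))
    print(generate("Write a SQL query for monthly revenue.", "org/sql-lora"))
```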
custom_modeling Enable multiple LoRA adapters (#2010) 2024-06-25 14:46:27 -04:00
__init__.py Enable multiple LoRA adapters (#2010) 2024-06-25 14:46:27 -04:00
bloom.py Enable multiple LoRA adapters (#2010) 2024-06-25 14:46:27 -04:00
causal_lm.py Enable multiple LoRA adapters (#2010) 2024-06-25 14:46:27 -04:00
flash_causal_lm.py Enable multiple LoRA adapters (#2010) 2024-06-25 14:46:27 -04:00
flash_cohere.py Enable multiple LoRA adapters (#2010) 2024-06-25 14:46:27 -04:00
flash_dbrx.py Enable multiple LoRA adapters (#2010) 2024-06-25 14:46:27 -04:00
flash_gemma.py Enable multiple LoRA adapters (#2010) 2024-06-25 14:46:27 -04:00
flash_gpt2.py Enable multiple LoRA adapters (#2010) 2024-06-25 14:46:27 -04:00
flash_llama.py Enable multiple LoRA adapters (#2010) 2024-06-25 14:46:27 -04:00
flash_mistral.py Enable multiple LoRA adapters (#2010) 2024-06-25 14:46:27 -04:00
flash_mixtral.py MLPSpeculator. (#1865) 2024-05-14 12:33:18 +02:00
flash_neox.py Enable multiple LoRA adapters (#2010) 2024-06-25 14:46:27 -04:00
flash_phi.py Enable multiple LoRA adapters (#2010) 2024-06-25 14:46:27 -04:00
flash_qwen2.py Enable multiple LoRA adapters (#2010) 2024-06-25 14:46:27 -04:00
flash_rw.py Enable multiple LoRA adapters (#2010) 2024-06-25 14:46:27 -04:00
flash_santacoder.py Enable multiple LoRA adapters (#2010) 2024-06-25 14:46:27 -04:00
flash_starcoder2.py Enable multiple LoRA adapters (#2010) 2024-06-25 14:46:27 -04:00
galactica.py Enable multiple LoRA adapters (#2010) 2024-06-25 14:46:27 -04:00
globals.py Enable multiple LoRA adapters (#2010) 2024-06-25 14:46:27 -04:00
gpt_neox.py Enable multiple LoRA adapters (#2010) 2024-06-25 14:46:27 -04:00
idefics.py Enable multiple LoRA adapters (#2010) 2024-06-25 14:46:27 -04:00
idefics2.py MLPSpeculator. (#1865) 2024-05-14 12:33:18 +02:00
idefics_causal_lm.py Enable multiple LoRA adapters (#2010) 2024-06-25 14:46:27 -04:00
llava_next.py MLPSpeculator. (#1865) 2024-05-14 12:33:18 +02:00
mamba.py Enable multiple LoRA adapters (#2010) 2024-06-25 14:46:27 -04:00
model.py Enable multiple LoRA adapters (#2010) 2024-06-25 14:46:27 -04:00
mpt.py Enable multiple LoRA adapters (#2010) 2024-06-25 14:46:27 -04:00
opt.py Enable multiple LoRA adapters (#2010) 2024-06-25 14:46:27 -04:00
pali_gemma.py server: use chunked inputs 2024-06-07 08:09:04 +02:00
phi.py Enable multiple LoRA adapters (#2010) 2024-06-25 14:46:27 -04:00
rw.py Enable multiple LoRA adapters (#2010) 2024-06-25 14:46:27 -04:00
santacoder.py Enable multiple LoRA adapters (#2010) 2024-06-25 14:46:27 -04:00
seq2seq_lm.py Enable multiple LoRA adapters (#2010) 2024-06-25 14:46:27 -04:00
t5.py Enable multiple LoRA adapters (#2010) 2024-06-25 14:46:27 -04:00
types.py chore: add pre-commit (#1569) 2024-02-16 11:58:58 +01:00
vlm_causal_lm.py Enable multiple LoRA adapters (#2010) 2024-06-25 14:46:27 -04:00
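Several of the fixes in the commit series ("adjust batch for bgmv", "adjust adapter_segments logic when in batch") concern the bookkeeping needed when a single batch mixes requests for different adapters: requests are grouped so that each adapter applies to one contiguous slice of the batch, which is the layout the punica-style kernels operate on. The sketch below only illustrates that segmentation idea; the class and function names are hypothetical and do not mirror the actual code in flash_causal_lm.py or the custom_modeling layers.

```python
# Hypothetical sketch of adapter segmentation for a batched forward pass.
# Requests are sorted by adapter index so each adapter owns one contiguous
# slice of the batch; kernels can then loop over segments instead of rows.
# Names are illustrative only, not TGI's actual implementation.
from dataclasses import dataclass
from typing import List, Tuple


@dataclass
class AdapterSegments:
    indices: List[int]           # per-request adapter index, after sorting
    segment_starts: List[int]    # start offsets of each segment, plus a sentinel end
    segment_adapters: List[int]  # which adapter each segment belongs to


def build_segments(adapter_per_request: List[int]) -> Tuple[List[int], AdapterSegments]:
    """Return the request order and contiguous adapter segments for a batch."""
    # Stable sort by adapter index keeps requests for the same adapter together.
    order = sorted(range(len(adapter_per_request)), key=lambda i: adapter_per_request[i])
    sorted_adapters = [adapter_per_request[i] for i in order]

    starts, owners = [], []
    for pos, adapter in enumerate(sorted_adapters):
        if not owners or owners[-1] != adapter:
            starts.append(pos)
            owners.append(adapter)
    starts.append(len(sorted_adapters))  # sentinel end offset

    return order, AdapterSegments(sorted_adapters, starts, owners)


if __name__ == "__main__":
    # Batch of 5 requests: adapter 1, base model (-1), adapter 0, adapter 1, adapter 0.
    order, segs = build_segments([1, -1, 0, 1, 0])
    print(order)                  # [1, 2, 4, 0, 3]
    print(segs.segment_starts)    # [0, 1, 3, 5]
    print(segs.segment_adapters)  # [-1, 0, 1]
```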