hf_text-generation-inference/clients/python/text_generation
Nicolas Patry d18ed5cfc5
Mllama flash version (#2585)
* Working loading state.

* Preprocessing.

* Working state? (Broke idefics1 temporarily).

* Cleaner condition.

* Fix idefics.

* Updating config, removing TODO

* Mllama

* Upgrade transformers 4.45

* Flashing mllama.

* Starting to get there.

* Working state.

* Integration tests for mllama (cutting to 10 tokens because there seems
to be instability after that, meaning the size of the batch matters).

* Updating model link.

* Earlier assert.

* Fix vlm?

* remove log.

* Force ignore all images but last.

* Default dtype bfloat16.

* Update integration test after switch to bf16.

* Remove dead code.

* Removed dead code.

* Upgrade the flake to latest transformers/tokenizers

* Move to hf tgi-nix

* Upgrade to 0.5.0
2024-10-02 11:22:13 +02:00
__init__.py feat: add ruff and resolve issue (#2262) 2024-07-26 10:29:09 -04:00
client.py Pr 2451 ci branch (#2454) 2024-08-26 20:19:38 -04:00
errors.py feat(clients): Python client (#103) 2023-03-07 18:52:22 +01:00
inference_api.py feat: add ruff and resolve issue (#2262) 2024-07-26 10:29:09 -04:00
types.py Mllama flash version (#2585) 2024-10-02 11:22:13 +02:00
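Together these files make up the `text_generation` Python client: client.py holds the Client and AsyncClient classes, types.py the request/response types touched by #2585, errors.py the error hierarchy, and inference_api.py the Hugging Face Inference API wrappers. A minimal usage sketch against a running TGI server (the endpoint URL and prompt below are placeholders, not taken from this repo):

    from text_generation import Client

    # Assumed local endpoint; point this at your own TGI deployment.
    client = Client("http://127.0.0.1:8080")

    # Blocking call: returns a Response object defined in types.py.
    response = client.generate("What is deep learning?", max_new_tokens=10)
    print(response.generated_text)

    # Streaming call: yields one StreamResponse per generated token.
    text = ""
    for stream_response in client.generate_stream("What is deep learning?", max_new_tokens=10):
        if not stream_response.token.special:
            text += stream_response.token.text
    print(text)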