README.md

Documentation available at: https://huggingface.co/docs/text-generation-inference

Release

When making a release, please update the version number referenced throughout the documentation with:

# Dots are escaped so sed matches them literally in the search pattern
# (the escaping is harmless on the replacement side).
export OLD_VERSION="2\.0\.3"
export NEW_VERSION="2\.0\.4"
# Replace every occurrence of the old version across all Markdown files.
find . -name '*.md' -exec sed -i -e "s/$OLD_VERSION/$NEW_VERSION/g" {} \;
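
As a quick sanity check after running the replacement (a suggested step, not part of the official release process), you can confirm that no stale version strings remain in the Markdown files:

# List any Markdown files still containing the old version; grep exits
# non-zero when nothing matches, so the echo reports a clean result.
grep -rn --include='*.md' "$OLD_VERSION" . || echo "No stale version strings found."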