text-generation-inference / backends/v3/src

Latest commit: 0c9b6cdd76 by Nicolas Patry
Choosing input/total tokens automatically based on available VRAM? (#2673)
Commit message:

* Choosing input/total tokens automatically based on available VRAM?
* Update doc.
* Remove generated files.
* Trying to fix non chunking targets.
* Attempt #2
* fix.
* QuantLinear is rocm compatible.
* Much simpler logic after the overhead.
* Updating logic + non flash.
* Revert doc text.
* Simple updates.
* Fix integration mt0 (transformers update).

Committed 2024-10-28 04:59:49 +01:00
File                 Last commit                                                                   Date
client               Choosing input/total tokens automatically based on available VRAM? (#2673)    2024-10-28 04:59:49 +01:00
backend.rs           feat: prefill chunking (#2600)                                                2024-10-16 12:49:33 +02:00
block_allocator.rs   Lots of improvements (Still 2 allocators) (#2449)                             2024-08-29 16:29:01 +02:00
lib.rs               Choosing input/total tokens automatically based on available VRAM? (#2673)    2024-10-28 04:59:49 +01:00
main.rs              Choosing input/total tokens automatically based on available VRAM? (#2673)    2024-10-28 04:59:49 +01:00
queue.rs             feat: prefill chunking (#2600)                                                2024-10-16 12:49:33 +02:00
radix.rs             Adding a test for FD. (#2516)                                                 2024-09-16 17:00:54 +02:00