hf_text-generation-inference/backends
Latest commit: Morgan Funtowicz, 298367cdfd, "feat(backend): fix when num_cores_per_instance is equals to zero with the size of the generated core allocation" (2024-11-28 14:53:35 +01:00)
client          Choosing input/total tokens automatically based on available VRAM? (#2673)                                      2024-10-28 04:59:49 +01:00
grpc-metadata   Rebase TRT-llm (#2331)                                                                                          2024-07-31 10:33:10 +02:00
llamacpp        feat(backend): fix when num_cores_per_instance is equals to zero with the size of the generated core allocation 2024-11-28 14:53:35 +01:00
trtllm          chore: remove unrelated change to trtllm                                                                        2024-11-22 15:42:09 +01:00
v2              Fixing "deadlock" when python prompts for trust_remote_code by always (#2664)                                   2024-10-25 06:39:21 +02:00
v3              Choosing input/total tokens automatically based on available VRAM? (#2673)                                      2024-10-28 04:59:49 +01:00