hf_text-generation-inference/backends
Morgan Funtowicz 84eead219a feat(backend): correctly setup llama_context providing n_threads and n_ubatch 2024-11-21 21:43:50 +01:00
client Choosing input/total tokens automatically based on available VRAM? (#2673) 2024-10-28 04:59:49 +01:00
grpc-metadata Rebase TRT-llm (#2331) 2024-07-31 10:33:10 +02:00
llamacpp feat(backend): correctly setup llama_context providing n_threads and n_ubatch 2024-11-21 21:43:50 +01:00
trtllm feat(llamacpp): initial end2end build 2024-11-14 08:42:01 +01:00
v2 Fixing "deadlock" when python prompts for trust_remote_code by always (#2664) 2024-10-25 06:39:21 +02:00
v3 Choosing input/total tokens automatically based on available VRAM? (#2673) 2024-10-28 04:59:49 +01:00