hf_text-generation-inference/backends
Latest commit: dc6435e3a5 by Morgan Funtowicz, "feat(backend): create llama_context_params with default factory" (2024-11-28 23:57:13 +01:00)
client         Choosing input/total tokens automatically based on available VRAM? (#2673)     2024-10-28 04:59:49 +01:00
grpc-metadata  Rebase TRT-llm (#2331)                                                         2024-07-31 10:33:10 +02:00
llamacpp       feat(backend): create llama_context_params with default factory                2024-11-28 23:57:13 +01:00
trtllm         chore: remove unrelated change to trtllm                                       2024-11-22 15:42:09 +01:00
v2             Fixing "deadlock" when python prompts for trust_remote_code by always (#2664)  2024-10-25 06:39:21 +02:00
v3             Choosing input/total tokens automatically based on available VRAM? (#2673)     2024-10-28 04:59:49 +01:00