This change adds support for prefix caching to the v3 router. It
is split out from the backend support to ease reviewing.
For now, prefix caching is only enabled with `USE_PREFIX_CACHING=1`;
in this case, the router switches to `RadixAllocator`. This
allocator uses a radix trie to keep track of prefills that were
seen previously. If a new prefill is a prefix of a previously-seen
prefill, the router will send a request with `prefix_len>0`, which
the backend can use to reuse KV blocks from the cache rather than
recomputing them.
Even though backend support is not added in this PR, the backend
will still work with prefix caching enabled: the prefix lengths
are simply ignored.
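The actual `RadixAllocator` is implemented in the router's Rust code; the following is a minimal Python sketch of the idea only (a plain per-token trie rather than a compressed radix trie; all names are made up):
```python
class TrieNode:
    def __init__(self):
        self.children = {}  # token id -> TrieNode

class PrefixTrie:
    """Toy prefix index over token ids, one token per edge."""

    def __init__(self):
        self.root = TrieNode()

    def insert(self, tokens):
        """Record a prefill's token ids after it has been processed."""
        node = self.root
        for t in tokens:
            node = node.children.setdefault(t, TrieNode())

    def longest_prefix(self, tokens):
        """Length of the longest previously-seen prefix, i.e. the
        `prefix_len` the router would attach to the request."""
        node, prefix_len = self.root, 0
        for i, t in enumerate(tokens, start=1):
            if t not in node.children:
                break
            node = node.children[t]
            prefix_len = i
        return prefix_len

trie = PrefixTrie()
trie.insert([1, 2, 3, 4])
assert trie.longest_prefix([1, 2, 3, 9]) == 3  # reuse 3 cached tokens
```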
This change adds support for FlashInfer. FlashInfer can be enabled using
`FLASH_INFER=1` and is currently only implemented in `FlashCausalLM`.
Since this functionality is currently only for testing, FlashInfer is
not installed anywhere yet.
The FlashInfer API is quite different from FlashAttention/vLLM in that
it requires more global bookkeeping:
* A wrapper class needs to be constructed (which we just call *state*).
Since this is fairly expensive (due to pinned host memory allocation),
we only do this once in a FlashCausalLM instance or for each CUDA
Graph size.
* Each model forward call needs to be wrapped in `begin_forward` and
`end_forward`. This sets up data structures that can be reused for all
calls to attention for that forward call.
When calling attention, we need access to the state object. To avoid
passing an argument down the call chain (which would require changes to
all models), we use a context variable.
Each model forward call is wrapped using a context manager that does all
the bookkeeping for such a call:
* Set the context variable to the forward call's state.
* Call `begin_forward` on the state.
* Yield.
* Call `end_forward` on the state.
* Reset the context variable.
We cannot use a single shared global variable for this, since e.g. CUDA
Graphs of different sizes each have their own state.
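A minimal sketch of this pattern, assuming a state object that exposes `begin_forward`/`end_forward` (the other names here are hypothetical):
```python
from contextlib import contextmanager
from contextvars import ContextVar

_forward_state: ContextVar = ContextVar("forward_state")

@contextmanager
def forward_context(state, *begin_args, **begin_kwargs):
    # Point the context variable at this forward call's state so that
    # attention code deep in the call chain can look it up.
    token = _forward_state.set(state)
    state.begin_forward(*begin_args, **begin_kwargs)
    try:
        yield
    finally:
        state.end_forward()
        # Restore whatever state (if any) was active before, so states
        # belonging to different CUDA Graphs never leak into each other.
        _forward_state.reset(token)

def attention(q, k, v):
    # Inside a model, fetch the current forward call's state instead of
    # threading it through every layer's argument list.
    state = _forward_state.get()
    return state.run(q, k, v)  # `run` is a stand-in for the FlashInfer call
```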
* Fix unsigned integer underflow
Passing --max-batch-size to the launcher actually had no effect,
because after a few requests the max_size passed to State::next_batch
would underflow, becoming a large positive number.
In the scheduler, as soon as the cached batch size reaches
max_batch_size, the max_size passed to next_batch becomes 0.
Since the only check in that function is
```
if Some(batch_requests.len()) == max_size {
    break;
}
```
and it runs only after `batch_requests.len()` has already
become 1, it does nothing to prevent more than 0
requests from being batched.
The server then ends up with a cached batch larger than
max_batch_size, and `max_size - batch_size as usize`
underflows.
Signed-off-by: Max de Bayser <mbayser@br.ibm.com>
* fix: update v3 scheduler and ensure max_batch_size > 0
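In spirit, the corrected logic looks like the following Python sketch (the real scheduler is Rust; names and structure here are assumptions):
```python
def next_batch(queue, max_size):
    # Stop *before* exceeding the limit: check `>=` instead of the old
    # `==`, which could be stepped past once len() was already 1.
    batch = []
    while queue:
        if max_size is not None and len(batch) >= max_size:
            break
        batch.append(queue.pop(0))
    return batch

def remaining_budget(max_batch_size, cached_batch_size):
    # Reject max_batch_size == 0 up front and saturate the subtraction
    # at zero, so it can never underflow.
    assert max_batch_size > 0
    return max(0, max_batch_size - cached_batch_size)
```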
---------
Signed-off-by: Max de Bayser <mbayser@br.ibm.com>
Co-authored-by: Max de Bayser <mbayser@br.ibm.com>
* Update Quantization docs and minor doc fix.
* update readme with latest quants info
* Apply suggestions from code review
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* up
---------
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* hotfix: fix xpu crash caused by the code refactor; torch.xpu relies on importing ipex
Signed-off-by: Wang, Yi A <yi.a.wang@intel.com>
* re-enable gemma2 on xpu
Signed-off-by: Wang, Yi A <yi.a.wang@intel.com>
* fix a regression in ipex flash attention
Signed-off-by: Wang, Yi A <yi.a.wang@intel.com>
---------
Signed-off-by: Wang, Yi A <yi.a.wang@intel.com>
Co-authored-by: Wang, Yi A <yi.a.wang@intel.com>
* Update __init__.py
Fix issue with NoneType comparison for max_input_tokens and sliding_window
- Add default values for max_input_tokens and sliding_window to handle None cases.
- Ensure the comparison between max_input_tokens and sliding_window is handled correctly to prevent TypeError.
- This change addresses the error: TypeError: '<=' not supported between instances of 'int' and 'NoneType'.
* Update __init__.py
Handle NoneType in the sliding_window comparison to fix a TypeError in __init__.py, ensuring the comparison logic accounts for None values.
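A minimal sketch of the guard (variable names taken from the commit; the exact call site is an assumption):
```python
def uses_sliding_window(max_input_tokens, sliding_window):
    # Compare only when both values are set; previously a None value
    # raised:
    #   TypeError: '<=' not supported between instances of 'int' and 'NoneType'
    if sliding_window is None or max_input_tokens is None:
        return False
    return sliding_window <= max_input_tokens
```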
* fix: syntax/style tweak
---------
Co-authored-by: Praz <prazanth2006@gmail.com>
* add gptj modeling
Signed-off-by: Wang, Yi A <yi.a.wang@intel.com>
* fix: update docs for model addition
* fix: adjust syntax typo
* fix: adjust syntax typo again
---------
Signed-off-by: Wang, Yi A <yi.a.wang@intel.com>
Co-authored-by: Wang, Yi A <yi.a.wang@intel.com>
* feat: implement a templated endpoint for visibility into chat requests
* feat: improve to tokenize too
* fix: adjust return type
* feat: simplify prepare_chat_input logic and adjust start stop chars
* fix: attempt forward on flash attn2 to check hardware support
* fix: warn window_size_left when using flash attn 1
* fix: prefer version check over test op and avoid window_size_left if not flash attn2
* fix: improve conditional and error message
* fix: update sliding window conditional
* fix: simplify changes and revert model changes
* fix: avoid changing conditional
* fix: typo tweak
- Always return the hidden states.
- Create the output tensor inside the `attention` and `paged_attention`
functions.
This removes the difference in how the output is handled between
attention (output parameter) and paged attention (return value). It
also removes the assumption that the attention implementation can
write to an output tensor (in preparation for FlashInfer).
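A rough sketch of the signature change (heavily simplified; `run_kernel` stands in for the real attention kernel):
```python
import torch

def run_kernel(q, k, v):
    # Stand-in for the actual attention kernel.
    return torch.softmax(q @ k.transpose(-1, -2), dim=-1) @ v

# Before: the caller pre-allocated `out` and attention wrote into it.
def attention_before(q, k, v, out):
    out.copy_(run_kernel(q, k, v))

# After: attention allocates and returns the output tensor itself, so
# backends that return a fresh tensor (like FlashInfer) fit the same API.
def attention_after(q, k, v):
    out = torch.empty_like(q)
    out.copy_(run_kernel(q, k, v))
    return out
```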
* Fix cache block size for flash decoding
This seems to have been accidentally dropped during the TRT-LLM
PR rebase.
* Also run CI on changes to `backends`
The `GPTQWeightsLoader` was structured like this in pseudocode:
```
if marlin:
    Set up tensors in a way that GPTQ-Marlin expects
else:
    Set up tensors in a way that ExLlama/GPTQ/AWQ expect
```
However, the GPTQ-Marlin implementation details should really live in the
`marlin` module. So move the former part out into a separate
`GPTQMarlinWeightsLoader`.
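Structurally, the split amounts to something like this sketch (constructor arguments and real method bodies omitted):
```python
class GPTQWeightsLoader:
    """Sets up tensors the way the ExLlama/GPTQ/AWQ kernels expect."""

class GPTQMarlinWeightsLoader:
    """Sets up tensors the way the GPTQ-Marlin kernel expects; lives in
    the `marlin` module alongside the rest of the Marlin details."""

def weights_loader(use_marlin: bool):
    # The Marlin branch of the old if/else becomes its own loader class.
    return GPTQMarlinWeightsLoader() if use_marlin else GPTQWeightsLoader()
```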
* wip
wip
refactor
refactor
Initial setup for CXX binding to TRTLLM
Working FFI call for TGI and TRTLLM backend
Remove unused parameters and force tokenizer name to be set
Overall build TRTLLM and deps through CMake build system
Enable end to end CMake build
First version loading engines and making it ready for inference
Remembering to check how we can detect support for chunked context
Move to latest TensorRT-LLM version
Specify which default log level to use depending on CMake build type
make leader executor mode work
unconditionally call InitializeBackend on the FFI layer
bind to CUDA::nvml to retrieve compute capabilities at runtime
updated logic and comment to detect cuda compute capabilities
implement the Stream method to send new tokens through a callback
use spdlog release 1.14.1 moving forward
update trtllm to latest version a96cccafcf6365c128f004f779160951f8c0801c
correctly tell cmake to build dependent tensorrt-llm required libraries
create cmake install target to put everything relevant in installation folder
add auth_token CLI argument to provide hf hub authentication token
allow converting huggingface::tokenizers error to TensorRtLlmBackendError
use correct include for spdlog
include guard to build example in cmakelists
working setup of the ffi layer
remove fmt import
use external fmt lib
end to end ffi flow working
make sure to track include/ffi.h to trigger rebuild from cargo
impl the rust backend which currently cannot move the actual computation to a background thread
expose shutdown function at ffi layer
impl RwLock scenario for TensorRtLlmBackend
oops missing c++ backend definitions
compute the number of maximum new tokens for each request independently
make sure the context is not dropped in the middle of the async decoding.
remove unnecessary log
add all the necessary plumbing to return the generated content
update invalid doc in cpp file
correctly forward back the log probabilities
remove unneeded scope variable for now
refactor Stream impl for Generation to factorise code
expose the internal missing start/queue timestamp
forward tgi parameters rep/freq penalty
add some more validation about grammar not supported
define a shared struct to hold the result of a decoding step
expose information about potential error happening while decoding
remove logging
add logging in case of decoding error
make sure executor_worker is provided
add initial Dockerfile for TRTLLM backend
add some more information in CMakeLists.txt to correctly install executorWorker
add some more information in CMakeLists.txt to correctly find and install nvrtc wrapper
simplify prebuilt trtllm libraries name definition
do the same name definition stuff for tensorrt_llm_executor_static
leverage pkg-config to probe libraries paths and reuse new install structure from cmake
fix bad copy/paste missing nvinfer linkage direction
align all the linker search dependencies
add missing pkgconfig folder for MPI in Dockerfile
correctly setup linking search path for runtime layer
fix missing / before tgi lib path
adding missing ld_library_path for cuda stubs in Dockerfile
update tgi entrypoint
commenting out Python part for TensorRT installation
refactored docker image
move to TensorRT-LLM v0.11.0
make docker linter happy with same capitalization rule
fix typo
refactor the compute capabilities detection along with num gpus
update TensorRT-LLM to latest version
update TensorRT install script to latest
update build.rs to link to cuda 12.5
add missing dependent libraries for linking
clean up a bit
install to decoder_attention target
add some custom stuff for nccl linkage
fix envvar CARGO_CFG_TARGET_ARCH set at runtime vs compile time
use std::env::consts::ARCH
make sure the variable lives long enough...
look for cuda 12.5
add some more basic info in README.md
* Rebase.
* Fix autodocs.
* Let's try to enable trtllm backend.
* Ignore backends/v3 by default.
* Fixing client.
* Fix makefile + autodocs.
* Updating the schema thing + redocly.
* Fix trtllm lint.
* Adding pb files ?
* Remove cargo fmt temporarily.
* ?
* Tmp.
* Remove both check + clippy ?
* Backporting telemetry.
* Backporting 457fb0a1
* Remove PB from git.
* Fixing PB with default member backends/client
* update TensorRT-LLM to latest version
* provided None for api_key
* link against libtensorrt_llm and not libtensorrt-llm
---------
Co-authored-by: OlivierDehaene <23298448+OlivierDehaene@users.noreply.github.com>
Co-authored-by: Morgan Funtowicz <morgan@huggingface.co>
* Add API_Key for Auth and conditionally add authorisation for non-info/health endpoints.
* change name to info routes
* Fix comment
* convert strings to lowercase for case insensitive comparison
* convert header to string
* fixes and update docs
* update docs again
* revert wrong update
---------
Co-authored-by: Kevin Duffy <kevin.duffy94@gmail.com>