* feat(trtllm): rewrite health to not account for current state
* chore(looper): cleanup a bit more
* feat(post_processing): max_new_tokens is const evaluated now
* chore(ffi): formatting
* feat(trtllm): add stop words handling
* chore(trtllm): create specific ParallelConfig factory and logging init methods
* chore(trtllm): define a macro for SizeType cast
* chore(trtllm): use GetParallelConfig
* chore(trtllm): minor refactoring
* chore(trtllm): validate there are enough GPUs on the system for the desired model
* chore(trtllm): ensure max throughput scheduling policy is selected
* chore(trtllm): minor fix
* chore(router): minor refactorings
* feat(docker): build with-slurm ompi
* feat(docker): add python3.10 dev to runtime deps
* chore(docker): add mpi to ld_library_path
* chore(docker): install transformers
* feat(trtllm): detect stop_words from generation_config.json
* (backend) use parking_lot crate for RwLock fairness
* (launcher) default new server::run parameters to false for now
* (chore) fmt ... why?
* (ffi) use const for GetSamplingConfig
* (server) expose new SchedulingError
* (trt)
* (build) setup ccache if available
* (ffi) add max_new_tokens parameters
* (backend) cleanup a bit
* (backend) expose PullNewTokens
* (ffi) cleanup again
* (ffi) add missing headers imports
* (ffi) add template specialization to catch and convert to Rust Result<T, tensorrt_llm::common::TllmException>
* (looper) new looper initial implementation
* (ffi) remove narrowing type warning
* (ffi) encode the provided user prompt within each request thread
* (misc) change scope identifiers
* (backend) implement the post_processor background thread
* (misc) missing Result types for Rust
* use blocking_recv in looper to consume as many awaiting_requests as possible before pulling in a single step
* (server) forward auth_token to server::run
* (build) fetchcontent use archives instead of git
* (ffi) fix usage of wrong vector constructor making a capacity fill call
* (ffi) missing namespace for tle::Response
* (ffi) do not use reference capture in lambda as we are not capturing anything
* (backend) refactor & cleanup
* (Dockerfile.trtllm) delete for now
* (misc) simplify [make_]move_iterator by using c++20 type inference
* (misc) no need to move for uint32_t items
* (scheduler) rework submit/pull logic
* (post) impl postprocessing
* (misc) delete backend.rs
* (misc) rerun-if-changed all the cmake modules
* (misc) move to latest trtllm
* (fix): HOPPER_SM_MAJOR is 9 not 8
* (misc): build for sm_{75,80,86,89,90} by default
* (misc): build with trtllm 0.13.0
* (misc): increase verbosity of spdlog
* (fix): do not recreate the stateful hashmap at every iteration
* (misc): update dependency in trtllm dockerfile
* (misc): disable logging in release mode
* (misc): improve trtllm download script robustness
* (fix): more fixes for Dockerfile
* misc(cuda): require 12.6
* chore(cmake): use correct policy for download_timestamp
* feat(looper): check engine and executorWorker paths exist before creating the backend
* chore(cmake): download timestamp should be before URL
* feat(looper): minor optimizations to avoid growing too much the containers
* chore(trtllm): move dockerfile to right place
* chore(trtllm): disable tokenizer parallelism by default
* chore(trtllm): fmt
* chore(trtllm): post-rebase commit
* chore(trtllm): remove unused method
* feat(trtllm): cache maxNumTokens to avoid calling JSON every time
* misc(router): remove SchedulingError
* feat(trtllm): do not tokenize twice
* Revert "chore(trtllm): remove unused method"
This reverts commit 31747163
* chore(rebase): fix invalid references
* chore(router): add python dependency
* Lint.
* Fix bad rebase
---------
Co-authored-by: Nicolas Patry <patry.nicolas@protonmail.com>
* Add support for FP8 KV cache scales
Since FP8 only has limited dynamic range, we can scale keys/values
before storing them into the cache (and unscale them in attention). To
avoid rescaling the cache as the absmax values change, good scales are
usually determined per layer using calibration data and stored
in the checkpoint.
This change adds support for using key-value scales and loading them
from checkpoints in the two most common formats:
- Separate per-layer `k_scale` and `v_scale` scalars.
- Per-layer `kv_scale` scalar (older format).
Currently, scales are only used with a `float8_e4m3fn` cache.
Besides adding support for key/value scales, the `fp8_quantize` function
is also extended to support quantization with a kernel vendored from
vLLM. This is slightly faster than the PyTorch implementation, but also
scales in FP32, potentially improving accuracy.
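A minimal sketch of the two checkpoint layouts described above; the tensor names mirror the commit message, while the loader itself (its name, signature, and the plain-dict `weights` argument) is illustrative rather than TGI's actual code.

```python
import torch

def load_kv_scales(weights: dict[str, torch.Tensor], prefix: str) -> tuple[float, float]:
    """Return (k_scale, v_scale) for one layer, defaulting to 1.0 when absent."""
    if f"{prefix}.k_scale" in weights and f"{prefix}.v_scale" in weights:
        # Newer format: separate per-layer scalars for keys and values.
        return float(weights[f"{prefix}.k_scale"]), float(weights[f"{prefix}.v_scale"])
    if f"{prefix}.kv_scale" in weights:
        # Older format: a single per-layer scalar shared by keys and values.
        shared = float(weights[f"{prefix}.kv_scale"])
        return shared, shared
    return 1.0, 1.0
```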
* Update FP8 KV cache test to use checkpoint with scales
* `can_scale`: check that the attention is flashinfer
* Add `impureWithCuda` dev shell
This shell is handy when developing some kernels jointly with TGI - it
adds nvcc and a bunch of commonly-used CUDA libraries to the environment.
We don't add this to the normal impure shell to keep the development
environment as clean as possible (avoid accidental dependencies, etc.).
* Add cuDNN
Update the Mixtral GPTQ test to use a model with `desc_act=true` and
`group_size!=-1` to ensure that we are checking activation
sorting/non-full K (with tensor parallelism). The `desc_act=false` case
is already checked by the Mixtral AWQ test.
Change `fp8_quantize` so that we can pass around reciprocals everywhere,
so scales are always passed around in the checkpoint format.
I also noticed that we ignore any input scales that we might have when
fbgemm is available. Skip this path if we already have a scale.
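As a rough illustration of the behaviour described above (reuse an existing scale instead of recomputing one, and compute scales in FP32), here is a hedged sketch; it is neither the vendored vLLM kernel nor TGI's actual `fp8_quantize`, just the reference math.

```python
import torch

def fp8_quantize(x: torch.Tensor, scale: torch.Tensor | None = None):
    finfo = torch.finfo(torch.float8_e4m3fn)
    if scale is None:
        # Compute the scale in FP32 so the tensor's absmax maps onto the FP8 range.
        scale = x.abs().max().to(torch.float32) / finfo.max
    q = (x.to(torch.float32) / scale).clamp(finfo.min, finfo.max).to(torch.float8_e4m3fn)
    # The scale is returned in checkpoint format; callers take reciprocals as needed.
    return q, scale
```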
* add gptq and awq int4 support in intel platform
Signed-off-by: Wang, Yi A <yi.a.wang@intel.com>
* fix ci failure
Signed-off-by: Wang, Yi A <yi.a.wang@intel.com>
* set kv cache dtype
Signed-off-by: Wang, Yi A <yi.a.wang@intel.com>
* refine the code according to the review comments
Signed-off-by: Wang, Yi A <yi.a.wang@intel.com>
* Simplifying conditionals + reverting integration test values.
* Unused import
* Fix redundant import.
* Revert change after rebase.
* Upgrading the tests (the TP>1 fix changes them to use different kernels).
* Update server/text_generation_server/layers/gptq/__init__.py
---------
Signed-off-by: Wang, Yi A <yi.a.wang@intel.com>
Co-authored-by: Wang, Yi A <yi.a.wang@intel.com>
tgi-entrypoint: exec instead of spawning a child process
reason: otherwise the parent will receive the signals that we'd like tgi to receive
keeping the parent/child split is not necessary and would require the parent to handle signals and forward them properly to the child
Signed-off-by: Raphael Glon <oOraph@users.noreply.github.com>
Co-authored-by: Raphael Glon <oOraph@users.noreply.github.com>
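The exec-over-spawn idea, sketched in Python purely for illustration (the actual tgi-entrypoint is a shell script; `text-generation-launcher` being on PATH is an assumption here):

```python
import os
import sys

def main() -> None:
    # exec replaces this wrapper process in place, so SIGTERM/SIGINT reach
    # text-generation-launcher directly instead of a parent that would have
    # to trap and forward them to a child.
    args = ["text-generation-launcher", *sys.argv[1:]]
    os.execvp(args[0], args)  # never returns on success

if __name__ == "__main__":
    main()
```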
* Simplify the `attention` function
- Use one definition rather than multiple.
- Add `key`/`value` arguments, so that we don't need the
`PREFILL_IN_KVCACHE` constant.
- Make it kwargs-only (to avoid mixing up the various `Tensor` args).
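A hedged sketch of the kwargs-only shape described above, using PyTorch's reference SDPA as a stand-in for the real flash-attn/flashinfer dispatch; the parameter set is illustrative, not TGI's exact signature.

```python
import torch
import torch.nn.functional as F

def attention(
    *,
    query: torch.Tensor,   # [batch, heads, seq, head_dim]
    key: torch.Tensor,     # passed explicitly, so no PREFILL_IN_KVCACHE constant
    value: torch.Tensor,
    softmax_scale: float,
    causal: bool = True,
) -> torch.Tensor:
    # Reference math only; the real backends call optimized kernels.
    return F.scaled_dot_product_attention(
        query, key, value, scale=softmax_scale, is_causal=causal
    )
```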
* Fixup flashinfer support
As spotted by @philschmid, the payload was only partially compliant with Vertex AI. The most compliant version would flatten the generation kwargs to the same level as the `messages`: Vertex AI would still expect a list of instances, but each instance would be an OpenAI-compatible request. That is clearer, and more aligned with the SageMaker integration too. Kudos to him for spotting that, and sorry from my end for any inconvenience @Narsil.
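For concreteness, a hypothetical request body in the flattened shape described above (field values are made up; the accepted keys follow the OpenAI chat schema):

```python
payload = {
    "instances": [
        {
            # Each instance is an OpenAI-compatible chat request, with the
            # generation kwargs on the same level as `messages`.
            "messages": [{"role": "user", "content": "What is Deep Learning?"}],
            "max_tokens": 128,
            "temperature": 0.7,
        }
    ]
}
```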
The XPU backend is available natively (without IPEX) in PyTorch starting from PyTorch 2.4. This commit extends TGI to cover the case where the user has XPU support through PyTorch 2.4 but does not have IPEX installed. Models which don't require attention can work; for models that require attention, more work is needed to provide an attention implementation.
Tested with the following models:
* teknium/OpenHermes-2.5-Mistral-7B
* bigscience/bloom-560m
* google/gemma-7b
* google/flan-t5-xxl
Signed-off-by: Dmitry Rogozhkin <dmitry.v.rogozhkin@intel.com>
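A minimal sketch of the capability check this implies, assuming PyTorch >= 2.4; the helper name and return values are illustrative, not TGI's actual dispatch code.

```python
import importlib.util
import torch

def xpu_backend() -> str:
    if importlib.util.find_spec("intel_extension_for_pytorch") is not None:
        return "ipex"
    # PyTorch 2.4+ exposes XPU natively; attention kernels are not covered on
    # this path yet, so only attention-free models are expected to work.
    if hasattr(torch, "xpu") and torch.xpu.is_available():
        return "native-xpu"
    return "cpu"
```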
* break when there's nothing to read
Signed-off-by: Wang, Yi A <yi.a.wang@intel.com>
* Different approach, only listen on stdin when `LOG_LEVEL=debug` (which
is where dropping to a debugger is important).
---------
Signed-off-by: Wang, Yi A <yi.a.wang@intel.com>
Co-authored-by: Wang, Yi A <yi.a.wang@intel.com>
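A sketch of the combined idea from the two commits above (only watch stdin when `LOG_LEVEL=debug`, and stop when there is nothing left to read); the helper is illustrative, the real change lives in TGI's launcher.

```python
import os
import sys
import threading

def maybe_watch_stdin() -> None:
    if os.environ.get("LOG_LEVEL", "").lower() != "debug":
        return  # dropping to a debugger only matters in debug mode

    def pump() -> None:
        # Iteration ends once stdin is closed, so the thread does not spin
        # when there is nothing left to read.
        for line in sys.stdin:
            sys.stderr.write(line)

    threading.Thread(target=pump, daemon=True).start()
```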
* Small improvements for docs
* Update _toctree.yml
* Updating the doc (we keep the list actually).
---------
Co-authored-by: Nicolas Patry <patry.nicolas@protonmail.com>
* feat: process token stream before returning to client
* fix: expect content in test
* fix: improve comparison via ruff lint
* fix: return event in all cases
* fix: always send event on error, avoid unwraps, refactor and improve tests
* fix: prefer no_tool over notify_error to improve response
* fix: adjust chat input test for no_tool
* fix: adjust test expected content
---------
Co-authored-by: System administrator <root@ip-10-90-0-186.ec2.internal>
Make sure that everything is formatted with the same black version as CI.
I sometimes use isort for new files to get nicely ordered imports,
so add it as well. Also set the isort configuration to format in a
way that is compatible with black.
* Add basic FP8 KV cache support
This change adds rudimentary FP8 KV cache support. The support is
enabled by passing `--kv-cache-dtype fp8_e5m2` to the launcher. Doing so
uses this type for the KV cache. However support is still limited:
* Only the `fp8_e5m2` type is supported.
* The KV cache layout is the same as `float16`/`bfloat16` (HND).
* The FP8 KV cache is only supported for FlashInfer.
* Loading of scales is not yet supported.
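To make the flag concrete, a hedged sketch of how `--kv-cache-dtype fp8_e5m2` could map onto a torch dtype; the flag comes from the commit above, the mapping helper is illustrative.

```python
import torch

KV_CACHE_DTYPES = {
    "fp8_e5m2": torch.float8_e5m2,  # currently the only FP8 variant supported
    "auto": None,                   # fall back to the model dtype
}

def resolve_kv_cache_dtype(name: str, model_dtype: torch.dtype) -> torch.dtype:
    dtype = KV_CACHE_DTYPES.get(name)
    return model_dtype if dtype is None else dtype
```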
* Fix Cargo.toml
* feat: unroll notify_error if no tool is chosen
* fix: expect simple message when no tool is selected
* fix: improve test to avoid notify_error
* fix: improve docs and indicate change in expected response
* fix: adjust linting in test file