hf_text-generation-inference/integration-tests
Nicolas Patry 1da07e85aa
feat(server): Add Non flash MPT. (#514)
# What does this PR do?


This adds a non-flash version of MPT.
A flash version is harder because it would require a flash-attention CUDA
kernel that supports an additive attention bias.
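
As a rough illustration (not code from this PR; names and shapes are assumptions), the non-flash path can compute attention with the bias added to the raw scores before the softmax, which is exactly the step a fused flash kernel would also have to support:

```python
import torch

def attention_with_bias(q, k, v, bias):
    """Minimal sketch of non-flash attention with an additive bias
    (e.g. MPT-style alibi). Assumed shapes: q, k, v are
    [batch, heads, seq, head_dim]; bias broadcasts to
    [batch, heads, seq, seq]."""
    scores = torch.matmul(q, k.transpose(-1, -2)) / q.shape[-1] ** 0.5
    scores = scores + bias  # the bias a flash kernel would need to apply inside its fused loop
    probs = torch.softmax(scores, dim=-1)
    return torch.matmul(probs, v)
```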

Fixes https://github.com/huggingface/text-generation-inference/issues/361
Fixes https://github.com/huggingface/text-generation-inference/issues/491
Fixes https://github.com/huggingface/text-generation-inference/issues/290
2023-07-03 13:01:46 +02:00
models feat(server): Add Non flash MPT. (#514) 2023-07-03 13:01:46 +02:00
conftest.py feat(server): Rework model loading (#344) 2023-06-08 14:51:52 +02:00
pytest.ini feat(server): Rework model loading (#344) 2023-06-08 14:51:52 +02:00
requirements.txt