hf_text-generation-inference/integration-tests/models/__snapshots__/test_mpt
Nicolas Patry 1da07e85aa
feat(server): Add Non flash MPT. (#514)
# What does this PR do?


This adds a non-flash version of MPT.
Flash is harder because we would need a CUDA kernel for flash attention
that accepts an attention bias: MPT uses ALiBi, which adds a per-head
linear bias to the attention scores (see the sketch below).
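
For illustration, here is a minimal sketch of the non-flash attention path with an ALiBi bias. This is not code from this PR; the function names and tensor layout are hypothetical, and it only shows why the bias matters:

```python
import math
import torch

def alibi_slopes(num_heads: int) -> torch.Tensor:
    # Geometric slopes per head, as in the ALiBi paper (assumes a
    # power-of-two head count, which MPT models satisfy).
    start = 2 ** (-8.0 / num_heads)
    return torch.tensor([start ** (i + 1) for i in range(num_heads)])

def attention_with_alibi(q, k, v):
    # q, k, v: [batch, heads, seq_len, head_dim] (hypothetical layout)
    _, heads, seq_len, head_dim = q.shape
    scores = q @ k.transpose(-1, -2) / math.sqrt(head_dim)
    # ALiBi: penalize each key in proportion to its distance behind the query.
    pos = torch.arange(seq_len)
    distance = (pos[None, :] - pos[:, None]).clamp(max=0)       # [s, s], <= 0
    bias = alibi_slopes(heads)[None, :, None, None] * distance  # [1, h, s, s]
    scores = scores + bias
    # Causal mask: each token attends only to itself and earlier tokens.
    causal = torch.triu(torch.ones(seq_len, seq_len, dtype=torch.bool), 1)
    scores = scores.masked_fill(causal, float("-inf"))
    return torch.softmax(scores, dim=-1) @ v
```

A fused flash-attention kernel never materializes `scores`, so injecting this bias requires a kernel that takes it as an input, which is why only the non-flash path is added here.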

Fixes https://github.com/huggingface/text-generation-inference/issues/361
Fixes https://github.com/huggingface/text-generation-inference/issues/491
Fixes https://github.com/huggingface/text-generation-inference/issues/290
2023-07-03 13:01:46 +02:00
test_mpt.json feat(server): Add Non flash MPT. (#514) 2023-07-03 13:01:46 +02:00
test_mpt_load.json feat(server): Add Non flash MPT. (#514) 2023-07-03 13:01:46 +02:00