# What does this PR do?

This adds a non-flash version of MPT. A flash version is harder because it requires a CUDA kernel for flash attention that supports attention biases (MPT uses ALiBi, which adds a bias to the attention scores).

Fixes https://github.com/huggingface/text-generation-inference/issues/361
Fixes https://github.com/huggingface/text-generation-inference/issues/491
Fixes https://github.com/huggingface/text-generation-inference/issues/290
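For context, here is a minimal sketch of what the non-flash path has to compute. The function names (`build_alibi_bias`, `attention_with_bias`) are hypothetical and not the PR's actual code; the point is that the bias must be added to the attention scores *before* the softmax, which standard flash-attention kernels, fusing `softmax(QK^T)V` internally, do not expose:

```python
import math

import torch


def build_alibi_bias(num_heads: int, seq_len: int) -> torch.Tensor:
    # Hypothetical helper: per-head ALiBi slopes form a geometric sequence,
    # 2^(-8 * h / num_heads) for h = 1..num_heads.
    slopes = torch.tensor(
        [2.0 ** (-8.0 * h / num_heads) for h in range(1, num_heads + 1)]
    )
    # Relative key positions, from -(seq_len - 1) up to 0 for the last key.
    positions = torch.arange(1 - seq_len, 1, dtype=torch.float32)
    # Shape (num_heads, 1, seq_len); broadcasts over query positions.
    return slopes[:, None, None] * positions[None, None, :]


def attention_with_bias(
    q: torch.Tensor, k: torch.Tensor, v: torch.Tensor, bias: torch.Tensor
) -> torch.Tensor:
    # q, k, v: (num_heads, seq_len, head_dim); causal mask omitted for brevity.
    scores = q @ k.transpose(-1, -2) / math.sqrt(q.size(-1))
    # The step flash kernels don't support: an additive bias inside the softmax.
    scores = scores + bias
    return torch.softmax(scores, dim=-1) @ v


q = k = v = torch.randn(8, 16, 64)
out = attention_with_bias(q, k, v, build_alibi_bias(num_heads=8, seq_len=16))
```

Folding the bias into the softmax input is trivial in a naive implementation like this, but doing it inside a memory-efficient fused kernel is what requires the "bias-ready" CUDA kernel mentioned above.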
Changed test files:

- `__snapshots__/`
- `test_bloom_560m.py`
- `test_bloom_560m_sharded.py`
- `test_flash_falcon.py`
- `test_flash_llama.py`
- `test_flash_neox.py`
- `test_flash_neox_sharded.py`
- `test_flash_santacoder.py`
- `test_flash_starcoder.py`
- `test_mpt.py`
- `test_mt0_base.py`
- `test_neox.py`
- `test_neox_sharded.py`
- `test_t5_sharded.py`