# What does this PR do?

This adds a non-flash version of MPT. Flash is harder because we would need a bias-ready CUDA kernel for flash attention (MPT uses ALiBi, which adds a bias to the attention scores).

Fixes https://github.com/huggingface/text-generation-inference/issues/361
Fixes https://github.com/huggingface/text-generation-inference/issues/491
Fixes https://github.com/huggingface/text-generation-inference/issues/290
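For context, here is a minimal sketch (my illustration, not code from this PR) of why the non-flash path works: ALiBi adds a per-head linear bias to the attention scores, and a plain matmul-softmax attention can simply add that bias before the softmax, whereas the flash attention kernel used here has no input for an arbitrary additive bias. The function names `alibi_slopes` and `eager_attention_with_alibi` are made up for the sketch.

```python
# Sketch of ALiBi-biased "eager" (non-flash) attention, assuming
# power-of-two head counts for the slope formula from the ALiBi paper.
import math
import torch

def alibi_slopes(num_heads: int) -> torch.Tensor:
    # Geometric sequence of per-head slopes: 2^(-8*i/num_heads), i = 1..num_heads.
    start = 2 ** (-8 / num_heads)
    return torch.tensor([start ** (i + 1) for i in range(num_heads)])

def eager_attention_with_alibi(q, k, v):
    # q, k, v: [batch, heads, seq_len, head_dim]
    _, heads, seq_len, head_dim = q.shape
    scores = q @ k.transpose(-1, -2) / math.sqrt(head_dim)
    # ALiBi bias: 0 on the diagonal, increasingly negative for keys
    # farther in the past (distance[i][j] = j - i <= 0 for j <= i).
    positions = torch.arange(seq_len)
    distance = positions[None, :] - positions[:, None]          # [seq, seq]
    bias = alibi_slopes(heads)[:, None, None] * distance[None]  # [heads, seq, seq]
    scores = scores + bias
    # Causal mask so tokens cannot attend to future positions.
    causal = torch.triu(torch.ones(seq_len, seq_len, dtype=torch.bool), diagonal=1)
    scores = scores.masked_fill(causal, float("-inf"))
    return torch.softmax(scores, dim=-1) @ v
```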