hf_text-generation-inference/server/text_generation/models

Latest commit: 686cc66717 by Nick Hill, 2022-12-30 19:30:42 +01:00
fix(server): Check for device type correctly when determining initial padding (#16)
AFAIK there is no torch device type called "gpu".
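The point of the fix: torch.device.type reports values such as "cuda", "cpu", or "mps"; there is no "gpu" type, so a check against "gpu" would never match and the GPU padding path would never be taken. A minimal sketch of that kind of check, assuming the padding decision hinges on device.type (the helper name below is hypothetical, not the repository's actual code):

```python
import torch

def uses_initial_padding(device: torch.device) -> bool:
    # Hypothetical helper: decide whether inputs should be padded up front.
    # torch reports NVIDIA GPUs with device type "cuda" (never "gpu"), so the
    # comparison must target "cuda" for the GPU branch to ever be taken.
    return device.type == "cuda"

# Example: the GPU padding path is selected on a CUDA device but not on CPU.
print(uses_initial_padding(torch.device("cuda:0")))  # True
print(uses_initial_padding(torch.device("cpu")))     # False
```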
File            Latest commit message                                                                  Date
__init__.py     feat(server): Add model tests (#6)                                                     2022-12-08 18:49:33 +01:00
bloom.py        feat: Return logprobs (#8)                                                             2022-12-15 17:03:56 +01:00
causal_lm.py    fix(server): Check for device type correctly when determining initial padding (#16)   2022-12-30 19:30:42 +01:00
galactica.py    feat: Return logprobs (#8)                                                             2022-12-15 17:03:56 +01:00
model.py        fix(batching): Avoid theoretical hang in batcher loop (#5)                             2022-12-05 10:10:59 +01:00
seq2seq_lm.py   fix(server): Check for device type correctly when determining initial padding (#16)   2022-12-30 19:30:42 +01:00
types.py        feat: Return logprobs (#8)                                                             2022-12-15 17:03:56 +01:00