hf_text-generation-inference/server/text_generation/models

Latest commit: feat(launcher): Log server stdout (#19) by OlivierDehaene (fcc2c5fcbf)
Co-authored-by: Nick Hill <nickhill@us.ibm.com>
2023-01-05 12:01:23 +01:00
__init__.py    feat(launcher): Log server stdout (#19)                                              2023-01-05 12:01:23 +01:00
bloom.py       feat: Return logprobs (#8)                                                           2022-12-15 17:03:56 +01:00
causal_lm.py   fix(server): Use cleanup_tokenization_spaces=False for lossless decoding (#13)       2023-01-03 11:07:05 +01:00
galactica.py   feat: Return logprobs (#8)                                                           2022-12-15 17:03:56 +01:00
model.py       fix(batching): Avoid theoretical hang in batcher loop (#5)                           2022-12-05 10:10:59 +01:00
seq2seq_lm.py  fix(server): Check for device type correctly when determining initial padding (#16)  2022-12-30 19:30:42 +01:00
types.py       feat: Return logprobs (#8)                                                          2022-12-15 17:03:56 +01:00