Commit Graph

1089 Commits

Author SHA1 Message Date
OlivierDehaene 611e21cb13 fix(server): Fix stop sequences (#11) 2022-12-16 16:03:39 +01:00
OlivierDehaene 3e2e6240b8 feat(launcher): Add integration tests (#9) 2022-12-16 11:29:36 +01:00
OlivierDehaene 32a253063d feat: Return logprobs (#8) 2022-12-15 17:03:56 +01:00
OlivierDehaene 718096f695 feat: Support stop sequences (#7) 2022-12-12 18:25:22 +01:00
OlivierDehaene 042180d88f fix(server): Only pad to multiple of 8 on GPUs 2022-12-08 19:37:37 +01:00
OlivierDehaene a2985036aa feat(server): Add model tests (#6) 2022-12-08 18:49:33 +01:00
Nick Hill 31d76e238d fix(batching): Avoid theoretical hang in batcher loop (#5) 2022-12-05 10:10:59 +01:00
- Avoid theoretical hang in batcher loop
- Avoid a couple of clones in the router generate method
- Keep attention mask tensors as integers
- Remove num_heads attribute
Co-authored-by: OlivierDehaene <Olivier.dehaene@gmail.com>
OlivierDehaene daa1d81d5e feat(server): Support Galactica (#4) 2022-12-01 19:31:54 +01:00
OlivierDehaene d6d5b12e03 fix(router): Handle tokenizer errors 2022-11-14 17:15:19 +01:00
OlivierDehaene feb7806ca4 fix(readme): Typo 2022-11-14 16:22:10 +01:00
OlivierDehaene 91f5f86280 fix(router): Fix HTTP status codes 2022-11-14 14:34:15 +01:00
OlivierDehaene 6c781025ae feat(rust): Update to 1.65 2022-11-14 13:59:56 +01:00
OlivierDehaene dccd5c2b1a feat(server): Clarify CausalLMBatch concatenate method 2022-11-09 18:24:07 +01:00
OlivierDehaene fa43fb71be fix(server): Fix Transformers fork version 2022-11-08 17:42:38 +01:00
OlivierDehaene 4236e41b0d feat(server): Improved doc 2022-11-07 12:53:56 +01:00
OlivierDehaene cea6051eff feat(launcher): Pass CUDA_VISIBLE_DEVICES to the shard 2022-11-04 18:31:08 +01:00
OlivierDehaene 427d7cc444 feat(server): Support AutoModelForSeq2SeqLM 2022-11-04 18:03:04 +01:00
OlivierDehaene c5665f5c8b feat(server): Support generic AutoModelForCausalLM 2022-11-04 14:22:47 +01:00
OlivierDehaene 755fc0e403 fix(models): Revert buggy support for AutoModel 2022-11-03 16:07:54 +01:00
OlivierDehaene b3b7ea0d74 feat: Use json formatter by default in docker image 2022-11-02 17:29:56 +01:00
OlivierDehaene 3cf6368c77 feat(server): Support all AutoModelForCausalLM on a best effort basis 2022-10-28 19:24:00 +02:00
OlivierDehaene 09674e6df9 feat(server): Support bitsandbytes 2022-10-27 14:25:29 +02:00
OlivierDehaene beb552127a feat(client): Simplify sharded logic 2022-10-22 23:40:05 +02:00
Nicolas Patry c8ce9b2515 feat(server): Use safetensors 2022-10-22 20:00:15 +02:00
Co-authored-by: OlivierDehaene <23298448+OlivierDehaene@users.noreply.github.com>
Thomas Wang be8827fe41 Create LICENSE (#2) 2022-10-22 10:44:52 +02:00
OlivierDehaene c837893370 feat(router): Add max_waiting_tokens 2022-10-21 16:40:05 +02:00
OlivierDehaene 895a341d06 fix(validation): Fix error messages 2022-10-21 10:59:15 +02:00
Olivier Dehaene f16f2f5ae1 v0.1.0 2022-10-20 19:14:44 +02:00
Olivier Dehaene 92c1ecd008 feat: Add arguments to CLI 2022-10-17 18:27:33 +02:00
Olivier Dehaene 5e5d8766a2 feat: Improve error handling 2022-10-17 14:59:00 +02:00
Olivier Dehaene 00e6ce44b1 Update aml deployment 2022-10-17 10:39:59 +02:00
Olivier Dehaene bcb53903b8 feat: Add AML deployment 2022-10-15 20:21:50 +02:00
Olivier Dehaene bf99afe916 feat: Docker image 2022-10-14 15:56:21 +02:00
Olivier Dehaene 39df4d9975 Use axum 2022-10-11 18:14:39 +02:00
Olivier Dehaene e86ecbac63 ValidationError was not correctly handled 2022-10-11 16:53:40 +02:00
Olivier Dehaene 4c693e6524 Refactored gRPC interface 2022-10-11 16:50:54 +02:00
- Added validation logic
Olivier Dehaene fa9a088467 Add load testing 2022-10-11 10:36:51 +02:00
Olivier Dehaene 1d986983d5 fix: cleanup 2022-10-08 12:34:25 +02:00
Olivier Dehaene 295831a481 Init 2022-10-08 12:30:12 +02:00