Commit Graph

782 Commits

Author SHA1 Message Date
OlivierDehaene 5437d49beb
feat(router): add max_total_tokens and empty_input validation (#68)
closes #65
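For context, a hedged sketch of what such validation involves (the function name and error messages are illustrative assumptions, not the actual router code):

```rust
/// Hypothetical validation sketch: reject empty inputs and requests whose
/// prompt tokens plus requested new tokens exceed the configured budget.
fn validate_request(
    input: &str,
    input_tokens: usize,
    max_new_tokens: usize,
    max_total_tokens: usize,
) -> Result<(), String> {
    if input.is_empty() {
        return Err("`inputs` cannot be empty".to_string());
    }
    if input_tokens + max_new_tokens > max_total_tokens {
        return Err(format!(
            "`inputs` tokens + `max_new_tokens` must be <= {max_total_tokens}, got {}",
            input_tokens + max_new_tokens
        ));
    }
    Ok(())
}
```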
2023-02-15 21:56:59 +01:00
OlivierDehaene 68455353f5
feat(launcher): add disable_custom_kernels arg (#67) 2023-02-15 16:23:45 +01:00
OlivierDehaene c5a4a1faf3
feat(server): improve download logging (#66) 2023-02-15 16:11:32 +01:00
OlivierDehaene 0fbc691946
feat: add safetensors conversion (#63) 2023-02-14 13:02:16 +01:00
OlivierDehaene 9af454142a
feat: add distributed tracing (#62) 2023-02-13 13:02:45 +01:00
Yannic Kilcher e520d5b349
fixed SSE naming (#61)
https://en.wikipedia.org/wiki/Server-sent_events
2023-02-08 22:30:11 +01:00
OlivierDehaene 1ad3250b89
fix(docker): increase shm size (#60) 2023-02-08 17:53:33 +01:00
OlivierDehaene c503a639b1
feat(server): support t5 (#59) 2023-02-07 18:25:17 +01:00
OlivierDehaene 2fe5e1b30e
V0.2.1 (#58) 2023-02-07 15:40:25 +01:00
OlivierDehaene 4acc42a605
fix(server): better handling of inference mode (#57) 2023-02-07 15:38:22 +01:00
OlivierDehaene e114d87486
feat(ci): push to AML registry (#56) 2023-02-06 14:33:56 +01:00
lewtun a0dca443dd
feat(docs): Clarify installation steps (#54)
Adds some bits for first-time users (like me 😄 )
2023-02-03 13:07:55 +01:00
OlivierDehaene 20c3c5940c
feat(router): refactor API and add openAPI schemas (#53) 2023-02-03 12:43:37 +01:00
OlivierDehaene b1482d9048
breaking(router): modify /generate API to only return generated text (#50)
@njhill, @yk FYI

generated_text was concatenated to the user prompt for legacy reasons. We
want to remove this behaviour, as we don't think it is useful and it is even
detrimental to usability.

We also remove the unused Vec.
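Illustratively (the prompt and completion below are made up for this example, not taken from the project's docs), the breaking change looks like this:

```rust
fn main() {
    // Hypothetical prompt and completion, for illustration only.
    let prompt = "The capital of France is";
    let completion = " Paris.";

    // Legacy behaviour: the response echoed the prompt plus the completion.
    let legacy_generated_text = format!("{prompt}{completion}");
    // New behaviour: the response contains only the generated text.
    let new_generated_text = completion.to_string();

    assert_eq!(legacy_generated_text, "The capital of France is Paris.");
    assert_eq!(new_generated_text, " Paris.");
}
```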
2023-02-02 15:02:04 +01:00
OlivierDehaene 7b870e1e18
feat(router): use background task to manage request queue (#52)
Co-authored-by: Nick Hill <nickhill@us.ibm.com>
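The general pattern (a generic tokio sketch with assumed names, not the actual TGI implementation) replaces a shared, locked queue with a background task that owns the queue and receives commands over a channel:

```rust
use tokio::sync::{mpsc, oneshot};

/// Commands a request handler can send to the queue task (names assumed).
enum QueueCommand {
    Append {
        request: String,
        response_tx: oneshot::Sender<String>,
    },
}

/// Spawn a background task that is the sole owner of the queue state.
fn spawn_queue_task() -> mpsc::UnboundedSender<QueueCommand> {
    let (tx, mut rx) = mpsc::unbounded_channel();
    tokio::spawn(async move {
        let mut queue: Vec<(String, oneshot::Sender<String>)> = Vec::new();
        while let Some(cmd) = rx.recv().await {
            match cmd {
                QueueCommand::Append { request, response_tx } => {
                    queue.push((request, response_tx));
                }
            }
            // A real implementation would drain `queue` here to build batches.
        }
    });
    tx
}
```

Because the task is the queue's sole owner, request handlers never contend on a mutex; they only send on the channel.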
2023-02-02 14:59:27 +01:00
OlivierDehaene df227ac20d
fix(server): allow greedy repetition penalty (#51) 2023-02-02 10:34:35 +01:00
OlivierDehaene 775115e3a5
feat(server): allow the server to use a local weight cache (#49) 2023-02-01 16:22:10 +01:00
OlivierDehaene 313194f6d7
feat(server): support repetition penalty (#47) 2023-02-01 15:58:42 +01:00
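For reference, the standard CTRL-style repetition penalty (the formulation used by `transformers`; whether #47 matches it exactly is an assumption) scales down the logits of tokens that were already generated:

```rust
/// Scale the logits of previously generated tokens so they are less likely
/// to be sampled again (penalty > 1.0 discourages repetition).
fn apply_repetition_penalty(logits: &mut [f32], previous_tokens: &[usize], penalty: f32) {
    for &token_id in previous_tokens {
        let score = logits[token_id];
        // Dividing positive scores and multiplying negative ones both reduce
        // the token's probability when penalty > 1.0.
        logits[token_id] = if score > 0.0 { score / penalty } else { score * penalty };
    }
}
```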
OlivierDehaene 2ad895a6cc
feat(server): allow gpt-neox models with odd vocab sizes to be sharded (#48) 2023-02-01 14:43:59 +01:00
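A hedged guess at the usual technique behind #48 (not its actual code): when the vocabulary does not divide evenly across shards, pad it up to the next multiple of the shard count and ignore the padding rows:

```rust
/// Generic sketch: smallest multiple of `world_size` that covers the vocab.
fn padded_vocab_size(vocab_size: usize, world_size: usize) -> usize {
    // Round up to the next multiple of world_size, e.g. a vocab of 50254
    // over 4 shards becomes 50256 (4 x 12564).
    (vocab_size + world_size - 1) / world_size * world_size
}
```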
OlivierDehaene 404ed7a1f6
feat(ci): Docker build and push (#46) 2023-01-31 20:14:05 +01:00
OlivierDehaene f830706b21
feat(server): Support GPT-Neox (#39) 2023-01-31 18:53:56 +01:00
OlivierDehaene c6e8b9442b
fix(server): fix quantization for sharded models (#45) 2023-01-31 17:40:38 +01:00
OlivierDehaene 017a2a8c2f
feat: Add token streaming using ServerSideEvents support (#41) 2023-01-31 17:04:00 +01:00
OlivierDehaene 54fec93193
fix(server): fix seeding with multiple shards (#44) 2023-01-31 16:01:15 +01:00
OlivierDehaene 03bdf18290
fix(server): fix seeding on gpu (#42) 2023-01-31 14:30:33 +01:00
OlivierDehaene 4f9ac67cfa
Revert "feat: Add token streaming using ServerSideEvents support" (#40)
Reverts huggingface/text-generation-inference#36
2023-01-31 14:21:51 +01:00
OlivierDehaene 7fbfbb0dc5
feat: Add token streaming using ServerSideEvents support (#36)
Add token streaming using server-sent events (SSE).

The signatures of the SSE events are:

```rust
struct Details {
    finish_reason: String,
    generated_tokens: u32,
    seed: Option<u64>,
}

struct StreamResponse {
    token: Token,
    generated_text: Option<String>,
    details: Option<Details>,
}

struct ErrorResponse {
    error: String,
}
```
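Given those structs, intermediate events carry only `token`, while the final event also carries `generated_text` and `details`. A minimal client-side sketch (the `Token` fields below are illustrative assumptions, and `StreamResponse` is the struct defined above):

```rust
// Illustrative Token shape; the actual field set is an assumption.
struct Token {
    text: String,  // decoded text for this token (assumed)
    logprob: f32,  // log-probability of the token (assumed)
}

// Interpret one streamed event, using the StreamResponse struct above.
fn handle_event(event: StreamResponse) {
    match event.generated_text {
        // Final event: generation finished, the full text is available.
        Some(full_text) => println!("\nfinished: {full_text}"),
        // Intermediate event: print tokens as they stream in.
        None => print!("{}", event.token.text),
    }
}
```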
2023-01-31 11:49:43 +01:00
OlivierDehaene cd298bc5e5
feat: Support sampling seeding (#37)
Co-authored-by: Yannic Kilcher <yk@users.noreply.github.com>
2023-01-30 15:36:16 +01:00
OlivierDehaene 1539d3cbbe
feat(router): Remove second lock from batcher hot path (#27)
@njhill
2023-01-26 16:29:13 +01:00
OlivierDehaene ce960be0a5
feat(bloom): use torch.nn.Linear and torch.nn.GELU (#33) 2023-01-26 15:33:45 +01:00
OlivierDehaene 13e7044ab7
fix(dockerfile): fix docker build (#32) 2023-01-24 19:52:39 +01:00
OlivierDehaene 5c01e2544c
fix(router): fix api-inference deployment (#31) 2023-01-23 17:42:14 +01:00
OlivierDehaene ab2ad91da3
fix(docker): fix api-inference deployment (#30) 2023-01-23 17:33:08 +01:00
OlivierDehaene f9d0ec376a
feat(docker): Make the image compatible with api-inference (#29) 2023-01-23 17:11:27 +01:00
OlivierDehaene 1f570d181f
fix(server): Fix position ids (#28) 2023-01-20 15:35:22 +01:00
OlivierDehaene 15511edc01
feat(server): Support SantaCoder (#26) 2023-01-20 12:24:39 +01:00
Nick Hill f7ac394935
fix(router): Obey max batch size (#23) 2023-01-17 09:11:21 +01:00
Nick Hill e6d3eb5d5d
fix(server): Minor refactorization using new_zeros (#24)
- Fix some type hints, in particular the base tokenizer class
- Make use of the `tensor.new_zeros`/`new_empty` methods
- Simplify env var string parsing in the launcher
2023-01-17 09:10:22 +01:00
OlivierDehaene fcc2c5fcbf
feat(launcher): Log server stdout (#19)
Co-authored-by: Nick Hill <nickhill@us.ibm.com>
2023-01-05 12:01:23 +01:00
Nicolas Patry b94f30215f
fix(server): Use cleanup_tokenization_spaces=False for lossless decoding (#13)
Fixes #12 in the easiest way I could think of.
2023-01-03 11:07:05 +01:00
Nick Hill 60472f9d2b
feat(router): Add const parameters to validation logic (#15)
I noticed an opportunity to collapse some of the logic, in case you
are interested.
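A hedged illustration of the pattern (the constant name and value are assumptions, not the router's actual code):

```rust
// Hoisting a validation bound into a const documents that it can never
// change at runtime and lets the compiler fold checks against it.
const MAX_MAX_NEW_TOKENS: u32 = 512; // value is an assumption

fn validate_max_new_tokens(max_new_tokens: u32) -> Result<(), String> {
    if max_new_tokens == 0 || max_new_tokens > MAX_MAX_NEW_TOKENS {
        return Err(format!(
            "`max_new_tokens` must be between 1 and {MAX_MAX_NEW_TOKENS}"
        ));
    }
    Ok(())
}
```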
2023-01-03 10:41:22 +01:00
Nick Hill 3efa5bbbfd
fix(router): Include special tokens when tokenizing (#14)
There's currently a discrepancy in the tokenization between the router
and the Python server code. The latter includes special tokens but the
former does not.

This results in a token count mismatch for seq2seq models such as mt0
where the tokenizer emits an EOS token at the end.

This in turn results in some unexpected/incorrect output, in particular
when batch concatenation is involved, because the Python code uses the
input length passed from the router for each row.

As far as I can tell, it is better to include this token in the encoder
`input_ids`, so I guess it's best to just adjust on the router side.
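Assuming the router tokenizes with the Hugging Face `tokenizers` crate, the fix boils down to passing `true` for `add_special_tokens` when encoding (a minimal sketch; the surrounding function is assumed):

```rust
use tokenizers::Tokenizer;

/// Count input tokens the same way the Python server does: with special
/// tokens (e.g. the EOS token that seq2seq tokenizers append) included.
fn input_length(tokenizer: &Tokenizer, input: &str) -> Result<usize, tokenizers::Error> {
    // The second argument is `add_special_tokens`; it must be `true` to
    // match the server-side tokenization.
    let encoding = tokenizer.encode(input, true)?;
    Ok(encoding.get_ids().len())
}
```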
2022-12-30 19:31:44 +01:00
Nick Hill 686cc66717
fix(server): Check for device type correctly when determining initial padding (#16)
AFAIK there is no torch device type called "gpu".
2022-12-30 19:30:42 +01:00
OlivierDehaene 611e21cb13
fix(server): Fix stop sequences (#11) 2022-12-16 16:03:39 +01:00
OlivierDehaene 3e2e6240b8
feat(launcher): Add integration tests (#9) 2022-12-16 11:29:36 +01:00
OlivierDehaene 32a253063d
feat: Return logprobs (#8) 2022-12-15 17:03:56 +01:00
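For reference, a token's logprob from #8 is its log-softmax score over the vocabulary logits. A generic, numerically stable sketch (the server's actual implementation runs in PyTorch):

```rust
/// Log-probability of the chosen token: logits[token_id] - log_sum_exp(logits).
fn token_logprob(logits: &[f32], token_id: usize) -> f32 {
    // Subtract the max before exponentiating to avoid overflow.
    let max = logits.iter().cloned().fold(f32::NEG_INFINITY, f32::max);
    let log_sum_exp = max + logits.iter().map(|&l| (l - max).exp()).sum::<f32>().ln();
    logits[token_id] - log_sum_exp
}
```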
OlivierDehaene 718096f695
feat: Support stop sequences (#7) 2022-12-12 18:25:22 +01:00
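The idea behind stop sequences in #7, as a generic sketch (not the project's actual stopping criteria): halt generation as soon as the decoded output ends with any user-supplied sequence:

```rust
/// Generic sketch: report whether generation should stop now.
fn should_stop(generated_text: &str, stop_sequences: &[String]) -> bool {
    stop_sequences
        .iter()
        .any(|stop| generated_text.ends_with(stop.as_str()))
}
```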
OlivierDehaene 042180d88f
fix(server): Only pad to multiple of 8 on GPUs 2022-12-08 19:37:37 +01:00
OlivierDehaene a2985036aa
feat(server): Add model tests (#6) 2022-12-08 18:49:33 +01:00
Nick Hill 31d76e238d
fix(batching): Avoid theoretical hang in batcher loop (#5)
- Avoid theoretical hang in batcher loop
- Avoid a couple of clones in the router generate method
- Keep attention mask tensors as integers
- Remove num_heads attribute

Co-authored-by: OlivierDehaene <Olivier.dehaene@gmail.com>
2022-12-05 10:10:59 +01:00