Author | Commit | Message | Date
Cyberes | ee9a0d4858 | redo config | 2024-05-07 12:20:53 -06:00
Cyberes | 0059e7956c | Merge cluster to master (#3) (Co-authored-by: Cyberes <cyberes@evulid.cc>; Reviewed-on: #3) | 2023-10-27 19:19:22 -06:00
Cyberes | 347a82b7e1 | avoid sending to backend to tokenize if it's greater than our specified context size | 2023-09-28 03:54:20 -06:00
Cyberes | 957a6cd092 | fix error handling | 2023-09-27 14:36:49 -06:00
Cyberes | aba2e5b9c0 | don't use db pooling, add LLM-ST-Errors header to disable formatted errors | 2023-09-26 23:59:22 -06:00
Cyberes | d9bbcc42e6 | more work on openai endpoint | 2023-09-26 22:09:11 -06:00
Cyberes | 11e84db59c | update database, tokenizer handle null prompt, convert top_p to vllm on openai, actually validate prompt on streaming | 2023-09-25 22:32:48 -06:00
Cyberes | 8240a1ebbb | fix background log not doing anything | 2023-09-25 18:18:29 -06:00
Cyberes | 81452ec643 | adjust vllm info | 2023-09-21 20:13:29 -06:00
Cyberes | 03e3ec5490 | port to mysql, use vllm tokenizer endpoint | 2023-09-20 20:30:31 -06:00