cyberes / local-llm-server
This repository has been archived on 2024-10-27. You can view files and clone it, but cannot push or open issues or pull requests.
local-llm-server / llm_server / llm

Latest commit 0771c2325c by Cyberes (2023-10-23 17:24:20 -06:00): fix inference workers quitting when a backend is offline, start adding logging, improve tokenizer error handling
oobabooga        fix issues with queue and streaming  (2023-10-15 20:45:01 -06:00)
openai           fix openai confusion  (2023-10-11 12:50:20 -06:00)
vllm             fix inference workers quitting when a backend is offline, start adding logging, improve tokenizer error handling  (2023-10-23 17:24:20 -06:00)
__init__.py      fix inference workers quitting when a backend is offline, start adding logging, improve tokenizer error handling  (2023-10-23 17:24:20 -06:00)
generator.py     finish openai endpoints  (2023-10-01 16:04:53 -06:00)
info.py          functional  (2023-09-30 19:41:50 -06:00)
llm_backend.py   trying to fix workers still processing after backend goes offline  (2023-10-15 15:11:37 -06:00)
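The listing only shows file names and commit messages, but the layout suggests a common pattern: a shared backend interface in llm_backend.py, with per-backend subpackages (oobabooga, openai, vllm) providing concrete implementations that generator.py dispatches to. The sketch below is purely illustrative and not taken from the repository; the class and method names (LLMBackend, generate, validate_params, VLLMBackend) are assumptions.

```python
# Hypothetical sketch only -- not the actual contents of llm_backend.py.
# It illustrates how a package laid out like llm_server/llm might expose a
# common backend interface that the oobabooga/openai/vllm subpackages implement.
from abc import ABC, abstractmethod
from typing import Any, Dict, Tuple


class LLMBackend(ABC):
    """Base class each backend subpackage could subclass (assumed name)."""

    @abstractmethod
    def generate(self, prompt: str, parameters: Dict[str, Any]) -> Tuple[bool, str]:
        """Run inference and return (success, text)."""
        raise NotImplementedError

    def validate_params(self, parameters: Dict[str, Any]) -> Tuple[bool, str]:
        """Reject obviously bad sampling parameters before calling the backend."""
        if parameters.get("max_new_tokens", 0) <= 0:
            return False, "max_new_tokens must be positive"
        return True, ""


class VLLMBackend(LLMBackend):
    """Illustrative concrete backend; the real vllm/ package will differ."""

    def generate(self, prompt: str, parameters: Dict[str, Any]) -> Tuple[bool, str]:
        ok, err = self.validate_params(parameters)
        if not ok:
            return False, err
        # A real implementation would send the request to a vLLM server here.
        return True, f"(stubbed completion for a {len(prompt)}-character prompt)"
```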