cyberes / local-llm-server
local-llm-server / llm_server / routes

Latest commit: 40ac84aa9a by Cyberes, "actually we don't want to emulate openai" (2023-09-12 01:04:11 -06:00)
Name                 Latest commit message                                Date
..
helpers              caching                                              2023-08-23 12:40:13 -06:00
v1                   actually we don't want to emulate openai             2023-09-12 01:04:11 -06:00
__init__.py          show total output tokens on stats                    2023-08-24 20:43:11 -06:00
cache.py             get working with ooba again, give up on dockerfile   2023-09-11 09:51:01 -06:00
queue.py             implement vllm backend                               2023-09-11 20:47:19 -06:00
request_handler.py   actually we don't want to emulate openai             2023-09-12 01:04:11 -06:00
stats.py             update readme                                        2023-08-24 12:19:59 -06:00