### Nginx
Make sure your proxies all have a long timeout:
```
proxy_read_timeout 300;
proxy_connect_timeout 300;
proxy_send_timeout 300;
```
The LLM middleware has a request timeout of 95 seconds, so this longer timeout avoids any issues.
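For reference, here is a minimal sketch of an nginx `location` block with these timeouts applied; the upstream address (`127.0.0.1:8000`) and the `/api/` path are assumptions, not part of the original:
```
location /api/ {
    proxy_pass http://127.0.0.1:8000;  # assumed vLLM upstream address
    proxy_set_header Host $host;
    proxy_read_timeout 300;
    proxy_connect_timeout 300;
    proxy_send_timeout 300;
}
```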
### Model Preparation
Make sure the maximum length in your model's `tokenizer_config.json` is set equal to or greater than your token limit, e.g. `4096`.
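A minimal sketch of the relevant `tokenizer_config.json` entry, assuming the standard Hugging Face `model_max_length` key is the field in question (the surrounding fields are illustrative):
```
{
  "model_max_length": 4096,
  "tokenizer_class": "LlamaTokenizer"
}
```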