local-llm-server/llm_server/llm/vllm/info.py

# HTML notice shown for this backend, describing which sampling parameters it honors.
vllm_info = """<p><strong>Important:</strong> This endpoint is running <a href="https://github.com/chu-tianxiang/vllm-gptq" target="_blank">vllm-gptq</a>, so not all Oobabooga parameters are supported.</p>
<strong>Supported Parameters:</strong>
<ul>
<li><kbd>temperature</kbd></li>
<li><kbd>top_p</kbd></li>
<li><kbd>top_k</kbd></li>
<li><kbd>max_new_tokens</kbd></li>
</ul>"""
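
# A minimal usage sketch of a request that sends only the parameters listed as
# supported above. Assumptions (not defined in this file): the server listens on
# localhost:5000 and exposes an Oobabooga-style /api/v1/generate route; adjust
# both to match your deployment.
if __name__ == "__main__":
    import json
    import urllib.request

    payload = {
        "prompt": "Write a haiku about GPUs.",
        "temperature": 0.7,
        "top_p": 0.9,
        "top_k": 40,
        "max_new_tokens": 64,
    }
    request = urllib.request.Request(
        "http://localhost:5000/api/v1/generate",  # assumed endpoint for illustration
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(request) as response:
        print(response.read().decode("utf-8"))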