From bb83f333b79d1754b11c15471b1aacfd39fcfeb7 Mon Sep 17 00:00:00 2001
From: Merve Noyan
Date: Wed, 2 Aug 2023 17:40:56 +0300
Subject: [PATCH] Added consuming TGI with ChatUI

---
 docs/source/basic_tutorials/consuming_tgi.md | 14 ++++++++++++++
 1 file changed, 14 insertions(+)

diff --git a/docs/source/basic_tutorials/consuming_tgi.md b/docs/source/basic_tutorials/consuming_tgi.md
index e69de29b..9cdc9b14 100644
--- a/docs/source/basic_tutorials/consuming_tgi.md
+++ b/docs/source/basic_tutorials/consuming_tgi.md
@@ -0,0 +1,14 @@
+# Consuming Text Generation Inference
+
+## ChatUI
+
+ChatUI is an open-source interface built for serving large language models. It offers many customization options, including web search via the SERP API. ChatUI can consume the Text Generation Inference server directly, and even provides the option to switch between different TGI endpoints. You can try it out at [Hugging Chat](https://huggingface.co/chat/), or use the [ChatUI Docker Spaces template](https://huggingface.co/new-space?template=huggingchat/chat-ui-template) to deploy your own Hugging Chat to Spaces.
+
+To serve both ChatUI and TGI in the same environment, add your own endpoints to the `MODELS` variable in the `.env.local` file inside the `chat-ui` repository. The endpoints should point to where TGI is served.
+
+```
+{
+// rest of the model config here
+"endpoints": [{"url": "https://HOST:PORT/generate_stream"}]
+}
+```
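
For context, the `MODELS` variable in `chat-ui`'s `.env.local` holds a backtick-quoted JSON array of model configurations. A minimal sketch of a complete entry might look like the following — the `name` value is an illustrative placeholder, and only the `endpoints` shape comes from the tutorial text itself:

```
# .env.local — illustrative sketch; "name" is a hypothetical placeholder
MODELS=`[
  {
    "name": "my-tgi-model",
    "endpoints": [{"url": "https://HOST:PORT/generate_stream"}]
  }
]`
```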