{{ analytics_tracking_code|safe }}

{{ llm_middleware_name }}

Current Model: {{ default_model }}

Estimated Wait Time: {{ default_estimated_wait }}
Processing: {{ default_active_gen_workers }}
Queued: {{ default_proompters_in_queue }}


Client API URL: {{ client_api }}

Streaming API URL: {{ ws_client_api if enable_streaming else 'Disabled' }}

OpenAI-Compatible API URL: {{ openai_client_api }}
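
{% if openai_client_api != 'disabled' %}
If you want to call the proxy from code rather than a frontend, the snippet below is a minimal sketch. It assumes the OpenAI-Compatible API URL above works as a standard OpenAI-style base URL, that the openai Python package (v1+) is installed, and that the current model name is accepted as-is; pass a real token only if the proxy requires one.

    # Minimal sketch -- assumptions: OpenAI-style endpoints at the base URL above,
    # openai Python package >= 1.0, and the model name shown above is accepted.
    from openai import OpenAI

    client = OpenAI(
        base_url="{{ openai_client_api }}",       # OpenAI-Compatible API URL shown above
        api_key="YOUR-TOKEN-OR-ANY-PLACEHOLDER",  # real token only if the proxy requires one
    )
    response = client.chat.completions.create(
        model="{{ default_model }}",              # current model shown above
        messages=[{"role": "user", "content": "Hello!"}],
    )
    print(response.choices[0].message.content)
{% endif %}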

{% if info_html|length > 1 %}
{{ info_html|safe }}
{% endif %}

Instructions

  1. In Settings > Power User Options, enable Relaxed API URLs.
  2. Set your API type to {{ mode_name }}.
  3. Enter {{ client_api }} in the {{ api_input_textbox }} textbox.
  {% if enable_streaming %}
  4. Enter {{ ws_client_api }} in the {{ streaming_input_textbox }} textbox.
  {% endif %}
  5. If you have a token, check the Mancer AI checkbox and enter your token in the Mancer API key textbox.
  6. Click Connect to test the connection (a quick reachability check you can run yourself follows this list).
  7. Open your preset config and set Context Size to {{ default_context_size }}.
  8. Follow this guide to get set up: rentry.org/freellamas
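
If Connect fails, you can rule out network problems with a quick check from the same machine. This is only a sketch: it makes no assumption about the API type beyond the Client API URL answering plain HTTP, so even an error page in the response means the proxy is reachable and the problem is likely in the frontend settings.

    # Reachability sketch only: exact endpoint paths depend on the API type above,
    # so this just confirms the proxy answers HTTP at the Client API URL.
    import requests  # pip install requests

    response = requests.get("{{ client_api }}", timeout=10)
    print(response.status_code)
    print(response.text[:200])  # first part of whatever the proxy returns
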
{% if openai_client_api != 'disabled' and expose_openai_system_prompt %}
OpenAI-Compatible API

The OpenAI-compatible API adds a system prompt to set the AI's behavior to a "helpful assistant". You can view this prompt here.
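
For illustration, the sketch below shows roughly how a request is modified; the system prompt wording and its position are assumptions here, so use the link above for the real text.

    # Illustration only: the system message text below is assumed, not the proxy's
    # actual prompt (see the link above for that).
    client_messages = [{"role": "user", "content": "Hello!"}]
    system_prompt = {"role": "system", "content": "You are a helpful assistant."}  # assumed wording
    forwarded_messages = [system_prompt] + client_messages  # assumes it is inserted first
    print(forwarded_messages)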

{% endif %}
{{ extra_info|safe }}

Statistics

Proompters:

{% for key, value in model_choices.items() %}

{{ key }} - {{ value.backend_count }} {% if value.backend_count == 1 %}worker{% else %}workers{% endif %}

{% if value.estimated_wait == 0 and value.processing >= value.concurrent_gens %}
    {# The queue is empty but every slot is busy, so there will be a wait of unknown length. #}
    {# Use the average generation time as a rough upper bound. #}
    {% set estimated_wait_sec = "less than " + value.avg_generation_time|int|string + " seconds" %}
{% else %}
    {% set estimated_wait_sec = value.estimated_wait|int|string + " seconds" %}
{% endif %}

Estimated Wait Time: {{ estimated_wait_sec }}
Processing: {{ value.processing }}
Queued: {{ value.queued }}

Client API URL: {{ value.client_api }}
Streaming API URL: {{ value.ws_client_api }}
OpenAI-Compatible API URL: {{ value.openai_client_api }}

Context Size: {{ value.context_size }}

Average Generation Time: {{ value.avg_generation_time | int }} seconds


{% endfor %}