hf_text-generation-inference/server
drbh 40213c957f
Pali gemma modeling (#1895)
This PR adds paligemma modeling code

Blog post: https://huggingface.co/blog/paligemma
Transformers PR: https://github.com/huggingface/transformers/pull/30814

Install the latest changes and run with:
```bash
# get the weights
text-generation-server download-weights gv-hf/PaliGemma-base-224px-hf

# run TGI
text-generation-launcher --model-id gv-hf/PaliGemma-base-224px-hf
```
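For local testing it can help to pin the port and sequence limits explicitly. A hedged sketch of such an invocation (`--port`, `--max-input-length`, and `--max-total-tokens` are assumed launcher flags; verify against `text-generation-launcher --help` for your version):

```bash
# Serve PaliGemma on an explicit port with bounded sequence lengths.
# Flag names may differ across TGI versions; check --help before relying on them.
text-generation-launcher \
    --model-id gv-hf/PaliGemma-base-224px-hf \
    --port 3000 \
    --max-input-length 1024 \
    --max-total-tokens 2048
```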


Basic example sending various requests:
```python
from huggingface_hub import InferenceClient

client = InferenceClient("http://127.0.0.1:3000")


images = [
    "https://huggingface.co/datasets/hf-internal-testing/fixtures-captioning/resolve/main/cow_beach_1.png",
    "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/rabbit.png",
]

prompts = [
    "What animal is in this image?",
    "Name three colors in this image.",
    "What are 10 colors in this image?",
    "Where is the cow standing?",
    "answer en Where is the cow standing?",
    "Is there a bird in the image?",
    "Is there a cow in the image?",
    "Is there a rabbit in the image?",
    "how many birds are in the image?",
    "how many rabbits are in the image?",
]

for img in images:
    print(f"\nImage: {img.split('/')[-1]}")
    for prompt in prompts:
        # Images are passed inline using markdown image syntax, which TGI
        # parses out of the input string for multimodal models.
        inputs = f"![]({img}){prompt}\n"
        generated_output = client.text_generation(
            inputs, max_new_tokens=30, do_sample=False, stream=False
        )
        print(f"{prompt}\n{generated_output}")

```
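The client call above wraps TGI's `/generate` HTTP endpoint, whose JSON body can also be built directly. A minimal sketch of constructing that payload as a pure function (the helper name `build_payload` is mine, not part of TGI or `huggingface_hub`):

```python
import json


def build_payload(image_url: str, prompt: str, max_new_tokens: int = 30) -> str:
    """Build the JSON body for TGI's /generate endpoint.

    The image is embedded in the `inputs` string via markdown image
    syntax, which TGI extracts for multimodal models like PaliGemma.
    """
    inputs = f"![]({image_url}){prompt}\n"
    payload = {
        "inputs": inputs,
        "parameters": {
            "max_new_tokens": max_new_tokens,
            "do_sample": False,
        },
    }
    return json.dumps(payload)


body = build_payload(
    "https://huggingface.co/datasets/hf-internal-testing/fixtures-captioning/resolve/main/cow_beach_1.png",
    "Where is the cow standing?",
)
print(body)
```

This body can be POSTed to `http://127.0.0.1:3000/generate` with any HTTP client instead of going through `InferenceClient`.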

---------

Co-authored-by: Nicolas Patry <patry.nicolas@protonmail.com>
2024-05-16 06:58:47 +02:00

README.md

Text Generation Inference Python gRPC Server

A Python gRPC server for Text Generation Inference.

Install:

```bash
make install
```

Run:

```bash
make run-dev
```