feat(docs): Clarify installation steps (#54)

Adds some bits for first-time users (like me 😄)
lewtun 2023-02-03 13:07:55 +01:00 committed by GitHub
parent 20c3c5940c
commit a0dca443dd
1 changed file with 21 additions and 4 deletions


@@ -93,7 +93,7 @@ curl 127.0.0.1:8080/generate_stream \
-H 'Content-Type: application/json'
```
-To use GPUs, you will need to install the [NVIDIA Container Toolkit](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/install-guide.html).
+**Note:** To use GPUs, you need to install the [NVIDIA Container Toolkit](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/install-guide.html).
### API documentation
@@ -102,14 +102,31 @@ The Swagger UI is also available at: [https://huggingface.github.io/text-generat
### Local install
-You can also opt to install `text-generation-inference` locally. You will need to have cargo and Python installed on your
-machine
+You can also opt to install `text-generation-inference` locally.
First [install Rust](https://rustup.rs/) and create a Python virtual environment with at least
Python 3.9, e.g. using `conda`:
```shell
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
conda create -n text-generation-inference python=3.9
conda activate text-generation-inference
```
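Before building, a quick sanity check can confirm the toolchain from the steps above is in place (a minimal sketch; `python3` stands in for whichever interpreter your activated environment provides):

```shell
# Confirm the Rust toolchain installed by rustup is on PATH
command -v cargo >/dev/null && cargo --version || echo "cargo not on PATH - re-run the rustup step"
# The local install requires Python >= 3.9
python3 -c 'import sys; assert sys.version_info >= (3, 9), sys.version'
```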
Then run:
```shell
BUILD_EXTENSIONS=True make install # Install repository and HF/transformer fork with CUDA kernels
make run-bloom-560m
```
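Once `make run-bloom-560m` has the server up, it can be queried the same way as in the `curl` example near the top of this file (port 8080 and the sample prompt are assumptions carried over from that example):

```shell
# Stream generated tokens from the locally running server
curl 127.0.0.1:8080/generate_stream \
    -X POST \
    -d '{"inputs":"What is Deep Learning?","parameters":{"max_new_tokens":20}}' \
    -H 'Content-Type: application/json'
```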
**Note:** On some machines, you may also need the OpenSSL libraries. On Linux machines, run:
```shell
sudo apt-get install libssl-dev
```
### CUDA Kernels
The custom CUDA kernels are only tested on NVIDIA A100s. If you have any installation or runtime issues, you can remove
@@ -153,4 +170,4 @@ make router-dev
```shell
make python-tests
make integration-tests
```