Merge pull request #22 from riffusion/readme/env_local_setup
Update readme to simplify .env.local setup
commit 9d14f3032e

 README.md | 12
@@ -43,10 +43,20 @@ The `pages/api` directory is mapped to `/api/*`. Files in this directory are treated as API routes instead of React pages.
 To actually generate model outputs, we need a model backend that responds to inference requests via API. If you have a large GPU that can run stable diffusion in under five seconds, clone the [inference server](https://github.com/hmartiro/riffusion-inference) and follow its instructions to run the Flask app.
 
-You will need to add a `.env.local` file in the root of this repository specifying the URL of the inference server:
+This app also has a configuration to run with [Baseten](https://www.baseten.co/) for auto-scaling and load balancing. To use Baseten, you need an API key.
+
+To configure these backends, add a `.env.local` file:
 
 ```
+# URL to your flask instance
 RIFFUSION_FLASK_URL=http://127.0.0.1:3013/run_inference/
+
+# Whether to use baseten as the model backend
+NEXT_PUBLIC_RIFFUSION_USE_BASETEN=false
+
+# If using Baseten, the URL and API key
+RIFFUSION_BASETEN_URL=https://app.baseten.co/applications/XXX
+RIFFUSION_BASETEN_API_KEY=XXX
 ```
 
 ## Citation
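As a note on how these variables might be consumed: below is a minimal sketch of a Next.js API route that dispatches an inference request to whichever backend is configured. Only the four environment variable names come from the README diff above; the route path, the request payload shape, and the Baseten auth header are assumptions for illustration, not the app's actual code.

```typescript
// pages/api/inference.ts — hypothetical path, a sketch only.
// Only the four env var names are taken from the README above;
// the payload shape and the Baseten auth header are assumptions.
import type { NextApiRequest, NextApiResponse } from "next";

export default async function handler(
  req: NextApiRequest,
  res: NextApiResponse
) {
  // Pick the backend: Baseten if the public flag is on, else the Flask server.
  const useBaseten = process.env.NEXT_PUBLIC_RIFFUSION_USE_BASETEN === "true";
  const url = useBaseten
    ? process.env.RIFFUSION_BASETEN_URL
    : process.env.RIFFUSION_FLASK_URL;

  if (!url) {
    res.status(500).json({ error: "No inference backend configured in .env.local" });
    return;
  }

  const headers: Record<string, string> = { "Content-Type": "application/json" };
  if (useBaseten && process.env.RIFFUSION_BASETEN_API_KEY) {
    // Baseten authenticates with an API key; the exact header name is assumed here.
    headers["Authorization"] = `Api-Key ${process.env.RIFFUSION_BASETEN_API_KEY}`;
  }

  // Forward the client's payload to the chosen backend unchanged.
  const backendRes = await fetch(url, {
    method: "POST",
    headers,
    body: JSON.stringify(req.body),
  });

  res.status(backendRes.status).json(await backendRes.json());
}
```

Two Next.js behaviors make this variable split sensible: only variables prefixed with `NEXT_PUBLIC_` are inlined into the browser bundle, so the on/off flag is visible to clients while the Baseten URL and API key stay server-only; and `.env.local` is ignored by git in the default create-next-app setup, which keeps the key out of the repository.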