# Riffusion App

Riffusion generates audio using stable diffusion. See https://www.riffusion.com/about for details.

## Run

This is a [Next.js](https://nextjs.org/) project bootstrapped with `create-next-app`.

Install dependencies:

```bash
npm install
```

Run the development server:

```bash
npm run dev
# or
yarn dev
```

Open http://localhost:3000 in your browser to see the app.

The app home is at `pages/index.js`, and the page auto-updates as you edit the file. The about page is at `pages/about.tsx`.

The `pages/api` directory is mapped to `/api/*`. Files in this directory are treated as API routes instead of React pages.
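For reference, a minimal API route under `pages/api` looks like this (a hypothetical `pages/api/hello.ts`, not a file in this repo):

```ts
// pages/api/hello.ts — hypothetical example, not a file in this repo.
import type { NextApiRequest, NextApiResponse } from "next";

// Served at /api/hello as a JSON endpoint, not rendered as a React page.
export default function handler(req: NextApiRequest, res: NextApiResponse) {
  res.status(200).json({ message: "Hello from an API route" });
}
```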

## Inference Server

To actually generate model outputs, the app needs a model backend that responds to inference requests via API. If you have a GPU large enough to run stable diffusion in under five seconds, clone the inference server and follow its instructions to run the Flask app.
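As a rough sketch of what talking to that backend involves, server-side code could POST to the Flask URL along these lines; the `prompt` field and response shape here are illustrative assumptions, not the actual Riffusion inference schema:

```ts
// Hypothetical sketch of calling the Flask inference server.
// The payload and response shape are assumptions for illustration,
// not the actual Riffusion request schema.
const flaskUrl =
  process.env.RIFFUSION_FLASK_URL ?? "http://localhost:3013/run_inference/";

async function runInference(prompt: string): Promise<unknown> {
  const response = await fetch(flaskUrl, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ prompt }), // assumed payload
  });
  if (!response.ok) {
    throw new Error(`Inference request failed: ${response.status}`);
  }
  return response.json();
}
```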

This app also has a configuration for running with BaseTen, which provides auto-scaling and load balancing. To use BaseTen, you need an API key.

To configure these backends, add a `.env.local` file:

```bash
# URL of your Flask instance
RIFFUSION_FLASK_URL=http://localhost:3013/run_inference/

# Whether to use BaseTen as the model backend
NEXT_PUBLIC_RIFFUSION_USE_BASETEN=false

# If using BaseTen, the URL and API key
RIFFUSION_BASETEN_URL=https://app.baseten.co/applications/XXX
RIFFUSION_BASETEN_API_KEY=XXX
```
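In Next.js, only the `NEXT_PUBLIC_`-prefixed variable is exposed to browser code; the others are available server-side via `process.env`. A minimal sketch of how backend selection might read these values (the app's real routing logic may differ):

```ts
// Hypothetical sketch of backend selection from the env vars above;
// the app's actual logic may differ.
const useBaseten = process.env.NEXT_PUBLIC_RIFFUSION_USE_BASETEN === "true";

// Server-only values: never bundled into browser code.
const backendUrl = useBaseten
  ? process.env.RIFFUSION_BASETEN_URL
  : process.env.RIFFUSION_FLASK_URL;
const basetenApiKey = process.env.RIFFUSION_BASETEN_API_KEY;
```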