# Riffusion App

Riffusion is an app for real-time music generation with Stable Diffusion.

Read about it at https://www.riffusion.com/about and try it at https://www.riffusion.com/.

This repository contains the interactive web app that powers the website.

It is built with Next.js, React, TypeScript, three.js, Tailwind, and Vercel.

## Run

This is a Next.js project bootstrapped with `create-next-app`.

Install dependencies:

```bash
npm install
```

Run the development server:

```bash
npm run dev
# or
yarn dev
```

Open http://localhost:3000 with your browser to see the app.

The app home is at `pages/index.js`. The page auto-updates as you edit the file. The about page is at `pages/about.tsx`.
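
For orientation, a Next.js page is just an exported React component. A minimal sketch (a hypothetical file, not one from this repo) looks like:

```tsx
// pages/example.tsx -- hypothetical page, not part of this repo.
// Next.js serves it at /example and hot-reloads it under `npm run dev`.
export default function Example() {
  return <main className="p-4">Hello from a Next.js page</main>;
}
```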

The `pages/api` directory is mapped to `/api/*`. Files in this directory are treated as API routes instead of React pages.
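
As a rough illustration of that convention (again a hypothetical file, not one from this repo), an API route exports a handler instead of a component:

```ts
// pages/api/hello.ts -- hypothetical route, not part of this repo.
// Served at /api/hello; runs only on the server, never in the browser bundle.
import type { NextApiRequest, NextApiResponse } from "next";

export default function handler(req: NextApiRequest, res: NextApiResponse) {
  res.status(200).json({ message: "Hello from an API route" });
}
```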

## Inference Server

To actually generate model outputs, we need a model backend that responds to inference requests via an API. If you have a large GPU that can run Stable Diffusion in under five seconds, clone the inference server and follow its instructions to run the Flask app.
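
As a sketch of how the web app can talk to that backend (the request and response fields here are assumptions, not the inference server's documented schema), an API route can simply proxy JSON to the Flask endpoint:

```ts
// pages/api/inference_sketch.ts -- hypothetical proxy; field names are assumptions.
import type { NextApiRequest, NextApiResponse } from "next";

export default async function handler(req: NextApiRequest, res: NextApiResponse) {
  // Default matches the RIFFUSION_FLASK_URL example below.
  const flaskUrl = process.env.RIFFUSION_FLASK_URL ?? "http://localhost:3013/run_inference/";

  // Forward the client's inference parameters to the Flask backend as JSON.
  const response = await fetch(flaskUrl, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(req.body),
  });

  // Relay the backend's result (e.g. generated spectrogram/audio data) to the client.
  res.status(response.status).json(await response.json());
}
```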

This app also has a configuration to run with Baseten for auto-scaling and load balancing. To use Baseten, you need an API key.

To configure these backends, add a `.env.local` file:

```bash
# URL to your flask instance
RIFFUSION_FLASK_URL=http://localhost:3013/run_inference/

# Whether to use baseten as the model backend
NEXT_PUBLIC_RIFFUSION_USE_BASETEN=false

# If using BaseTen, the URL and API key
RIFFUSION_BASETEN_URL=https://app.baseten.co/applications/XXX
RIFFUSION_BASETEN_API_KEY=XXX
```
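
As a sketch of how these variables might be consumed (the helper below is illustrative, not code from this repo), server-side code can branch on the Baseten flag to pick a backend:

```ts
// lib/backend.ts -- hypothetical helper, not part of this repo.
// Picks the inference backend based on the .env.local settings above.
export function getInferenceBackend() {
  const useBaseten = process.env.NEXT_PUBLIC_RIFFUSION_USE_BASETEN === "true";

  if (useBaseten) {
    return {
      kind: "baseten" as const,
      url: process.env.RIFFUSION_BASETEN_URL,
      apiKey: process.env.RIFFUSION_BASETEN_API_KEY,
    };
  }

  return {
    kind: "flask" as const,
    url: process.env.RIFFUSION_FLASK_URL,
  };
}
```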