Riffusion App

Riffusion is an app for real-time music generation with stable diffusion.

Read about it at https://www.riffusion.com/about and try it at https://www.riffusion.com/.

This repository contains the interactive web app that powers the website.

It is built with Next.js, React, TypeScript, three.js, Tailwind, and Vercel.

Run

This is a Next.js project bootstrapped with create-next-app.

Install:

npm install

Run the development server:

npm run dev
# or
yarn dev

Open http://localhost:3000 with your browser to see the app.

The app home is at pages/index.js. The page auto-updates as you edit the file. The about page is at pages/about.tsx.

The pages/api directory is mapped to /api/*. Files in this directory are treated as API routes instead of React pages.
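As a sketch of what such a route looks like (the file name and response below are illustrative examples, not endpoints that exist in this repo):

// pages/api/hello.ts — hypothetical example route, not one of this app's actual endpoints
import type { NextApiRequest, NextApiResponse } from "next";

export default function handler(req: NextApiRequest, res: NextApiResponse) {
  // Any file under pages/api/ is served at /api/<name>; this one would answer at /api/hello.
  res.status(200).json({ message: "Hello from the API" });
}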

Inference Server

To actually generate model outputs, we need a model backend that responds to inference requests via API. If you have a large GPU that can run stable diffusion in under five seconds, clone the inference server and follow its instructions to run the Flask app.

This app also has a configuration to run with Baseten for auto-scaling and load balancing. To use Baseten, you need an API key.

To configure these backends, add a .env.local file:

# URL to your Flask instance
RIFFUSION_FLASK_URL=http://localhost:3013/run_inference/

# Whether to use Baseten as the model backend
NEXT_PUBLIC_RIFFUSION_USE_BASETEN=false

# If using BaseTen, the URL and API key
RIFFUSION_BASETEN_URL=https://app.baseten.co/applications/XXX
RIFFUSION_BASETEN_API_KEY=XXX
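For example, a server-side API route could read these variables and forward an inference request to the Flask backend. The following is only a sketch under assumed names; the route path, payload shape, and 15-second timeout are illustrative rather than the app's actual code:

// pages/api/example-inference.ts — hypothetical sketch, not the app's real inference route
import type { NextApiRequest, NextApiResponse } from "next";

export default async function handler(req: NextApiRequest, res: NextApiResponse) {
  // Abort the upstream call if it hangs; the 15-second limit here is illustrative.
  const controller = new AbortController();
  const timer = setTimeout(() => controller.abort(), 15_000);

  try {
    // RIFFUSION_FLASK_URL comes from .env.local above; it stays server-side
    // because it lacks the NEXT_PUBLIC_ prefix that exposes variables to the browser.
    const upstream = await fetch(process.env.RIFFUSION_FLASK_URL as string, {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify(req.body),
      signal: controller.signal,
    });
    res.status(upstream.status).json(await upstream.json());
  } catch (err) {
    res.status(504).json({ error: "Inference request failed or timed out" });
  } finally {
    clearTimeout(timer);
  }
}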

Citation

If you build on this work, please cite it as follows:

@software{Forsgren_Martiros_2022,
  author = {Forsgren, Seth* and Martiros, Hayk*},
  title = {{Riffusion - Stable diffusion for real-time music generation}},
  url = {https://riffusion.com/about},
  year = {2022}
}