# Stochastic Karras VE

## Overview

[Elucidating the Design Space of Diffusion-Based Generative Models](https://arxiv.org/abs/2206.00364) by Tero Karras, Miika Aittala, Timo Aila and Samuli Laine.

The abstract of the paper is the following:

We argue that the theory and practice of diffusion-based generative models are currently unnecessarily convoluted and seek to remedy the situation by presenting a design space that clearly separates the concrete design choices. This lets us identify several changes to both the sampling and training processes, as well as preconditioning of the score networks. Together, our improvements yield new state-of-the-art FID of 1.79 for CIFAR-10 in a class-conditional setting and 1.97 in an unconditional setting, with much faster sampling (35 network evaluations per image) than prior designs. To further demonstrate their modular nature, we show that our design changes dramatically improve both the efficiency and quality obtainable with pre-trained score networks from previous work, including improving the FID of an existing ImageNet-64 model from 2.07 to near-SOTA 1.55.

This pipeline implements stochastic sampling tailored to variance-exploding (VE) models.
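Below is a minimal usage sketch, assuming a Hub checkpoint that pairs a `UNet2DModel` with a `KarrasVeScheduler`; the repository id used here is a placeholder, not a real model.

```python
import torch

from diffusers import KarrasVePipeline

# Placeholder repository id; substitute a checkpoint that pairs a UNet2DModel
# with a KarrasVeScheduler.
pipe = KarrasVePipeline.from_pretrained("path/to/karras-ve-checkpoint")
pipe = pipe.to("cuda")

# Fix the seed so the sample is reproducible.
generator = torch.manual_seed(0)

# Run the stochastic Karras VE sampler; the output holds a list of PIL images.
image = pipe(batch_size=1, num_inference_steps=50, generator=generator).images[0]
image.save("karras_ve_sample.png")
```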
## Available Pipelines:

| Pipeline | Tasks | Colab |
|---|---|:---:|
| [pipeline_stochastic_karras_ve.py](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/stochastic_karras_ve/pipeline_stochastic_karras_ve.py) | *Unconditional Image Generation* | - |

## KarrasVePipeline

[[autodoc]] KarrasVePipeline
	- __call__