EveryDream 2 trainer is built using various open-source technologies and packages.

## Stable Diffusion
### Stable Diffusion's Predecessors and Components

This is not a thorough nor deep list, but is an opinionated list of research that is most proximal to this repo and interesting.

AutoencoderKL [paper](https://arxiv.org/abs/1312.6114v11)

DDPM [paper](https://arxiv.org/abs/2006.11239) - [github](https://github.com/hojonathanho/diffusion)

CLIP [paper](https://arxiv.org/pdf/2103.00020.pdf) - [github](https://github.com/OpenAI/CLIP)

OpenClip [info](https://laion.ai/blog/large-openclip/) - [github](https://github.com/mlfoundations/open_clip)

LAION 5B [paper](https://arxiv.org/abs/2210.08402) - [datasets](https://huggingface.co/laion)

### Latent Diffusion

Latent Diffusion [paper](https://arxiv.org/abs/2112.10752) - [github](https://github.com/CompVis/latent-diffusion) -- Stable Diffusion [github](https://github.com/CompVis/stable-diffusion)

SDXL [paper](https://arxiv.org/abs/2307.01952) - [github](https://github.com/Stability-AI/generative-models)
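The diffusion papers cited above (DDPM, Latent Diffusion, SDXL) all train against the same closed-form forward noising process: x_t = sqrt(ᾱ_t)·x₀ + sqrt(1−ᾱ_t)·ε, where ᾱ_t is the cumulative product of (1−β_s). A minimal plain-Python sketch of that process, using the linear β schedule from the DDPM paper (illustrative constants, not this trainer's actual training code):

```python
import math
import random

# Linear beta schedule from the DDPM paper (illustrative values).
T = 1000
betas = [1e-4 + (0.02 - 1e-4) * t / (T - 1) for t in range(T)]

# alpha_bar_t = product of (1 - beta_s) for s <= t
alpha_bars = []
prod = 1.0
for b in betas:
    prod *= 1.0 - b
    alpha_bars.append(prod)

def add_noise(x0, t, rng=random):
    """Closed-form forward process: x_t = sqrt(a_bar_t)*x0 + sqrt(1-a_bar_t)*eps."""
    a_bar = alpha_bars[t]
    return [math.sqrt(a_bar) * x + math.sqrt(1.0 - a_bar) * rng.gauss(0.0, 1.0)
            for x in x0]

# At t=0 the sample is barely perturbed; near t=T-1 it is almost pure noise.
x0 = [1.0, -0.5, 0.25]
xt = add_noise(x0, t=999)
```

Latent Diffusion's contribution is to run this same process in the VAE's latent space (hence the AutoencoderKL citation) rather than in pixel space.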
### Captioning models
Open Flamingo [paper](https://arxiv.org/abs/2308.01390) - [github](https://github.com/mlfoundations/open_flamingo)
BLIP/BLIP2 [blip paper](https://arxiv.org/abs/2201.12086) - [blip2 github (LAVIS)](https://github.com/salesforce/LAVIS) - [blip1 github](https://github.com/salesforce/BLIP)
Kosmos-2 [paper](https://arxiv.org/abs/2306.14824) - [Github](https://github.com/microsoft/unilm/tree/master/kosmos-2) - [Huggingface](https://huggingface.co/microsoft/kosmos-2-patch14-224)
### Optimizers
Adam [paper](https://arxiv.org/abs/1412.6980)
8-bit block-wise quantization [paper](https://arxiv.org/abs/2110.02861) - [github](https://github.com/TimDettmers/bitsandbytes)
D-Adaptation [paper](https://arxiv.org/abs/2301.07733) - [github](https://github.com/facebookresearch/dadaptation)
DoWG [paper](https://arxiv.org/abs/2305.16284)
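Adam is the baseline the other optimizers above build on (8-bit quantization compresses its moment states; D-Adaptation and DoWG remove its learning-rate tuning). Its update rule from the cited paper can be sketched in a few lines of plain Python — a hypothetical minimal implementation for illustration, not the optimizer code this trainer actually uses:

```python
import math

def adam_step(params, grads, m, v, t, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam update (Kingma & Ba). m and v hold the running first and
    second moment estimates and are updated in place; t is 1-indexed."""
    out = []
    for i, (p, g) in enumerate(zip(params, grads)):
        m[i] = b1 * m[i] + (1 - b1) * g       # first moment (mean of grads)
        v[i] = b2 * v[i] + (1 - b2) * g * g   # second moment (uncentered variance)
        m_hat = m[i] / (1 - b1 ** t)          # bias correction for zero init
        v_hat = v[i] / (1 - b2 ** t)
        out.append(p - lr * m_hat / (math.sqrt(v_hat) + eps))
    return out

# Toy usage: minimize f(x) = x^2 from x = 1.0; the gradient is 2x.
# x trends toward the minimum at 0.
x = [1.0]
m, v = [0.0], [0.0]
for t in range(1, 2001):
    g = [2 * x[0]]
    x = adam_step(x, g, m, v, t, lr=0.01)
```

The bitsandbytes library cited above applies 8-bit block-wise quantization to exactly these `m` and `v` state tensors, which is where its memory savings come from.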