From 31e1619a137ea59900dbb602df14696843ecea8d Mon Sep 17 00:00:00 2001
From: Victor Hall
Date: Fri, 17 Nov 2023 13:39:12 -0500
Subject: [PATCH] clean up citations

---
 doc/CITATIONS.md | 36 +++++++++++++++++++++++-------------
 1 file changed, 23 insertions(+), 13 deletions(-)

diff --git a/doc/CITATIONS.md b/doc/CITATIONS.md
index 525e198..6a090f2 100644
--- a/doc/CITATIONS.md
+++ b/doc/CITATIONS.md
@@ -1,30 +1,40 @@
 Everydream 2 trainer is built using various open source technologies and packages.
 
-## Stable Diffusion:
-### Predecessors
+This is not a thorough nor deep list, but is an opinionated list of research that is most proximal to this repo and interesting.
 
-Universiteit van Amsterdam - AutoencoderKL from [paper](https://arxiv.org/abs/1312.6114v11)
+### Stable Diffusion's Predecessors and Components
 
-Berkeley - DDPM [DDPM](https://arxiv.org/abs/2006.11239) - [code](https://github.com/hojonathanho/diffusion)
+AutoencoderKL [paper](https://arxiv.org/abs/1312.6114v11)
 
-OpenAI - CLIP (used in SD1.x and SDXL) [Paper](https://arxiv.org/pdf/2103.00020.pdf) - [github](https://github.com/OpenAI/CLIP)
+DDPM [paper](https://arxiv.org/abs/2006.11239) - [github](https://github.com/hojonathanho/diffusion)
 
-LAION OpenClip (used in SD2.x and SDXL) [Announcement](https://laion.ai/blog/large-openclip/) - [github](https://github.com/mlfoundations/open_clip)
+CLIP [paper](https://arxiv.org/pdf/2103.00020.pdf) - [github](https://github.com/OpenAI/CLIP)
 
-### Original Compvis Stable Diffusion (CLIP + LDM + AutoencoderKL)
-[paper](https://arxiv.org/abs/2112.10752) - [github](https://github.com/CompVis/stable-diffusion)
+OpenClip [info](https://laion.ai/blog/large-openclip/) - [github](https://github.com/mlfoundations/open_clip)
 
-## Captioning models:
+LAION 5B [paper](https://arxiv.org/abs/2210.08402) - [datasets](https://huggingface.co/laion)
+
+### Latent Diffusion
+Latent Diffusion [paper](https://arxiv.org/abs/2112.10752) - [github](https://github.com/CompVis/latent-diffusion) -- Stable Diffusion [github](https://github.com/CompVis/stable-diffusion)
+
+SDXL [paper](https://arxiv.org/abs/2307.01952) - [github](https://github.com/Stability-AI/generative-models)
+
+
+### Captioning models
 
 
 Open Flamingo [paper](https://arxiv.org/abs/2308.01390) - [github](https://github.com/mlfoundations/open_flamingo)
 
-Salesforce BLIP/BLIP2 [blip paper](https://arxiv.org/abs/2201.12086) - [blip2 github (LAVIS)](https://github.com/salesforce/LAVIS) - [blip1 github](https://github.com/salesforce/BLIP)
+BLIP/BLIP2 [blip paper](https://arxiv.org/abs/2201.12086) - [blip2 github (LAVIS)](https://github.com/salesforce/LAVIS) - [blip1 github](https://github.com/salesforce/BLIP)
 
 Kosmos-2 [paper](https://arxiv.org/abs/2306.14824) - [Github](https://github.com/microsoft/unilm/tree/master/kosmos-2) - [Huggingface](https://huggingface.co/microsoft/kosmos-2-patch14-224)
 
 
-## Optimizers
-Tim Dettmers - 8-bit quantization (adamw8bit, etc) [paper](https://arxiv.org/abs/2110.02861) - [github](https://github.com/TimDettmers/bitsandbytes)
+### Optimizers
 
-Facebook - D-Adaptation [paper](https://arxiv.org/abs/2301.07733) - [github](https://github.com/facebookresearch/dadaptation)
+Adam [paper](https://arxiv.org/abs/1412.6980)
+8-bit block-wise quantization [paper](https://arxiv.org/abs/2110.02861) - [github](https://github.com/TimDettmers/bitsandbytes)
+
+D-Adaptation [paper](https://arxiv.org/abs/2301.07733) - [github](https://github.com/facebookresearch/dadaptation)
+
+DoWG [paper](https://arxiv.org/abs/2305.16284)
\ No newline at end of file