Commit Graph

1278 Commits

Author SHA1 Message Date
Pedro Cuenca 78db11dbf3
Flax safety checker (#825)
* Remove set_format in Flax pipeline.

* Remove DummyChecker.

* Run safety_checker in pipeline.

* Don't pmap on every call.

We could have decorated `generate` with `pmap`, but I wanted to keep it
undecorated in case someone wants to invoke it in non-parallel mode (see the sketch after this entry).

* Remove commented line

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* Replicate outside __call__, prepare for optional jitting.

* Remove unnecessary clipping.

As suggested by @kashif.

* Do not jit unless requested.

* Send all args to generate.

* make style

* Remove unused imports.

* Fix docstring.

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2022-10-13 17:01:47 +02:00
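
The pmap note in the commit above amounts to wrapping `generate` once and only using the wrapped version when parallel execution is requested, with parameter replication happening outside `__call__`. A minimal sketch under those assumptions; the class and the placeholder `generate` body are hypothetical, not the pipeline's actual code:

```python
import jax

def generate(params, prng_seed, prompt_ids):
    # plain, single-device implementation (placeholder body for the sketch)
    ...

class FlaxPipelineSketch:
    def __init__(self):
        # wrap once at construction time, not on every call
        self._p_generate = jax.pmap(generate)

    def __call__(self, params, prng_seed, prompt_ids, jit=False):
        if jit:
            # caller is expected to have replicated params across devices
            # (e.g. with flax.jax_utils.replicate) outside __call__
            return self._p_generate(params, prng_seed, prompt_ids)
        # non-parallel mode: generate stays undecorated and can be called directly
        return generate(params, prng_seed, prompt_ids)
```
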
Patrick von Platen e713346ad1
Give more customizable options for safety checker (#815)
* Give more customizable options for safety checker

* Apply suggestions from code review

* Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion.py

* Finish

* make style

* Apply suggestions from code review

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

* up

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
2022-10-13 15:52:26 +02:00
Anton Lozhkov 26c7df5d82
Fix type mismatch error, add tests for negative prompts (#823) 2022-10-13 15:45:42 +02:00
Anton Lozhkov e001fededf
Fix dreambooth loss type with prior_preservation and fp16 (#826)
Fix dreambooth loss type with prior preservation
2022-10-13 15:41:19 +02:00
Suraj Patil 0a09af2f0a
update flax scheduler API (#822)
* update flax scheduler API

* remove set_format

* fix call to scale_model_input

* update flax pndm

* use int32

* update docstr
2022-10-13 15:40:01 +02:00
Patrick von Platen f1d4289be8
[Flax] Add test (#824) 2022-10-13 13:55:39 +02:00
Anton Lozhkov 323a9e1f6d
Add diffusers version and pipeline class to the Hub UA (#814)
* Add diffusers version and pipeline class to the Hub UA

* Fallback to class name for pipelines

* Update src/diffusers/modeling_utils.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* Update src/diffusers/modeling_flax_utils.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* Remove autoclass

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2022-10-12 21:54:40 +02:00
pink-red 60c384bcd2
Fix fine-tuning compatibility with deepspeed (#816) 2022-10-12 21:43:37 +02:00
Suraj Patil 008b608f15
[train_text2image] Fix EMA and make it compatible with deepspeed. (#813)
* fix ema

* style

* add comment about copy

* style

* quality
2022-10-12 19:13:22 +02:00
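
For context on what the EMA bookkeeping in this script amounts to, here is a minimal exponential-moving-average sketch; the class is illustrative only, not the script's actual `EMAModel`:

```python
import torch

class EMASketch:
    """Keep a decayed shadow copy of the parameters for evaluation/checkpointing."""

    def __init__(self, parameters, decay=0.9999):
        self.decay = decay
        self.shadow = [p.detach().clone() for p in parameters]

    @torch.no_grad()
    def step(self, parameters):
        # shadow <- decay * shadow + (1 - decay) * current parameters
        for s, p in zip(self.shadow, parameters):
            s.mul_(self.decay).add_(p.detach(), alpha=1 - self.decay)
```
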
Nathan Lambert 5afc2b60cd
add or fix license formatting in models directory (#808)
* add or fix license formatting

* fix quality
2022-10-12 08:19:35 -07:00
anton-l 96598639c0 Revert an accidental commit
This reverts commit 679c77f8ea.
2022-10-12 17:20:44 +02:00
anton-l 80be0744a6 Merge remote-tracking branch 'origin/main' 2022-10-12 17:18:42 +02:00
anton-l 679c77f8ea Add diffusers version and pipeline class to the Hub UA 2022-10-12 17:18:32 +02:00
Patrick von Platen db47b1e4d9
[Dummy imports] Better error message (#795)
* [Dummy imports] Better error message

* Test: load pipeline with LMS scheduler.

Fails with a cryptic message if scipy is not installed.

* Correct

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
2022-10-12 14:41:16 +02:00
Anton Lozhkov 966e2fc461
Minor package fixes (#809) 2022-10-12 13:22:51 +02:00
Patrick von Platen 6bc11782b7
[Img2Img] Fix batch size mismatch prompts vs. init images (#793)
* [Img2Img] Fix batch size mismatch prompts vs. init images

* Remove bogus folder

* fix

* Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_img2img.py

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
2022-10-12 13:00:36 +02:00
Patrick von Platen c1b6ea3dce
Update img2img.mdx 2022-10-12 00:52:30 +02:00
Pedro Cuenca 24b8b5cf5e
`mps`: Alternative implementation for `repeat_interleave` (#766)
* mps: alt. implementation for repeat_interleave

* style

* Bump mps version of PyTorch in the documentation.

* Apply suggestions from code review

Co-authored-by: Suraj Patil <surajp815@gmail.com>

* Simplify: do not check for device.

* style

* Fix repeat dimensions:

- The unconditional embeddings are always created from a single prompt.
- I was shadowing the batch_size var.

* Split long lines as suggested by Suraj.

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Suraj Patil <surajp815@gmail.com>
2022-10-11 20:30:09 +02:00
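
A sketch of the kind of workaround the commit above describes: expressing `repeat_interleave` along the batch dimension with `expand` + `reshape`, avoiding the op that was problematic on `mps` at the time. The function name is illustrative:

```python
import torch

def repeat_interleave_dim0(x: torch.Tensor, repeats: int) -> torch.Tensor:
    # equivalent to torch.repeat_interleave(x, repeats, dim=0):
    # each batch element is repeated `repeats` times consecutively
    batch = x.shape[0]
    return x[:, None].expand(batch, repeats, *x.shape[1:]).reshape(batch * repeats, *x.shape[1:])
```
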
Omar Sanseviero 757babfcad
Fix indentation in the code example (#802)
Update custom_pipelines.mdx
2022-10-11 20:26:52 +02:00
spezialspezial e895952816
Eventually preserve this typo? :) (#804) 2022-10-11 20:06:24 +02:00
Akash Pannu a124204490
Flax: Trickle down `norm_num_groups` (#789)
* pass norm_num_groups param and add tests

* set resnet_groups for FlaxUNetMidBlock2D

* fixed docstrings

* fixed typo

* using is_flax_available util and created require_flax decorator
2022-10-11 20:05:10 +02:00
Suraj Patil 66a5279a94
stable diffusion fine-tuning (#356)
* begin text2image script

* loading the datasets, preprocessing & transforms

* handle input features correctly

* add gradient checkpointing support

* fix output names

* run unet in train mode not text encoder

* use no_grad instead of freezing params

* default max steps None

* pad to longest

* don't pad when tokenizing

* fix encode on multi gpu

* fix stupid bug

* add random flip

* add ema

* fix ema

* put ema on cpu

* improve EMA model

* contiguous_format

* don't wrap vae and text encoder in accelerate

* remove no_grad

* use randn_like

* fix resize

* improve few things

* log epoch loss

* set log level

* don't log each step

* remove max_length from collate

* style

* add report_to option

* make scale_lr false by default

* add grad clipping

* add an option to use 8bit adam (see the sketch after this entry)

* fix logging in multi-gpu, log every step

* more comments

* remove eval for now

* address review comments

* add requirements file

* begin readme

* begin readme

* fix typo

* fix push to hub

* populate readme

* update readme

* remove use_auth_token from the script

* address some review comments

* better mixed precision support

* remove redundant to

* create ema model early

* Apply suggestions from code review

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

* better description for train_data_dir

* add diffusers in requirements

* update dataset_name_mapping

* update readme

* add inference example

Co-authored-by: anton-l <anton@huggingface.co>
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
2022-10-11 19:03:39 +02:00
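
For the 8-bit Adam option mentioned above, usage typically looks like the following sketch, assuming `bitsandbytes` is installed; the tiny `Linear` stands in for the UNet and the hyperparameters are placeholders:

```python
import bitsandbytes as bnb
import torch

model = torch.nn.Linear(4, 4)  # stand-in for the UNet being fine-tuned

# 8-bit AdamW from bitsandbytes as a lower-memory replacement for torch.optim.AdamW
optimizer = bnb.optim.AdamW8bit(model.parameters(), lr=1e-4, weight_decay=1e-2)
```
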
Suraj Patil 797b290ed0
support bf16 for stable diffusion (#792)
* support bf16 for stable diffusion

* fix typo

* address review comments
2022-10-11 12:02:12 +02:00
Henrik Forstén 81bdbb5e2a
DreamBooth DeepSpeed support for under 8 GB VRAM training (#735)
* Support deepspeed

* Dreambooth DeepSpeed documentation

* Remove unnecessary casts, documentation

Due to recent commits, some casts to half precision are no longer necessary.

Mention that DeepSpeed's version of Adam is about 2x faster.

* Review comments
2022-10-10 21:29:27 +02:00
Nathan Lambert 71ca10c6a4
fix typo docstring in unet2d (#798)
fix typo docstring
2022-10-10 11:25:20 -07:00
Patrick von Platen 22963ed826
Fix gradient checkpointing test (#797)
* Fix gradient checkpointing test

* more tests
2022-10-10 19:40:33 +02:00
Patrick von Platen fab17528da
[Low CPU memory] + device map (#772)
* add accelerate to load models with smaller memory footprint

* remove low_cpu_mem_usage as it is redundant

* move accelerate init weights context to modelling utils

* add test to ensure results are the same when loading with accelerate

* add tests to ensure ram usage gets lower when using accelerate

* move accelerate logic to a single snippet under modelling utils and remove it from configuration utils

* format code to pass quality check

* fix imports with isort

* add accelerate to test extra deps

* only import accelerate if device_map is set to auto

* move accelerate availability check to diffusers import utils

* format code

* add device map to pipeline abstraction

* lint it to pass PR quality check

* fix class check to use accelerate when using diffusers ModelMixin subclasses

* use low_cpu_mem_usage in transformers if device_map is not available

* NoModuleLayer

* comment out tests

* up

* uP

* finish

* Update src/diffusers/pipelines/stable_diffusion/safety_checker.py

* finish

* uP

* make style

Co-authored-by: Pi Esposito <piero.skywalker@gmail.com>
2022-10-10 18:05:49 +02:00
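
The low-CPU-memory path described in this commit builds on `accelerate`'s meta-device initialization; a rough sketch of the general mechanism, not the exact diffusers code:

```python
import torch
from accelerate import init_empty_weights

with init_empty_weights():
    # parameters are created on the "meta" device, so no real memory is allocated yet
    model = torch.nn.Linear(1024, 1024)

# real weights are then loaded into the model afterwards (e.g. from a checkpoint),
# optionally dispatched across devices according to a device_map
```
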
Nathan Lambert feaa73243d
add sigmoid betas (#777)
* add sigmoid betas

* convert to torch

* add comment on source
2022-10-10 08:28:10 -07:00
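
A sketch of a sigmoid beta schedule of the kind this commit adds; the exact constants in diffusers may differ:

```python
import torch

def sigmoid_beta_schedule(num_train_timesteps: int,
                          beta_start: float = 1e-4,
                          beta_end: float = 2e-2) -> torch.Tensor:
    # squash a linear ramp over [-6, 6] through a sigmoid, then rescale to [beta_start, beta_end]
    betas = torch.linspace(-6, 6, num_train_timesteps)
    return torch.sigmoid(betas) * (beta_end - beta_start) + beta_start
```
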
Nathan Lambert a73f8b7251 Clean up resnet.py file (#780)
* clean up resnet.py

* make style and quality

* minor formatting
2022-10-10 08:27:50 -07:00
lowinli 5af6eed9ee
debug an exception (#638)
* debug an exception

If dst_path does not exist, src_path.samefile(dst_path) raises an exception (see the sketch after this entry):
FileNotFoundError: [Errno 2] No such file or directory: '/home/lilongwei/notebook/onnx_diffusion/vae_decoder/model.onnx'

* Update src/diffusers/onnx_utils.py

Co-authored-by: Anton Lozhkov <aglozhkov@gmail.com>
2022-10-10 13:02:18 +02:00
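
The fix boils down to guarding the `samefile` comparison on existence of the destination. An illustrative helper, not the exact code from `onnx_utils.py`:

```python
from pathlib import Path
import shutil

def copy_unless_same(src_path: Path, dst_path: Path) -> None:
    # Path.samefile raises FileNotFoundError if dst_path does not exist,
    # so check existence before comparing
    if not dst_path.exists() or not src_path.samefile(dst_path):
        dst_path.parent.mkdir(parents=True, exist_ok=True)
        shutil.copyfile(src_path, dst_path)
```
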
Patrick von Platen f3983d16ee
[Tests] Fix tests (#774)
* Fix tests

* remove bogus file
2022-10-07 19:38:40 +02:00
Suraj Patil 92d7086366
[img2img, inpainting] fix fp16 inference (#769)
* handle dtype in vae and image2image pipeline

* fix inpaint in fp16

* dtype should be handled in add_noise

* style

* address review comments

* add simple fast tests to check fp16

* fix test name

* put mask in fp16
2022-10-07 17:01:51 +02:00
Suraj Patil ec831b6a72
[schedulers] handle dtype in add_noise (#767)
* handle dtype in vae and image2image pipeline

* handle dtype in add_noise

* don't modify vae and pipeline

* remove the if
2022-10-07 16:44:19 +02:00
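
The dtype handling referred to here roughly means casting the scheduler's schedule tensors to the latents' dtype inside `add_noise`, along these lines; a simplified sketch, not the exact scheduler code:

```python
import torch

def add_noise(original_samples: torch.Tensor, noise: torch.Tensor,
              alphas_cumprod: torch.Tensor, timesteps: torch.Tensor) -> torch.Tensor:
    # cast the schedule to the samples' device/dtype so fp16 latents keep working
    alphas_cumprod = alphas_cumprod.to(device=original_samples.device, dtype=original_samples.dtype)
    timesteps = timesteps.to(original_samples.device)

    sqrt_alpha_prod = alphas_cumprod[timesteps] ** 0.5
    sqrt_one_minus_alpha_prod = (1.0 - alphas_cumprod[timesteps]) ** 0.5
    # broadcast the per-timestep scalars to the sample shape
    while sqrt_alpha_prod.ndim < original_samples.ndim:
        sqrt_alpha_prod = sqrt_alpha_prod.unsqueeze(-1)
        sqrt_one_minus_alpha_prod = sqrt_one_minus_alpha_prod.unsqueeze(-1)
    return sqrt_alpha_prod * original_samples + sqrt_one_minus_alpha_prod * noise
```
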
Kevin Turner cb0bf0bd0b
fix(DDIM scheduler): use correct dtype for noise (#742)
Otherwise, it crashes when eta > 0 with float16.
2022-10-07 16:02:32 +02:00
James R T e0fece2b26
Add final latent slice checks to SD pipeline intermediate state tests (#731)
This is to ensure that the final latent slices stay somewhat consistent as more changes are introduced into the library.

Signed-off-by: James R T <jamestiotio@gmail.com>

Signed-off-by: James R T <jamestiotio@gmail.com>
2022-10-07 15:50:20 +02:00
Justin Chu 75bb6d2d46
Fix ONNX conversion script opset argument type (#739)
The opset argument should be an `int` but was set as a `str`.
2022-10-07 15:47:43 +02:00
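
The fix amounts to declaring the CLI argument with an integer type so the ONNX export receives a number rather than a string; a sketch (the default value shown is illustrative):

```python
import argparse

parser = argparse.ArgumentParser()
# previously parsed as str (argparse's default), which passed a string like "14" downstream
parser.add_argument("--opset", type=int, default=14, help="ONNX opset version")
args = parser.parse_args()
```
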
YaYaB 906e4105d7
Fix push_to_hub for dreambooth and textual_inversion (#748)
* Fix push_to_hub for dreambooth and textual_inversion

* Use repo.push_to_hub instead of push_to_hub
2022-10-07 11:50:28 +02:00
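
A sketch of the pattern the fix moves to, using `huggingface_hub.Repository` as it existed at the time; the output directory and repo id are placeholders:

```python
from huggingface_hub import Repository

# clone (or reuse) the target model repo into the training output directory
repo = Repository("output_dir", clone_from="user/my-model")

# ... training writes checkpoints into output_dir ...

# commit and push the contents of the local clone
repo.push_to_hub(commit_message="End of training", blocking=False)
```
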
Patrick von Platen 7258dc4943 remove bogus folder no.2 2022-10-07 11:21:24 +02:00
Patrick von Platen c93a8cc901 remove bogus folder 2022-10-07 11:20:26 +02:00
Patrick von Platen 9a95414ea1 Bump to v0.5.0dev0 2022-10-07 11:17:55 +02:00
Patrick von Platen 91ddd2a25b Release: v0.4.1 2022-10-07 10:37:31 +02:00
apolinario fdfa7c8f15
Change fp16 error to warning (#764)
* Swap fp16 error to warning

Also remove the associated test

* Formatting

* warn -> warning

* Update src/diffusers/pipeline_utils.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* make style

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2022-10-07 10:31:52 +02:00
anton-l d3f1a4c0f0 Revert "Bump to v0.5.0.dev0"
This reverts commit 9531150128.
2022-10-06 20:42:14 +02:00
Patrick von Platen ae672d58ef
[Tests] Lower required memory for clip guided and fix super edge-case git pipeline module bug (#754)
* [Tests] Lower required memory

* fix

* up

* uP
2022-10-06 19:15:26 +02:00
anton-l 2fa55fc7d4 Merge remote-tracking branch 'origin/main' 2022-10-06 19:12:21 +02:00
anton-l 9531150128 Bump to v0.5.0.dev0 2022-10-06 19:12:01 +02:00
Suraj Patil 737195dd2e Created using Colaboratory 2022-10-06 19:08:00 +02:00
Suraj Patil 435433cefd
Update clip_guided_stable_diffusion.py 2022-10-06 18:38:09 +02:00
anton-l 970e30606c Revert "[v0.4.0] Temporarily remove Flax modules from the public API (#755)"
This reverts commit 2e209c30cf.
2022-10-06 18:35:40 +02:00
anton-l c15cda03ca Bump to v0.4.1.dev0 2022-10-06 18:34:59 +02:00