* [Better scheduler docs] Improve usage examples of schedulers
* finish
* fix warnings and add test
* finish
* more replacements
* adapt fast tests hf token
* correct more
* Apply suggestions from code review
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* Integrate compatibility with the Euler scheduler
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
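The scheduler-docs and Euler-compatibility commits above boil down to the swap-the-scheduler pattern. A minimal sketch, assuming a public Stable Diffusion checkpoint (the model id below is an illustration, not part of these commits):

```python
import torch
from diffusers import StableDiffusionPipeline, EulerDiscreteScheduler

# Load a pipeline, then rebuild its scheduler from the existing scheduler's
# config so shared settings (beta schedule, train timesteps, ...) carry over.
pipe = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4")
pipe.scheduler = EulerDiscreteScheduler.from_config(pipe.scheduler.config)

if torch.cuda.is_available():
    pipe = pipe.to("cuda")

image = pipe("an astronaut riding a horse", num_inference_steps=30).images[0]
```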
* add a method to Stable Diffusion to enable CUDA with minimal GPU memory usage
* add test to minimal cuda memory usage
* ensure all models but the unet are on torch.float32
* move to cpu_offload along with minor internal changes to make it work
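The minimal-GPU-memory commits above build on accelerate's `cpu_offload` hooks. A hedged sketch of the user-facing pattern, assuming accelerate is installed and using the `enable_sequential_cpu_offload` name that later shipped in diffusers (the exact method name introduced by this commit may differ):

```python
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4")
# Submodules (unet, text_encoder, vae, ...) stay on the CPU and are moved to
# the GPU only for their forward pass, so peak GPU memory is roughly the size
# of the largest single submodule rather than the whole pipeline.
pipe.enable_sequential_cpu_offload()

image = pipe("a photo of an astronaut", num_inference_steps=30).images[0]
```

Under the hood this appears to be accelerate's `cpu_offload(module, execution_device=...)` applied to each submodule, which matches the "move to cpu_offload" commit above.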
* make it test against the accelerate master branch
* coming back, it's official: I don't know how to make it test against the master branch of accelerate
* make it install accelerate from master on tests
* go back to accelerate>=0.11
* undo prettier formatting on yml files
* undo prettier formatting on yml files again
* [CI] Add Apple M1 tests
* setup-python
* python build
* conda install
* remove branch
* only Python 3.8 is built for osx-arm64
* try fetching prebuilt tokenizers
* use user cache
* update shells
* Reports and cleanup
* -> MPS
* Disable parallel tests
* Better naming
* investigate worker crash
* re-enable pytest-xdist
* restart
* num_workers=2
* still crashing?
* faulthandler for segfaults
* remove restarts, stop on segfault
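The segfault-hunting commits above lean on Python's faulthandler module, which prints a traceback when the interpreter crashes hard. A minimal sketch of enabling it for a test session (the conftest.py placement is an assumption; pytest can also enable it through its own faulthandler plugin):

```python
# conftest.py (assumed location): dump Python stack traces on hard crashes so
# a segfaulting MPS test still leaves a usable trace in the CI log.
import faulthandler

faulthandler.enable()
```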
* torch version
* change installation order
* Use pre-RC version of PyTorch.
To be updated when it is released.
* Skip crashing test on MPS, add new one that works.
* Skip cuda tests on the mps device.
* Actually use generator in test.
I think this was a typo.
* make style
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
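Most of the M1 work above revolves around picking the right torch device in tests and seeding reliably. A sketch of that pattern; the helper name is hypothetical, the PyTorch calls are real (a CPU generator is used because MPS did not support device-local generators at the time):

```python
import torch

def get_torch_device() -> str:
    # Hypothetical helper: prefer CUDA, then Apple's MPS backend, then CPU.
    if torch.cuda.is_available():
        return "cuda"
    if torch.backends.mps.is_available():
        return "mps"
    return "cpu"

torch_device = get_torch_device()

# Seed on the CPU and move the result over, so the same test code runs on
# cuda, mps and cpu ("Actually use generator in test").
generator = torch.Generator(device="cpu").manual_seed(0)
latents = torch.randn(1, 4, 64, 64, generator=generator).to(torch_device)
```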
* Init CI
* clarify cpu
* style
* Check scripts quality too
* Drop nvidia-smi for cpu tests
* Run PR tests on cpu docker envs
* Update .github/workflows/push_tests.yml
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* Try minimal python container
* Print env, install stable GPU torch
* Manual torch install
* remove deprecated platform.dist()
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
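On the last commit: `platform.dist()` was deprecated since Python 3.5 and removed in Python 3.8, so any environment-reporting code that used it has to switch to another API. A hedged sketch of one replacement (the reporting function itself is hypothetical, not the exact code from this PR):

```python
import platform

def environment_info() -> dict:
    # platform.dist() is gone in Python 3.8+; platform.platform() and
    # platform.machine() still give a usable OS description for CI logs.
    return {
        "python": platform.python_version(),
        "os": platform.platform(),
        "machine": platform.machine(),
    }

print(environment_info())
```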