```python
import torch
import requests
from PIL import Image
from io import BytesIO

from diffusers import StableDiffusionImg2ImgPipeline
```

Load the pipeline:

```python
device = "cuda"
# the diff truncates this call; the full checkpoint id and the dtype below
# are assumptions based on the standard img2img example
pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
).to(device)
```

Download an initial image and preprocess it so we can pass it to the pipeline:

```python
url = "https://raw.githubusercontent.com/CompVis/stable-diffusion/main/assets/stable-samples/img2img/sketch-mountains-input.jpg"

# the preprocessing lines are elided in the diff; a standard download-and-resize
# step is assumed here (768x512 keeps both dimensions multiples of 8)
response = requests.get(url)
init_image = Image.open(BytesIO(response.content)).convert("RGB")
init_image = init_image.resize((768, 512))
init_image
```

![img](https://huggingface.co/datasets/YiYiXu/test-doc-assets/resolve/main/image_2_image_using_diffusers_cell_8_output_0.jpeg)

Define the prompt and run the pipeline:

```python
prompt = "A fantasy landscape, trending on artstation"

# the pipeline call is elided in the diff; the strength and guidance values
# below are assumptions based on the surrounding text
images = pipe(prompt=prompt, image=init_image, strength=0.75, guidance_scale=7.5).images
images[0]
```

<Tip>

💡 `strength` controls how much noise is added to the initial image: lower values stay closer to the input, while values approaching 1.0 allow more variation.

</Tip>

Let's generate two images with the same pipeline and seed, but with different values for `strength`:

```python
generator = torch.Generator(device=device).manual_seed(1024)

# the calls between the generator and `image` are elided in the diff; a call
# of this shape, repeated with a different `strength`, is assumed
image = pipe(prompt=prompt, image=init_image, strength=0.8, guidance_scale=7.5, generator=generator).images[0]
image
```

![img](https://huggingface.co/datasets/YiYiXu/test-doc-assets/resolve/main/image_2_image_using_diffusers_cell_14_output_1.jpeg)

As you can see, when using a lower value for `strength`, the generated image is closer to the original `image`.

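To see the effect directly, you can run the seeded pipeline once per `strength` value; a small sketch (the strength values and file names here are illustrative):

```python
# compare two strength values with the same seed (values are examples)
for strength in (0.3, 0.8):
    generator = torch.Generator(device=device).manual_seed(1024)
    image = pipe(
        prompt=prompt,
        image=init_image,
        strength=strength,
        guidance_scale=7.5,
        generator=generator,
    ).images[0]
    image.save(f"fantasy_landscape_strength_{strength}.png")
```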

Now let's use a different scheduler - [LMSDiscreteScheduler](https://huggingface.co/docs/diffusers/api/schedulers#diffusers.LMSDiscreteScheduler):

```python
from diffusers import LMSDiscreteScheduler

# the rest of this block is cut off in the diff; creating the scheduler from
# the pipeline's existing config is the standard way to swap it in
lms = LMSDiscreteScheduler.from_config(pipe.scheduler.config)
pipe.scheduler = lms
```
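With the scheduler swapped in, the pipeline is called exactly as before; a minimal usage sketch (the seed and `strength` value are illustrative):

```python
# re-run the seeded generation under the LMS scheduler
generator = torch.Generator(device=device).manual_seed(1024)
image = pipe(
    prompt=prompt,
    image=init_image,
    strength=0.75,
    guidance_scale=7.5,
    generator=generator,
).images[0]
image.save("fantasy_landscape_lms.png")
```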