Update img2img.mdx (#2688)

Fix typos
This commit is contained in:
M. Tolga Cangöz 2023-03-15 20:15:59 +03:00 committed by GitHub
parent b4bb5345cd
commit 3584f6b345
No known key found for this signature in database
GPG Key ID: 4AEE18F83AFDEB23
1 changed file with 6 additions and 6 deletions


@@ -33,7 +33,7 @@ from io import BytesIO
 from diffusers import StableDiffusionImg2ImgPipeline
 ```
-Load the pipeline
+Load the pipeline:
 ```python
 device = "cuda"
@@ -42,7 +42,7 @@ pipe = StableDiffusionImg2ImgPipeline.from_pretrained("runwayml/stable-diffusion
 )
 ```
-Download an initial image and preprocess it so we can pass it to the pipeline.
+Download an initial image and preprocess it so we can pass it to the pipeline:
 ```python
 url = "https://raw.githubusercontent.com/CompVis/stable-diffusion/main/assets/stable-samples/img2img/sketch-mountains-input.jpg"
@@ -55,7 +55,7 @@ init_image
 ![img](https://huggingface.co/datasets/YiYiXu/test-doc-assets/resolve/main/image_2_image_using_diffusers_cell_8_output_0.jpeg)
-Define the prompt and run the pipeline.
+Define the prompt and run the pipeline:
 ```python
 prompt = "A fantasy landscape, trending on artstation"
@@ -67,7 +67,7 @@ prompt = "A fantasy landscape, trending on artstation"
 </Tip>
-Let's generate two images with same pipeline and seed, but with different values for `strength`
+Let's generate two images with same pipeline and seed, but with different values for `strength`:
 ```python
 generator = torch.Generator(device=device).manual_seed(1024)
@@ -89,9 +89,9 @@ image
 ![img](https://huggingface.co/datasets/YiYiXu/test-doc-assets/resolve/main/image_2_image_using_diffusers_cell_14_output_1.jpeg)
-As you can see, when using a lower value for `strength`, the generated image is more closer to the original `image`
-Now let's use a different scheduler - [LMSDiscreteScheduler](https://huggingface.co/docs/diffusers/api/schedulers#diffusers.LMSDiscreteScheduler)
+As you can see, when using a lower value for `strength`, the generated image is more closer to the original `image`.
+Now let's use a different scheduler - [LMSDiscreteScheduler](https://huggingface.co/docs/diffusers/api/schedulers#diffusers.LMSDiscreteScheduler):
 ```python
 from diffusers import LMSDiscreteScheduler
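The `strength` behavior mentioned in the last hunk ("a lower value for `strength` keeps the generated image closer to the original") can be sketched in a few lines. This is not the diffusers implementation itself, only an illustration of the usual img2img scheme where `strength` sets the fraction of the denoising schedule that actually runs; the helper name and step counts below are hypothetical:

```python
# Hypothetical helper illustrating common img2img `strength` semantics:
# lower strength -> fewer denoising steps -> output closer to the input image.
def steps_actually_run(num_inference_steps: int, strength: float) -> int:
    # Only the final `strength` fraction of the schedule is denoised;
    # the earlier steps are skipped, preserving more of the original image.
    return min(int(num_inference_steps * strength), num_inference_steps)

print(steps_actually_run(50, 0.75))  # 37 of 50 steps: strong restyling
print(steps_actually_run(50, 0.25))  # 12 of 50 steps: stays close to the input
```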