Guide: What Is A Stable Diffusion Seed and How To Use It

Since I started using Stable Diffusion, I have seen a lot of confusion and misinformation on the Internet about what seeds are, how Stable Diffusion uses them, and how you can put them to work. So to help clear things up, I am providing this free guide to explain what seeds are and how you can use them to fine-tune your generated images.

What is a Seed?

Here’s the answer, plain and simple: a seed is a number from which Stable Diffusion generates noise. That’s it. Here are all the things that a seed is NOT:

  • A seed is NOT the image of noise itself
  • A seed does NOT contain all parameters used to generate an image
  • A seed is NOT associated with a specific text prompt
  • A seed does NOT contain specific characteristics like poses, clothing, backgrounds, artistic styles, etc.

Stable Diffusion’s noise generator is not truly random, which means it will reliably reproduce a pattern of noise from a seed number. Likewise, the algorithm that Stable Diffusion uses to produce an image from that noise is also not entirely random, even though to our human eyes it looks random. Explaining this process is worthy of an article all on its own, but for the purposes of this guide, the below images are good enough to demonstrate what is going on behind the scenes in Stable Diffusion:

How Stable Diffusion generates an image from the initial noise produced from a seed number (images from Chris McCormick)
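
If you want to see this determinism for yourself, here is a minimal sketch in PyTorch (this is illustrative, not the Web UI's actual code). The tensor shape assumes Stable Diffusion v1.x, which denoises a 4-channel latent at 1/8 of the output resolution:

```python
# Minimal sketch: a fixed seed always produces the same "random" noise tensor
import torch

def initial_noise(seed: int, height: int = 512, width: int = 512) -> torch.Tensor:
    generator = torch.Generator().manual_seed(seed)
    # SD v1.x works in a 4-channel latent space at 1/8 the output resolution
    return torch.randn((1, 4, height // 8, width // 8), generator=generator)

a = initial_noise(1342333610)
b = initial_noise(1342333610)
print(torch.equal(a, b))  # True: the same seed gives identical noise every time
```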

The implication of the above is that image generation in Stable Diffusion is actually repeatable. That opens up a lot of potential use cases:

  • You can reliably reproduce a generated image across multiple sessions by entering the same seed number, prompt and parameters you used to create the image in the first place (see the sketch after this list)
  • A user can work off of someone else’s generated image by starting from the same seed, prompt and parameters as them
  • You can make minor tweaks to an image by slightly changing the prompt or parameters without significantly altering the image’s overall composition
  • Some seeds have been identified by Stable Diffusion user communities as having a higher probability of producing images with specific color palettes or compositions. Knowing what those seeds are and using them therefore gives you a higher probability of getting an output image containing a characteristic you want
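
To make the first point concrete, here is a hedged sketch using Hugging Face’s diffusers library rather than the Web UI (the prompt and parameters are placeholders; the seed is the one used later in this guide). On the same hardware and library versions, both calls return pixel-identical images:

```python
# Sketch: the same seed + prompt + parameters reproduce the same image
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")

def generate(seed: int):
    # A seeded generator stands in for the Web UI's Seed field
    generator = torch.Generator().manual_seed(seed)
    return pipe(
        "portrait of a princess, brunette hair",  # placeholder prompt
        num_inference_steps=30,                   # placeholder parameters
        guidance_scale=7.5,
        generator=generator,
    ).images[0]

img1 = generate(1342333610)
img2 = generate(1342333610)  # identical to img1 on the same setup
```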

/u/wonderflex did some very detailed experiments on how different seeds affect images generated from the same prompt describing Katy Perry. You can view the full description of their methodology and results, but here is my summary of their conclusions:

  • When the exact same prompt and parameters are used but the seed number is changed, you can get very different looking output images
  • If you keep the same seed number but change the text prompt by adding a single word modifier, you can alter the output images without significantly changing their general look or color palette
Study made by /u/wonderflex showing the relationship between seeds (rows) and prompts (columns)

The Seed Field in the Stable Diffusion Web UI

Seeds are conceptually very powerful, so how do you go about using them yourself? In the Stable Diffusion Web UI, under both the txt2img and img2img tabs, you will see a field called Seed. Here are some things to keep in mind:

  • The default setting for Seed is -1, which means that Stable Diffusion will pick a random seed number each time it generates images from your prompt (see the sketch below).
  • You can also type in a specific seed number into this field.
  • The dice button to the right of the Seed field will reset it to -1.
  • The green recycle button will populate the field with the seed number used in your generated image.
The Seed field under the txt2img tab in the Stable Diffusion Web UI
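
Under the hood, -1 is just a sentinel value meaning “pick a seed for me.” A hypothetical sketch of that convention (the function name is mine, not the Web UI’s):

```python
# Hypothetical sketch of the "-1 means random" convention
import random

def resolve_seed(requested: int) -> int:
    if requested == -1:
        # Draw a fresh random seed; the UI records it so you can reuse it later
        return random.randint(0, 2**32 - 1)
    return requested
```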

How to Use Seeds to Maintain Consistency in Your Images

Let’s say you’ve already generated a batch of images in the Stable Diffusion Web UI and you’re generally happy with them and want to keep their overall look, color and composition, but you just want to make some minor modifications. Below is an example of a princess that I generated while I was making images to accompany the classic fairy tale, Prince Hyacinth and the Dear Little Princess:

Populate the Seed field by selecting the generated image and hitting the green recycle button

For the purposes of this guide, let’s say I like this image overall but I want the princess to have blonde hair instead of brunette. In this instance, all I have to do is select one of the images in the batch, hit the button with the green recycle symbol, and the seed that Stable Diffusion used to generate the princess is populated into the Seed field.

To reproduce this image with blonde hair, here is what should, can and cannot be done with respect to the Stable Diffusion parameters:

  • SHOULD: Add “blonde hair” to the text prompt, and for added insurance, specify “brunette hair” in the negative prompt (see the sketch after this list)
  • CAN: In general it is OK to change the number of sampling steps and the CFG scale a little and still reproduce a result similar to the first princess, but you will see some slight variation in your new images. It’s hard to give specific advice on how much change is allowable because different sampling methods have different recommended step counts and CFG scales, so you may need to experiment with these yourself to get what you want.
  • CANNOT: Do NOT change the width or height, or else your image will come out completely different! You also risk getting a very different image if you change the sampling method, but because there are so many and the ones available are constantly changing, your mileage will vary a lot. To minimize the risk of unexpected changes, it’s best to keep the sampling method the same as what you originally used.
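
Here is what those rules look like as a hedged diffusers sketch (again illustrative rather than the Web UI’s code; the prompts are placeholders and the seed is the one from this example):

```python
# Sketch: same seed and parameters, prompt tweaked to change the hair color
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")

generator = torch.Generator().manual_seed(1342333610)  # seed from the first batch
image = pipe(
    prompt="portrait of a princess, blonde hair",  # SHOULD: add "blonde hair"
    negative_prompt="brunette hair",               # SHOULD: added insurance
    num_inference_steps=30,    # CAN: nudge steps/CFG a little
    guidance_scale=7.5,
    width=512,                 # CANNOT: changing width/height changes everything
    height=512,
    generator=generator,
).images[0]
```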

So after simply adding “blonde hair” to the text prompt, keeping all other parameters the same and using seed 1342333610, this is the new output I get:

Modified image produced from the same seed as the first image

As you can see, the second batch of images of the princess looks extremely similar to the first, but she now has blonde hair. Some other small details like her necklace and the embroidery on her dress have changed, but the overall look of the image has stayed the same.

Using Seeds to Modify Your Generated Images

There are many reasons why you might want to start off with a specific seed. Maybe you want to replicate someone else’s AI generated work. Maybe you are overwhelmed and want a consistent starting point. Or maybe you know of a specific seed out there that has a high probability of producing a certain result, regardless of prompt. For this guide, let’s take this last use case as an example.

In /u/wonderflex’s study of Katy Perry above, we can observe that seed 8002 seems to have a tendency to produce red and gold in the bottom half of the image. Let’s say that for the image of my Dear Little Princess, I also want her in bright red and gold clothing. I can then input seed 8002 to see how it affects the image of the princess versus seed 1342333610. With the exact same text prompt and parameters, here is what she looks like using seed 8002:

Princess portrait using Stable Diffusion seed 8002

The princess is now a lot more blinged out and flamboyant in a bright red dress. Keep in mind that this method of selecting seeds to change specific details in an image may not give you the same control as simply typing in a different text prompt.

How to Find Seeds

There are many AI generated images that are now freely available with their workflows and seeds, posted by users within various Stable Diffusion communities on Reddit, Discord and other forums. One of the easiest and most efficient ways of sorting through images to find seeds is through Lexica. On their main page, simply click the black circular Filter button to the right of the search bar and select Stable Diffusion 1.5, type in a description of the kind of AI generated images you’re looking for, and hit the Search button.

NOTE: If you do not change the Lexica search filter to Stable Diffusion 1.5, then the default option is Lexica Aperture, whose results will NOT include the seed number!

Searching for Stable Diffusion generated images on Lexica

Select an image in the search results. A popup will appear showing the text prompt and parameters the user input to get that image. If you selected Stable Diffusion 1.5 in the search filter settings, then this window will also provide the seed number for that image.

Stable Diffusion v1.5 image prompt and seed information provided by Lexica

With these tools and information, you should have everything you need to get started using seeds in the Stable Diffusion Web UI. Good luck and have fun!
