Process

Guide: Stable Diffusion’s CFG Scale Explained

In Stable Diffusion, CFG stands for the Classifier-Free Guidance scale. CFG is the setting that controls how closely Stable Diffusion should follow your text prompt, and it applies to both text-to-image (txt2img) and image-to-image (img2img) generations. In theory, the higher the CFG value, the more strictly the model will follow your prompt. The default value is 7, which strikes a good balance between creative freedom and following your direction. A value of 1 will give Stable Diffusion…
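In implementation terms, the CFG scale is the weight used to blend the model's unconditional and prompt-conditioned noise predictions at each denoising step. A minimal sketch of that blend (the function and variable names are illustrative, not taken from Stable Diffusion's code):

```python
def apply_cfg(noise_uncond, noise_cond, cfg_scale=7.0):
    # Push the prediction away from the unconditional output and toward
    # the prompt-conditioned one; a larger cfg_scale follows the prompt
    # more strictly.
    return noise_uncond + cfg_scale * (noise_cond - noise_uncond)
```

Note that at `cfg_scale=1` the expression collapses to `noise_cond`, the raw prompt-conditioned prediction, while larger values amplify the difference between the conditioned and unconditioned outputs.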

Guide: What Is A Stable Diffusion Seed and How To Use It

Since I started using Stable Diffusion, I have seen a lot of confusion and misinformation on the Internet about what seeds are, how Stable Diffusion uses them, and how users can go about using them. To help clear things up, I am providing this free guide explaining what seeds are and how you can use them to fine-tune your generated images. What is a Seed? Here’s the answer, plain and simple: a…
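The core idea the guide unpacks is that a seed simply fixes the random-number generator that produces the initial latent noise, so the same seed always reproduces the same starting point. A minimal NumPy sketch of that behaviour (the function name and tensor shape are illustrative):

```python
import numpy as np

def initial_latents(seed, shape=(1, 4, 64, 64)):
    # Seeding the RNG makes the Gaussian starting noise deterministic:
    # same seed -> same latents -> same image, all other settings equal.
    rng = np.random.default_rng(seed)
    return rng.standard_normal(shape)
```

Two calls with the same seed return identical noise; change the seed and you get a different starting point, and hence a different image.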

Tutorial: How to Set Up ControlNet in Stable Diffusion Web UI

Maybe you’ve heard about ControlNet, or maybe you haven’t but have seen some of the truly amazing images it can achieve. What is it, and how can you set it up and start using it yourself in Stable Diffusion? Luckily, this guide is here to help you get started! What is ControlNet and why use it? ControlNet takes the standard img2img tool in Stable Diffusion and ratchets it up…

Get All the Features of Stable Diffusion Without Installing It Yourself

Although Stable Diffusion is completely free and open source, there are still quite a few barriers to getting it running yourself. Sure, you can use polished commercial apps such as Midjourney or online-only text-to-image generators like Lexica, but you’re here because you want to take the next step: significantly more control over your images and access to all of Stable Diffusion’s features. This article will tell you…

How to Train an Embedding in Stable Diffusion

If you want to take your AI image generation to the next level in Stable Diffusion and consistently get the same style across many images with different subjects, then training an embedding is worth your while. There are a few situations in which this could be helpful: This tutorial uses screenshots from the Stable Diffusion AUTOMATIC1111 v1.5 Web UI running on RunPod.io. What is an Embedding? The embedding layer encodes inputs such as text prompts into low-dimensional vectors…
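Conceptually, an embedding is a learned lookup table that maps each token to a vector; training an embedding (textual inversion) means optimizing new rows of that table for a custom concept token. A toy sketch of the lookup (the vocabulary, dimensions, and names here are made up for illustration; Stable Diffusion 1.x uses 768-dimensional CLIP token embeddings):

```python
import numpy as np

vocab = {"a": 0, "fox": 1, "in": 2, "watercolor": 3, "style": 4}
# One learned vector per token in the vocabulary.
table = np.random.default_rng(0).standard_normal((len(vocab), 8))

def embed(tokens):
    # Map each token to its row in the embedding table.
    return table[[vocab[t] for t in tokens]]
```

A trained style embedding effectively adds a new token whose row in this table has been tuned so the prompt steers generation toward your style.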

GPT-3’s Davinci-3 vs. Davinci-2 Models

OpenAI recently released a new GPT-3 model version, Davinci-3. This model greatly improves on the formerly most powerful model, Davinci-2; however, it also has some oddities that are worth knowing about. You can select Davinci-3 in the Model drop-down menu on the right side of the Playground web interface. Poetry and the Concept of Rhyming First, rhyming is much more advanced in the Davinci-3 model, with every couplet having a…
