Tutorial: How to Use InPainting in the Stable Diffusion Web UI
For my latest original, free bedtime story, I decided to generate a lot of AI art to accompany the text. Here is how I did it by using img2img and inpainting in Stable Diffusion.
For Sarah in the Secret Garden, I time-boxed the AI generation part to one hour. That hour covered both the text generation and the AI image generation that went along with it.
We have decided to change the AI service we use to run the Stable Diffusion webui. Starting out, we chose the AWS-based Stable Diffusion – Create Stunning Images on Your Cloud GPU Server. We have since switched to Runpod.io, where we use their RTX A4000 instances to run the Stable Diffusion webui, which Runpod offers out of the box via an instance template. Runpod gives us more flexibility. One of the better parts of Runpod is…
I’m going to admit this right now: using AI art generators and text writers is a lot harder than I anticipated. Part of that is because I’m still learning AI prompt parameters and settings. It also doesn’t help that I’m no expert in image editing software like Photoshop and GIMP. However, the biggest challenges with the AI-generated art for Little Frog, Big Dragon were of my own making: I selected both a style and a subject that are not well suited to publicly available Stable Diffusion models.
I’ll admit that when we came up with the idea of doing a blog, AI-generated kids’ stories were not at the top of our list. We’ve been playing with AIs to generate images for some time now, but we initially considered it just an amusing distraction. Then I quickly realized there is a niche for AI-generated art that, to our knowledge, has been completely overlooked by others: illustrated picture books for kids.