Frustrated by creepy fingers in your Stable Diffusion masterpieces? You’re not alone! This powerful tool can churn out amazing art, but hands can sometimes end up looking like melted candlesticks. Fear not, fellow creators! This guide dives right in, showing you easy-to-use tricks to fix those wonky digits.
What makes this different? We focus on clear, simple solutions anyone can use, no matter your experience level. By making these fixes accessible, we want to empower everyone – researchers, artists, everyone! – to create hands that look as incredible as the rest of your art. Let’s banish the finger monsters and get creating!
Fix Hands in Stable Diffusion
Negative Prompt
One way is to use a negative prompt. A negative prompt is a second text field that tells Stable Diffusion what to avoid generating. Unlike the main prompt, you simply list the unwanted traits, so terms like “bad hands” or “distorted fingers” work better than phrases that start with “no.”
- Be specific. The more precise your negative prompt, the better Stable Diffusion can steer away from the problem. For example, “distorted fingers” or “extra fingers” targets the issue more directly than a generic “bad hands.”
- Combine multiple terms. You can chain several negative terms together, separated by commas, to discourage different kinds of hand errors at once, for example “bad hands, distorted fingers, extra fingers.”
- Experiment. The best way to learn how to use negative prompts is to experiment. Try different negative prompts and see what works best for you.
Here are some examples of negative prompts that you can use to fix bad hands and fingers in Stable Diffusion:
- bad hands
- distorted fingers
- ugly hands
- creepy hands
- unnatural hands
- robotic hands
- alien hands
- cartoon hands
- pixelated hands
- blurry hands
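If you drive Stable Diffusion from Python with the diffusers library rather than a web UI, passing these terms looks something like the minimal sketch below. The model ID, prompts, and settings here are placeholder assumptions to adapt to your own setup.

```python
import torch
from diffusers import StableDiffusionPipeline

# Placeholder model ID; swap in whichever checkpoint you normally use.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
).to("cuda")

image = pipe(
    prompt="portrait of a woman waving at the camera, detailed hands",
    # Negative prompt: a comma-separated list of things to steer away from.
    negative_prompt="bad hands, distorted fingers, ugly hands, blurry hands",
    num_inference_steps=30,
    guidance_scale=7.5,
).images[0]

image.save("waving_portrait.png")
```

If you stick with a web UI instead, the same comma-separated string goes straight into the “Negative prompt” box.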
Inpaint Mode
So, you’ve generated this awesome image in Stable Diffusion, but those hands are just…off? Fear not, fellow image conjurer! The “Inpaint” mode is your secret weapon for hand fixes.
Here’s the magic: brush a mask over the problem zone (think wonky fingers or a misplaced thumb). Set the inpaint area to “only masked” so the model redraws just that region, then play with the denoising strength between 0.30 and 0.55 to see what works best.
Want even more control? Experiment with prompts! Throw in variations like “hand” or “perfect hand” in the prompt box. It’s like giving Inpaint a little nudge in the right direction. With this combo, you can seriously improve those hands and take your Stable Diffusion creations to the next level.
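If you’d rather script this than click through the web UI, here is a minimal sketch of the same idea with the diffusers inpainting pipeline. The file names, model ID, and prompt are placeholder assumptions; the `strength` argument plays roughly the same role as the denoising strength slider, so the 0.30 to 0.55 range above is a sensible starting point.

```python
import torch
from PIL import Image
from diffusers import StableDiffusionInpaintPipeline

# Placeholder inpainting checkpoint; use whichever inpaint model you prefer.
pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting",
    torch_dtype=torch.float16,
).to("cuda")

init_image = Image.open("generated.png").convert("RGB")   # the original render
mask_image = Image.open("hand_mask.png").convert("RGB")   # white = region to redraw

fixed = pipe(
    prompt="a perfect hand with five fingers, detailed skin",
    image=init_image,
    mask_image=mask_image,
    strength=0.45,            # try values roughly between 0.30 and 0.55
    num_inference_steps=30,
).images[0]

fixed.save("fixed_hand.png")
```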
Fooocus
Fooocus is an awesome web UI that makes using Stable Diffusion, especially the powerful SDXL model, a total breeze. Plus, it has what is arguably the best inpainting mode of any Stable Diffusion interface out there.
Here’s the gist: just pop your image into the “inpaint” tab and get ready for the magic. The “enhance” mode is your friend for fixing stuff. Want a perfect hand? Just tell Fooocus with prompts like “realistic hand” or “fully detailed hand.” Play around with the denoising strength between 0.30 and 0.55, and bam! You’ll be a hand-fixing pro, taking your Stable Diffusion creations to superstar status.
ControlNet
Another way to fix hands in Stable Diffusion is ControlNet, an extension that guides the generation process with more than just the text prompt. ControlNet offers various ways to control Stable Diffusion; for fixing hands in generated images, we will composite a reference hand over the bad one and regenerate with a depth-guided pass. Here’s a step-by-step guide:
- Install the ControlNet extension in the Stable Diffusion web UI and pick the control method you want for fixing the hand. In this case, we recommend using “openpose” to estimate the pose of your character.
- Generate the character using Stable Diffusion and observe that the hands might appear distorted or inaccurate. Download the generated image.
- Open the downloaded image in an image editing software that supports layers. We suggest using GIMP as it is a free option, or you can use Photopea if you prefer not to download any software.
- Find a suitable hand pose image to replace the distorted hand. There are multiple ways to obtain one. You can take a screenshot of a 3D hand model from websites like Sketchfab or create your own hand pose in software like Blender. Alternatively, you can search for a hand image on Google or stock sites. The goal is to find a hand image that roughly matches the position of the character’s hand. It doesn’t need to be perfect.
- Once you have the hand pose image, composite it onto the generated image using the image editing software. Align the hand image with the approximate position of the character’s hand. This will involve placing the hand image as a separate layer over the existing hand in the generated image.
- Import the composited image back into the Stable Diffusion web UI.
- Open the ControlNet settings, choose “depth” as the preprocessor, and select the matching depth model as the ControlNet model.
- Proceed to generate the image using the edited composite as the input. You should see a noticeable improvement in the quality and appearance of the hand (a code sketch of this depth-guided pass follows the list below).
- If you wish to preserve other elements of the previously generated image while fixing the hand, you can use the “inpaint” method instead. Import the composited image into the “inpaint” tab of the web UI, paint a mask over the hand area, and set the denoising strength below 0.5. Additionally, set the inpaint area to “only masked” to isolate the hand region. Finally, click “generate” to start the image generation process.
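For those who prefer scripting to the web UI, here is a minimal sketch of the depth-guided pass described above, using diffusers together with the controlnet_aux preprocessors. The model IDs, file name, and prompts are placeholder assumptions; the optional inpaint step is not shown.

```python
import torch
from PIL import Image
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
from controlnet_aux import MidasDetector

# Depth preprocessor and the matching depth ControlNet (placeholder model IDs).
midas = MidasDetector.from_pretrained("lllyasviel/Annotators")
controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-depth", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

# The image you composited in your editor, with the reference hand pasted over the bad one.
composited = Image.open("composited.png").convert("RGB")
depth_map = midas(composited)   # depth map extracted from the composite

result = pipe(
    prompt="portrait of a character, detailed realistic hand",
    negative_prompt="bad hands, distorted fingers",
    image=depth_map,            # the depth map conditions the new generation
    num_inference_steps=30,
).images[0]

result.save("controlnet_fixed.png")
```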
Use Textual Inversion Embeddings
To fix bad, ugly, and distorted hands in Stable Diffusion, we can also use Textual Inversion embeddings. Textual Inversion is a technique for capturing novel concepts from a small number of example images. While it was initially demonstrated with a latent diffusion model, it has since been applied to other variants like Stable Diffusion. By leveraging Textual Inversion, we gain finer control over the images a text-to-image pipeline produces.
Many individuals have trained their own Textual Inversion Embeddings specifically on images of bad hands. To make use of these trained embeddings in Stable Diffusion, follow the steps outlined below:
Download the Textual Inversion Embeddings from the provided list. These embeddings have been trained on examples of bad hands.
Once you have obtained the embeddings, you can incorporate them into Stable Diffusion by adding their trigger words to the negative prompt. They will then be applied whenever you generate an image.
Because these embeddings have been trained on images of bad hands, placing them in the negative prompt steers the model away from those patterns. It’s not a guarantee of perfect anatomy, but it noticeably increases the likelihood of well-formed hands and reduces distorted or undesirable results.
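Here’s a minimal sketch of that workflow with diffusers, under the assumption that you’ve downloaded an embedding to a local file. The file name “bad-hands-5.pt” and its trigger token are placeholders for whatever embedding you actually use.

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
).to("cuda")

# Register the downloaded embedding under a trigger token (placeholder names).
pipe.load_textual_inversion("bad-hands-5.pt", token="bad-hands-5")

image = pipe(
    prompt="portrait of a pianist with both hands on the keys",
    # The trigger token goes in the NEGATIVE prompt, since the embedding
    # was trained on examples of bad hands.
    negative_prompt="bad-hands-5, blurry, lowres",
    num_inference_steps=30,
).images[0]

image.save("pianist.png")
```

In a web UI, the equivalent is simply typing the embedding’s trigger word into the negative prompt box.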