Submitted by minimaxir t3_z733uy in MachineLearning
I just published a blog post with a number of experiments on getting good results out of Stable Diffusion 2.0, showing that negative prompts are the key to working with its new text encoder:
https://minimaxir.com/2022/11/stable-diffusion-negative-prompt/
I also released Colab Notebooks to reproduce the workflow and use the negative embeddings yourself (links are in a comment, since the antispam filter flags posts with too many URLs).
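For anyone unfamiliar with what a negative prompt does at sampling time: under classifier-free guidance, the negative prompt's embedding takes the place of the empty-string unconditional embedding, so each denoising step is pushed away from it. A minimal NumPy sketch of that guidance arithmetic (the arrays here are toy stand-ins for real U-Net noise predictions, not part of the blog post's code):

```python
import numpy as np

def guided_noise(pred_negative, pred_positive, guidance_scale=7.5):
    """Classifier-free guidance step: start from the prediction conditioned
    on the negative (or empty) prompt and move toward the positive prompt,
    scaled by the guidance weight."""
    return pred_negative + guidance_scale * (pred_positive - pred_negative)

# Toy stand-ins for per-latent noise predictions from the U-Net.
pred_pos = np.array([1.0, 0.5, -0.2])
pred_neg = np.array([0.2, 0.4, 0.1])

print(guided_noise(pred_neg, pred_pos))  # → [ 6.2   1.15 -2.15]
```

With `guidance_scale=0` the negative prompt dominates entirely, and larger scales push the sample further from whatever the negative prompt describes.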
sam__izdat t1_iy5dmp7 wrote
You may get generally better results if you remove the nonsense from the embedding, like "too many fingers" and "bad anatomy." It made some people on /r/StableDiffusion very angry, but I ran several comparisons, and they went exactly as expected. Some of the words in the original embedding (e.g. lowres, text, error, blurry, ugly) are probably doing something like what was intended. Most of the rest are a superstitious warding ritual.