@ChipthensfwMonk@lemmynsfw.com

These took a while to get right. The model is a blend of Real Vision 4, LazyMix+, and URPM. The top three were done with a ControlNet (I can't remember if it was Canny or Depth) to aid the pose, and the face was improved with inpainting. The bottom one uses the "openblouse" LoRA, I think. I am finding that DPM++ SDE Karras produces the best results, but I haven't tried them all systematically to prove that.
Have you looked at this LoRA-managing extension? I saw it mentioned in the regional prompting extension's GitHub README.
The extension's summary:

By associating a LoRA's insertion position in the prompt with the "AND" syntax, the LoRA's scope of influence is limited to a specific subprompt.
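If it's the extension I'm thinking of, the idea in practice looks something like this (the LoRA names here are made up for illustration):

```
a misty forest clearing, morning light <lora:forestStyle:0.8>
AND a woman in a red dress, detailed face <lora:characterX:1.0>
```

Without the extension, both LoRAs would apply to the whole image; with it, each LoRA should only influence the subprompt it sits in on either side of the "AND". I haven't verified the exact behavior myself.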
For me, yes. It's been an experiment in generating a foreground and background separately. I'm not sure it's a trend, though I have posted two versions of this idea. Are you noticing it more widely?
With AI-generated stuff, I am trying to make more fantastical things that you generally wouldn't see in the real world.
While I generally agree, there is some "artistry" in crafting prompts that work and in figuring out all of the tricks that get better results. My own prompts have gotten a lot more interesting with custom checkpoint merging and with dynamic and regional prompting; these all took time to learn. I think posting the prompt/sources should be optional, and people should request them if they want them, perhaps in a private message.
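For anyone curious what checkpoint merging actually does under the hood: at its simplest, it's just a weighted average of the models' parameters, key by key. A minimal sketch, with plain Python dicts and floats standing in for the real tensor state dicts (the names and blend weights here are made up):

```python
def merge_checkpoints(checkpoints, weights):
    """Blend models by taking a weighted average of each parameter.

    checkpoints: list of dicts mapping parameter name -> value
    weights: one float per checkpoint; they should sum to 1.0
    """
    merged = {}
    for key in checkpoints[0]:
        merged[key] = sum(w * ckpt[key] for ckpt, w in zip(checkpoints, weights))
    return merged

# Toy example: blend three "models" 50/25/25, the way a webui
# checkpoint merger mixes base models into a custom blend.
model_a = {"layer.weight": 1.0}
model_b = {"layer.weight": 2.0}
model_c = {"layer.weight": 4.0}
blend = merge_checkpoints([model_a, model_b, model_c], [0.5, 0.25, 0.25])
print(blend)  # {'layer.weight': 2.0}
```

Real checkpoints hold tensors rather than single floats, and merge tools offer fancier modes (e.g. add-difference), but the weighted-sum idea is the core of it.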
I understand that "AI prompter" is becoming a job for some people at some companies, and I can now start to see why. As you try to generate more realistic and interesting images, you have to know more about how Stable Diffusion and its many extensions work. It goes pretty deep; I feel like I am only scratching the surface.
I agree about the anime versus realistic stuff. I think posts should have a "realistic", "anime", "artistic", or other tag, and these tags should be defined.