TrueBlueDreamin OP t1_j1y4zjh wrote
Reply to comment by JanssonsFrestelse in [P] I built an API that makes it easy and cheap for developers to build ML-powered apps using Stable Diffusion by TrueBlueDreamin
We can support regularization with your own class images if you'd like, but it's recommended to use model-generated regularization images for prior preservation. You don't want to introduce bias into the model with curated images.
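Roughly, "model-generated regularization images" just means sampling the generic class prompt from the unmodified base checkpoint before fine-tuning, so the class prior is preserved rather than replaced by a hand-picked distribution. A minimal sketch with diffusers (the model ID, prompt, and image count are illustrative, not our API's defaults):

```python
# Sketch: generate regularization/class images from the base model itself
# for prior preservation. Model ID, prompt, and counts are examples only.
import os
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

class_prompt = "a photo of a woman"   # generic class prompt, no subject token
num_class_images = 200                # typical DreamBooth-style range

os.makedirs("reg_images", exist_ok=True)
for i in range(num_class_images):
    image = pipe(class_prompt, num_inference_steps=30, guidance_scale=7.5).images[0]
    image.save(f"reg_images/{i:04d}.png")
```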
We train the text encoder as well, correct.
You should be able to train multiple concepts/subjects, although there is an unsolved problem with concept bleeding when they're used in the same prompt. Shoot me a DM and we can probably figure something out!
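For a sense of what a multi-concept setup looks like, here's a sketch following the per-concept list convention used by some community DreamBooth scripts; the field names and the "sks"/"zwx" tokens are illustrative, not our API's format:

```python
# Sketch of a multi-concept configuration: each subject gets its own rare
# token, instance prompt, and regularization (class) image set.
concepts = [
    {
        "instance_prompt": "a photo of sks woman",
        "class_prompt": "a photo of a woman",
        "instance_data_dir": "data/subject_a",
        "class_data_dir": "reg_images/woman",
    },
    {
        "instance_prompt": "a photo of zwx dog",
        "class_prompt": "a photo of a dog",
        "instance_data_dir": "data/subject_b",
        "class_data_dir": "reg_images/dog",
    },
]
# Bleeding shows up at inference time when both tokens appear in one prompt.
```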
JanssonsFrestelse t1_j1ydl3c wrote
The curated images would be generated by the model being trained, using the same prompt for the reg images as for the subject training images (found via CLIP interrogation, swapping e.g. "a woman" for my subject's token). Not a big deal though; if you can train the 768x768 model I'll try it out. I can't run it locally, and the Colabs for the 768 model have been unreliable. I might write my own later on if the model trained by you shows good quality.
Edit: there's probably not much use in having the exact same prompt, but I'm thinking of something similar to the CLIP caption of the image(s) plus the general style/concept you want to learn. Or do you see any issues with the method I've described?
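Something like this is what I mean, as a rough sketch (using the pharmapsychotic clip-interrogator package; the class word "a woman" and the placeholder token "sks" are just examples):

```python
# Sketch: caption a training image with CLIP interrogation, then swap the
# generic class word for the subject token to build the instance prompt,
# keeping the original caption as the regularization prompt.
from PIL import Image
from clip_interrogator import Config, Interrogator

ci = Interrogator(Config(clip_model_name="ViT-L-14/openai"))

caption = ci.interrogate(Image.open("training_images/subject_01.jpg").convert("RGB"))
# e.g. "a woman standing on a beach at sunset, 35mm photograph, ..."

instance_prompt = caption.replace("a woman", "sks woman")  # subject training prompt
class_prompt = caption                                      # reg-image prompt
```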