[P] Using LoRA to efficiently fine-tune diffusion models. Output model less than 4MB, two times faster to train, with better performance. (Again, with Stable Diffusion) Submitted by cloneofsimo t3_zfkqjh on December 8, 2022 at 1:27 AM in MachineLearning (114 points, 26 comments)
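The small output size comes from LoRA training only a low-rank update to each frozen weight matrix. A minimal sketch of the idea (hypothetical names and shapes, not the submitter's actual implementation):

```python
import numpy as np

class LoRALinear:
    """Sketch of a LoRA-adapted linear layer: y = W x + (alpha/r) * B A x."""

    def __init__(self, d_in, d_out, r=4, alpha=4.0, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.standard_normal((d_out, d_in)) * 0.02  # frozen base weight
        self.A = rng.standard_normal((r, d_in)) * 0.02      # trainable down-projection
        self.B = np.zeros((d_out, r))                       # trainable up-projection, zero-init
        self.scale = alpha / r

    def __call__(self, x):
        # Only A and B are trained; W stays frozen, so only A and B need saving.
        return x @ self.W.T + self.scale * (x @ self.A.T @ self.B.T)

layer = LoRALinear(d_in=768, d_out=768, r=4)
x = np.ones((1, 768))
y = layer(x)
# B is zero-initialized, so the adapter starts as a no-op:
assert np.allclose(y, x @ layer.W.T)
# Trainable params per layer: r*(d_in + d_out) = 6144, vs 589824 for full fine-tuning.
```

Saving only A and B per layer (rather than the full weights) is what keeps the checkpoint in the single-megabyte range.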
ThatInternetGuy t1_izenxjo wrote on December 8, 2022 at 3:41 PM This could be a great middle ground between textual inversion and full-blown Dreambooth. I think it could also benefit from saving the fine-tuned text encoder (about 250MB in half precision). (1 point)
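The ~250MB figure checks out as a back-of-envelope estimate, assuming Stable Diffusion's text encoder is CLIP ViT-L/14's text model with roughly 123M parameters (an assumption, not stated in the comment):

```python
# Rough size of the text encoder in half precision.
params = 123_000_000            # assumed parameter count for the CLIP text encoder
bytes_fp16 = params * 2         # half precision = 2 bytes per parameter
size_mb = bytes_fp16 / 1e6
print(round(size_mb))           # roughly 246, i.e. ~250MB
```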