gkaykck t1_j1v72ay wrote
Reply to comment by Featureless_Bug in [Discussion] 2 discrimination mechanisms that should be provided with powerful generative models e.g. ChatGPT or DALL-E by Exnur0
I think if this is going to be implemented, it has to be at the model level, not as an extra layer on top. Just thinking out loud with my not-so-great ML knowledge: if we mark every image in the training data with some special, static "noise" that is unnoticeable to human eyes, all the generated images will carry the same "noise". That would even cover open source alternatives running on your own cluster. So if this kind of "watermarking" is going to happen, it needs to be baked into the model itself.
As for "why would OpenAI do it": it would be nice for them to be able to track where their generated pictures/content end up, for investors etc. It could also let them "license" the images generated with their models instead of charging per run.
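To make the idea concrete, here is a minimal sketch of what a static, low-amplitude watermark on training images could look like. This is purely illustrative (the function names, seed, and amplitude are my own assumptions, not anything OpenAI does): a fixed pseudo-random ±1 pattern is added at an amplitude too small to notice, and detection correlates an image against that known pattern.

```python
import numpy as np

def embed_watermark(image: np.ndarray, seed: int = 42, amplitude: float = 2.0) -> np.ndarray:
    """Add a fixed, low-amplitude pseudo-random pattern to an 8-bit image.

    The same seed always produces the same pattern, so the mark is "static"
    across every image it is applied to.
    """
    rng = np.random.default_rng(seed)
    pattern = rng.choice([-1.0, 1.0], size=image.shape)
    marked = image.astype(np.float64) + amplitude * pattern
    return np.clip(marked, 0, 255).astype(np.uint8)

def detect_watermark(image: np.ndarray, seed: int = 42) -> float:
    """Correlate the image against the known pattern; a high score suggests the mark."""
    rng = np.random.default_rng(seed)
    pattern = rng.choice([-1.0, 1.0], size=image.shape)
    residual = image.astype(np.float64) - image.mean()
    return float((residual * pattern).mean())
```

Whether such a mark would actually survive the training process and reappear in generated outputs is an open question; real detection schemes would also need to be robust to cropping, resizing, and compression, which this toy version is not.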
gkaykck t1_j1uzq8f wrote
Reply to comment by Featureless_Bug in [Discussion] 2 discrimination mechanisms that should be provided with powerful generative models e.g. ChatGPT or DALL-E by Exnur0
Personally, I'd like to be able to filter out AI generated content from my feeds sometimes.
gkaykck t1_jcrel1c wrote
Reply to comment by BalorNG in [R] ChatGLM-6B - an open source 6.2 billion parameter Eng/Chinese bilingual LLM trained on 1T tokens, supplemented by supervised fine-tuning, feedback bootstrap, and RLHF. Runs on consumer grade GPUs by MysteryInc152
Not cool