Submitted by Wiskkey t3_10vg97m in MachineLearning
PHEEEEELLLLLEEEEP t1_j7hm3td wrote
Top legal mind of reddit
MisterBadger t1_j7idrz1 wrote
>Humans do take inspiration from others' work...
Ugh. This justification is creaky and useless.
Machines take instructions, and have zero inspiration.
Human artists aren't an endless chain of automated digital art factories producing mountains of art "by_Original_Artist".
One unimaginative guy copycatting another more imaginative artist is not going to be able to flood the market overnight with thousands of images that substantially replace the original media creator.
Centurion902 t1_j7ii94j wrote
This doesn't even mean anything unless you define inspiration.
MisterBadger t1_j7jd47m wrote
Nothing means anything if you're unfamiliar with the commonly understood meaning of words.
The dictionary definition of "inspiration":
>the process of being mentally stimulated to do or feel something, especially to do something creative.
Diffusion models are not, and do not have, minds.
Centurion902 t1_j7jqu3d wrote
I see nothing about minds in that definition.
MisterBadger t1_j7jyejy wrote
Is English a second language for you?
Mentally (adverb) - in a manner relating to the mind.
tsujiku t1_j7kj8y0 wrote
Is a "mind" a blob of flesh, or is it the combination of chemical interactions that happen in that blob of flesh?
Could a perfect simulation of those chemical interactions be considered a "mind?"
What about a slightly simplified model?
How far down that path do you have to go before it's no longer considered a "mind?"
You act like there are obvious answers to these questions, but I don't think you would have much luck if you had to get everyone to agree with you.
MisterBadger t1_j7kjls1 wrote
Y'all need to stop stretching definitions of words past the breaking point.
I am not "acting like" anything. I simply understand the vast difference between a human brain and a highly specialized machine learning algorithm.
Diffusion models are not minds and do not have them.
You only need a very basic understanding of machine learning vs. human cognition to be aware of this.
AI ≠ Actual Intelligence;
Stable Diffusion ≠ Sentient Device.
GusPlus t1_j7i4c70 wrote
I feel like the fact that the AI produces images with the Getty Images watermark is pretty decent proof that it copied images.
Ne_Nel t1_j7i4u96 wrote
That's not much smarter than that comment, tbh.
GusPlus t1_j7i5lt3 wrote
I’d like to know how it was trained to produce the GI watermark without copying GI images for training data.
Ne_Nel t1_j7i67yp wrote
What are you talking about? The dataset is open source and there are thousands of Getty images. That isn't the discussion here.
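You can check this yourself from the public metadata. A minimal sketch, assuming the laion/laion2B-en metadata set is still accessible on Hugging Face and exposes the uppercase "URL" column from its published schema:

```python
# Sketch: count Getty-hosted URLs in the LAION-2B-en metadata.
# Assumes the laion/laion2B-en dataset is available on Hugging Face
# and uses the uppercase "URL" column from the public schema.
from datasets import load_dataset

# Streaming avoids downloading hundreds of GB of metadata up front.
ds = load_dataset("laion/laion2B-en", split="train", streaming=True)

getty_hits = 0
for i, row in enumerate(ds):
    if "gettyimages.com" in (row.get("URL") or ""):
        getty_hits += 1
    if i >= 1_000_000:  # check a 1M-row sample, not all 2B rows
        break

print(f"{getty_hits} Getty-hosted URLs in the first 1M rows")
```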
orbital_lemon t1_j7idllq wrote
It saw stock photo watermarks millions of times during training. Nothing else in the training data comes even close. Even at half a bit per training image, that can add up to memorization of a shape.
Apart from the handful of known cases involving images that are duplicated many times in the training data, actual image content can't be reconstructed the same way.
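To put rough numbers on that (these are assumed ballpark figures: ~860M parameters in the SD v1 UNet, a LAION-2B-scale training set), even the most generous upper bound on capacity works out to a handful of bits per image:

```python
# Rough upper bound on how much SD could "store" per training image.
# Figures are approximate public numbers, not exact values.
unet_params = 860e6      # ~860M parameters in the SD v1 UNet
bits_per_param = 32      # fp32 weights: a generous upper bound
training_images = 2e9    # LAION-2B-scale dataset

bits_per_image = unet_params * bits_per_param / training_images
print(f"~{bits_per_image:.1f} bits of capacity per training image")
# -> ~13.8 bits: nowhere near enough to store an image, but a
#    watermark repeated across millions of images effectively
#    pools that budget, which is why its shape can be memorized.
```

The realistic effective figure is far below that bound, hence the half-a-bit estimate above.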
pm_me_your_pay_slips t1_j7l6icx wrote
Note that the VQ-VAE part of the SD model alone can encode and decode arbitrary natural/human-made images pretty well, with very few artifacts. The diffusion model part of SD is learning a distribution of images in that encoded space.
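A minimal round-trip sketch with the diffusers library (the checkpoint name and preprocessing here are assumptions; also, the released SD v1 checkpoints actually ship the KL-regularized variant rather than the VQ one, but the near-lossless round-trip point is the same):

```python
# Sketch: round-trip an image through the SD autoencoder.
import numpy as np
import torch
from PIL import Image
from diffusers import AutoencoderKL

vae = AutoencoderKL.from_pretrained("stabilityai/sd-vae-ft-mse")
vae.eval()

img = Image.open("test.png").convert("RGB").resize((512, 512))
x = torch.from_numpy(np.array(img)).float() / 127.5 - 1.0  # pixels -> [-1, 1]
x = x.permute(2, 0, 1).unsqueeze(0)                        # (1, 3, 512, 512)

with torch.no_grad():
    latents = vae.encode(x).latent_dist.sample()  # (1, 4, 64, 64) latent
    recon = vae.decode(latents).sample            # back to pixel space
# `recon` is a near-perfect reconstruction of `x`: the autoencoder
# compresses images generically, so any memorization question is
# really about the diffusion model operating in this latent space.
```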
orbital_lemon t1_j7lel1d wrote
The diffusion model weights are the part at issue, no? The question is whether you can squeeze infringing content out of the weights to feed to the VAE.
f10101 t1_j7is2sw wrote
They undeniably did copy them for training, which is the allegation. Not even Stability would deny that.
The question is whether doing that is legal. A plain reading of US law suggests to me that it is, but Getty will argue otherwise.