
HateRedditCantQuitit t1_j7l30f2 wrote

I hate Getty as much as anyone, but I'm going to go against the grain and hope they win this. Imagine if instead of Getty vs. Stability, it was ArtStation vs. Facebook or something. The same legal principles must apply.

In my ideal future, we'd have things like:

- research use is free, but commercial use requires opt-in consent from content creators

- the community adopts widely used opt-in open licenses, e.g. copyleft ones (if you use a GPL9000 dataset, the model must be GPL too, or whatever); a rough sketch of how that propagation could work is below.
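To make that concrete, here is a minimal sketch of how a training pipeline could enforce such propagation. Everything here is hypothetical: the license names, the compatibility sets, and the rule that any copyleft dataset forces a copyleft model.

```python
# Hypothetical sketch: propagate dataset licenses to the trained model.
# License names and compatibility rules are illustrative, not real licenses.

COPYLEFT = {"GPL9000", "DataCopyleft-1.0"}   # viral: model must inherit one
PERMISSIVE = {"CC-BY-4.0", "MIT-Data"}       # allow any model license

def required_model_license(dataset_licenses: set[str]) -> str:
    """Return the most restrictive license the trained model must carry."""
    viral = dataset_licenses & COPYLEFT
    if viral:
        # Any copyleft dataset forces the model itself to be copyleft.
        return sorted(viral)[0]
    unknown = dataset_licenses - PERMISSIVE
    if unknown:
        raise ValueError(f"No opt-in license found for: {unknown}")
    return "any"

print(required_model_license({"CC-BY-4.0", "GPL9000"}))  # -> GPL9000
```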

5

JustOneAvailableName t1_j7le6dw wrote

> but commercial use requires opt-in consent from content creators

With opt-in required, you might as well ban commercial use outright.

7

TaXxER t1_j7ojt22 wrote

As much as I like ML, it’s hard to argue that training ML models on data without consent, let alone copyrighted data, is somehow OK.

3

JustOneAvailableName t1_j7oknmi wrote

Copyright is about redistribution, and we're talking about publicly available data. I don't want or need to give consent to specific people or companies to allow them to read this comment. Nor do I think it should now be up to Reddit to decide what is and isn't allowed.

3

TaXxER t1_j7omop6 wrote

Generative models do redistribute, though; they often output near copies:

https://openaccess.thecvf.com/content/WACV2021/papers/Tinsley_This_Face_Does_Not_Exist..._But_It_Might_Be_Yours_WACV_2021_paper.pdf

https://arxiv.org/pdf/2203.07618.pdf

Copyright covers not only republishing but also derived works. I think it is a very reasonable position to treat any generative model output o on which some training-set image X_i had a particularly large influence as a derived work of X_i.
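One way to operationalize "particularly large influence" would be to flag outputs whose embedding sits unusually close to some training image, roughly in the spirit of the replication-detection papers above. A minimal sketch with cosine similarity; the choice of embedding model and the threshold are assumptions:

```python
import numpy as np

def flag_near_copies(output_emb: np.ndarray,
                     train_embs: np.ndarray,
                     threshold: float = 0.95) -> list[int]:
    """Return indices of training images whose embedding is suspiciously
    close to the generated output (cosine similarity above threshold)."""
    # Normalize so dot products are cosine similarities.
    o = output_emb / np.linalg.norm(output_emb)
    t = train_embs / np.linalg.norm(train_embs, axis=1, keepdims=True)
    sims = t @ o
    return np.flatnonzero(sims > threshold).tolist()

# Usage: embeddings could come from any image encoder (e.g. a CLIP-style model).
rng = np.random.default_rng(0)
train = rng.normal(size=(1000, 512))
out = train[42] + 0.01 * rng.normal(size=512)  # output nearly copies image 42
print(flag_near_copies(out, train))            # -> [42]
```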

A similar story holds for code generation models and software licensing: Copilot was trained on lots of repos whose licenses require all derived work to be licensed at least as permissively. Copilot may well output a specific code snippet based largely on what it has seen in one particular repo, potentially binding the user to the licensing constraints that come with deriving work from that repo.
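In the code setting, the analogous check would be scanning generated snippets for verbatim or near-verbatim overlap with a licensed corpus. A toy sketch using token n-gram overlap; the corpus format, tokenization, and window size are all assumptions:

```python
def ngrams(code: str, n: int = 8) -> set[tuple[str, ...]]:
    """Token n-grams of a code snippet (whitespace tokenization for brevity)."""
    toks = code.split()
    return {tuple(toks[i:i + n]) for i in range(len(toks) - n + 1)}

def licenses_implicated(snippet: str,
                        corpus: dict[str, tuple[str, str]],
                        n: int = 8) -> set[str]:
    """Return licenses of corpus repos sharing any n-gram with the snippet.

    corpus maps repo name -> (source code, license)."""
    snip = ngrams(snippet, n)
    return {lic for src, lic in corpus.values() if snip & ngrams(src, n)}

corpus = {"some-gpl-repo": ("for i in range ( len ( xs ) )", "GPL-3.0")}
print(licenses_implicated("total = 0 for i in range ( len ( xs ) )",
                          corpus, n=4))  # -> {'GPL-3.0'}
```

A real detector would need something sturdier than whitespace n-grams (hashing, normalization, AST-level matching), but the liability question it probes is the same.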

I’m an applied industry ML researcher myself, and I'm very enthusiastic about the technology and the state of ML. But I also think that, as a field, we have unfortunately been careless about the ethical and legal aspects.

2

scottyLogJobs t1_j7llrrl wrote

Why? Compare the top two images. It's a demonstration that they trained on Getty images, but there's no way anyone could argue that the nightmare fuel on the right deprives Getty of any money. Do you remember when Getty sued Google Images and won? Sure, Google is powerful and makes plenty of money, but image search is now far worse for consumers than it was a decade ago: you can't just open the image, or even a link to it; you have to follow it back to the source page and dig around, probably never finding it at all. It's ridiculous that effectively embedding a link isn't considered fair use; you'd still need to pay to use a Getty image anyway 🤷‍♂️

Setting aside the fact that Getty is super hypocritical, constantly violates copyright law itself, and then uses its litigators to push around smaller groups: if they win, it will be another step toward only the big companies having access to data, making it impossible for smaller players to compete.

People fighting against technological advancement and innovation are always on the wrong side of history. There will always be a need for physical artists, digital artists, photographers, etc., because the value of art is incredibly subjective (the value is generated by the artist, not the art), and client needs are so specific, detailed, and iterative that an AI can't meet them.

Instead of seeing this tool as an opportunity for artists, they fight hopelessly against innovation and throw in their lot with huge bully companies like Getty Images.

4