duboispourlhiver

duboispourlhiver t1_jbwmn0u wrote

The computer doesn't compute all the moves and doesn't know the exact mathematically best move. It uses digital neurons to infer rules from a huge number of games and find very good moves. I call this intelligence (artificial intelligence)

11

duboispourlhiver t1_jbwmh2g wrote

We are often using neural networks whose training is finished. The weights are fixed for this attack to work. This is obvious, but I would like to underline the fact that biological neural networks are never fixed.

8

duboispourlhiver t1_ja3xgph wrote

Yeah, let meta pay for the salaries of top lm scientists, that's the most important thing. Those scientists publish papers, sometimes even code or parameters. And eventually they leave meta and use their skill in more open ways.

It's like the fundamental paper about deep learning that was published by Google scientists. The fact they worked at Google turned out to be pretty anecdotal after a few years.

8

duboispourlhiver t1_j9tgc8p wrote

I agree and I've been trying to find counterarguments to this practical problem, but yet I have found none serious. If anything has any idea why this could be false, please discuss!

The best counterargument I have found so far is that there could be programs able to detect if an image is AI generated. I had studied this point some weeks ago and I don't think such programs will exist.

1

duboispourlhiver t1_j9tfsp3 wrote

Thank you for this long and interesting point of view.

I think that without copyright, creative work can still be a source of income thanks to work for hire and crowdfunding. I've aligned my actions with my anti copyright beliefs for years and am only getting money in the form of work for hire. I feel more relaxed this way. But other opinions and ways of life are completely ok.

2

duboispourlhiver t1_j9piugz wrote

>The decision goes pretty deep into whether prompts or subsequent editing are sufficient to qualify the images as creative, concluding that they aren't.

They decided that prompts are not sufficient, but subsequent editing can be. See page 9 of the document for an exemple of minor subsequent change not representing authoring work, and page 10 for this important paragraph :

>Based on Ms. Kashtanova’s description, the Office cannot determine what expression in the image was contributed through her use of Photoshop as opposed to generated by Midjourney.
She suggests that Photoshop was used to modify an intermediate image by Midjourney to “show[] aging of the face,” but it is unclear whether she manually edited the youthful face in a previous intermediate image, created a composite image using a previously generated image of an older woman, or did something else. To the extent that Ms. Kashtanova made substantive edits to an intermediate image generated by Midjourney, those edits could provide human authorship and would not be excluded from the new registration certificate.

So, USCO clearly states that substantive edits to an image generated by AI can create copyrightability.

9

duboispourlhiver t1_j9m8bh6 wrote

I think we need to distinguish between the rules the developers try to enforce (like the BingGPT written rules that leaked : don't disclose Sydney, etc) and the rules that the weights of the model constitute.

The AI can't work around the model's weights, but it has already worked around the developers rules, or at least walked around.

10

duboispourlhiver t1_j9ji6qe wrote

Yes, the risk is to be over fitted for this test. I've read that too about that paper but haven't taken the time to make my own opinion. I think it's impossible to judge if this benchmark is telling or not about the model's quality without studying this for hours

18