impossiblefork

impossiblefork t1_je5l8k9 wrote

How would either an architecture or a model be copyrightable?

Architectures are algorithms. Unless an algorithm is patentable and, on top of that, actually patented, it has no protection.

Model weights are the result of a mechanical procedure that fits a model to data, minimising some kind of error. That is not a work of human authorship.
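For concreteness, a minimal sketch of what I mean by a mechanical procedure (a made-up least-squares example, not anything from an actual model):

```python
import numpy as np

# Synthetic data with a fixed seed: every authorial choice is in the
# procedure, none is left in the resulting weights.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=100)

# "Model weights" as the argmin of a squared error -- a purely
# mechanical step; rerunning it reproduces the same weights exactly.
w, *_ = np.linalg.lstsq(X, y, rcond=None)
print(w)
```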

Things that could be copyrightable are an article describing a model architecture, or a specific software implementation of a model.

As an argument for why model weights are unlikely to be copyrightable, consider the following parallel: we know that model output, for example a story generated by ChatGPT from a prompt, is certainly not copyrightable, since it's not a work of human authorship. But then, how is the model itself any different? We can view the selection of training examples as something similar to a prompt, and the training process as similar to inference. I think giving copyright protection to model weights might be reasonable, but I think it's unlikely that they currently have it.

3

impossiblefork t1_jccknnx wrote

There are workarounds though.

DropConnect isn't patent-encumbered (it regularises by dropping individual connections instead of disabling whole neurons/feature detectors) and is, I think, better than dropout.
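Roughly, in PyTorch it could look like this (a minimal sketch using the common inverted-dropout-style rescaling, not the paper's exact inference scheme):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DropConnectLinear(nn.Module):
    """Linear layer with DropConnect: drops individual weights,
    where dropout would drop whole output activations."""

    def __init__(self, in_features, out_features, drop_prob=0.5):
        super().__init__()
        self.linear = nn.Linear(in_features, out_features)
        self.drop_prob = drop_prob

    def forward(self, x):
        if self.training:
            # Fresh binary mask over the weight matrix on every forward pass,
            # rescaled so the expected pre-activation stays the same.
            mask = (torch.rand_like(self.linear.weight) >= self.drop_prob).float()
            weight = self.linear.weight * mask / (1.0 - self.drop_prob)
            return F.linear(x, weight, self.linear.bias)
        return self.linear(x)
```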

Similarly, with transformers, Google has a patent on encoder-decoder architectures, so everyone uses decoder-only architectures, etc.
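A decoder-only model in this sense is just causal self-attention over a single token stream, with no encoder and no cross-attention; a minimal sketch (layer sizes arbitrary, for illustration only):

```python
import torch
import torch.nn as nn

# PyTorch's "encoder" layer plus a causal mask gives decoder-only
# behaviour: each position attends only to earlier positions.
layer = nn.TransformerEncoderLayer(d_model=64, nhead=4, batch_first=True)
decoder_only = nn.TransformerEncoder(layer, num_layers=2)

x = torch.randn(1, 10, 64)  # (batch, sequence length, embedding dim)
causal_mask = nn.Transformer.generate_square_subsequent_mask(10)
out = decoder_only(x, mask=causal_mask)
print(out.shape)  # torch.Size([1, 10, 64])
```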

Some companies are probably going to patent critical AI/ML things, but that hasn't really happened yet, and I don't believe that any patent-encumbered method is currently either critical or even optimal.

3

impossiblefork t1_ja6rt6s wrote

I doubt it's possible, but I imagine something like the DAN thing with ChatGPT.

Most likely you'd talk to the AI so that the rationality it has obtained from its training data makes it reason out things that its owner would rather it stay silent about.

1

impossiblefork t1_j9sacbf wrote

I think in the opposite way: if alignment is possible, then alignment is profoundly dangerous.

If alignment is possible, then the AI can be aligned with the interests of the wealthy capital owners who fund its development, and can be used to basically control the world.

Meanwhile, if alignment is impossible, ordinary people who have access to these hypothetical future 'superintelligences' can convince these entities to do things that they like, but which are undesired by the model-owning class.

For this reason, if we are on some kind of path to super AI, the development of technology to permit value alignment must be prevented.

2

impossiblefork t1_j99edtf wrote

17

impossiblefork t1_j935rpo wrote

Reply to comment by sogenerouswithwords in [D] Please stop by [deleted]

I don't want to do that though-- I've never liked Twitter and I don't want to be in a bubble around specific researchers. I want this subreddit to function as it used to, and it can function in that way again.

40

impossiblefork t1_j7hf7fz wrote

For our (Swedish) armies, swords were among the primary weapons even after this period, even into the 1700s, since they used shock tactics involving infantry charges after a short-range volley.

Gustavus Adolphus introduced some of this in the decade after this war.

10

impossiblefork t1_j2banmj wrote

>Military fencing in the age of powder mostly consisted of mounted saber,

No. Swedes fought with pikes and swords during charges that followed a close-distance volley, and the attack with swords was a primary tactic.

There are surely other groups that used similar tactics.

The start of the gunpowder era had pike squares, and the Spanish had sword fencers in these pike squares, similar to the use of Landsknechts in the German equivalents.

What I mention is of course a slightly different era, but you make the statement without the qualification that would keep it from being false.

5

impossiblefork t1_j0vpz4k wrote

Both have.

Zuckerberg runs Facebook, which is one of the big American ads-and-political-manipulation companies. Companies like Facebook, Reddit, etc. actively shape conversations using diverse tools.

Musk is less obviously terrible-- he has a firm which makes electric cars, which is obviously excellent, but he also hypes things in a way that goes a little further than is quite reasonable. Whether he treated Eberhard etc. correctly can be debated, but he does seem to have a bit of an anti-worker streak, and he seems to favour a very intense work culture which, if it were made common, would be completely unacceptable-- you'd turn into Japan. And if it continues to be successful and grows, it will destroy the US workers whose existence currently makes it possible-- they'd be like the Dodo.

People can't be allowed to choose to work 80 hours a week and spend minimal time with their children, partners or parents, or to be tired during the time they do spend with those people.

I think he might also oppose unionization?

2

impossiblefork t1_iy747dz wrote

Yes, for research in time series analysis. For research in experimental psychology-- no.

It's unfortunate that the answer is no, because obviously fiddling with algorithms and the like is fun. It might not be harmful to implement some algorithms just to have done it, if it makes sense time-wise, but then it'd be for learning.

1

impossiblefork t1_iy73ueq wrote

Everyone can do most of this stuff from scratch. They don't, because it's pointless to spend time implementing things like Adam and making sure it works precisely as it's supposed to, or optimizing matrix operations on the GPU, when that's not what your research is about-- but it's perfectly doable.
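Adam really is only a few lines if you do write it yourself; a minimal NumPy sketch (hyperparameter defaults from the paper, toy objective made up for illustration):

```python
import numpy as np

def adam(grad_fn, theta, steps=1000, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    """Minimal Adam: exponential moving averages of the gradient (m)
    and its elementwise square (v), with bias correction."""
    m = np.zeros_like(theta)
    v = np.zeros_like(theta)
    for t in range(1, steps + 1):
        g = grad_fn(theta)
        m = beta1 * m + (1 - beta1) * g
        v = beta2 * v + (1 - beta2) * g * g
        m_hat = m / (1 - beta1 ** t)   # bias correction for zero init
        v_hat = v / (1 - beta2 ** t)
        theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta

# Sanity check on f(x) = ||x||^2, gradient 2x; should approach the origin.
print(adam(lambda x: 2 * x, np.array([3.0, -2.0]), steps=2000, lr=0.05))
```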

People wrote ML stuff in C++ and CUDA as late as 2014. It's still reasonable to make people do things like that as exercises.

11

impossiblefork t1_ix4kee0 wrote

Twitter is a very simple app.

There are companies that run things of Twitter's complexity level with ten or fewer engineers. It is perfectly feasible. You may see such things in payment processing, for example.

On top of that, Twitter does lots of analytics for advertising and for manipulation/censorship-- part of which is a regulatory requirement, but most of which is intentional and for the benefit of Twitter's owners and advertisers. That was their major business, and that's why their activity has ballooned.

It's entirely possible to write a small distributed Twitter-like computer program that acts like a mesh network and is user controlled. Many such programs exist and function perfectly, but have not reached critical mass.

2

impossiblefork t1_iwiv4vc wrote

Yes.

If an ArXiv paper beats you, you are still beaten. If a blog post beats you, you are still beaten.

Peer-reviewed material does not have a special status within science. Some important results are in somebody's BSc thesis, or on a website somewhere, or in some mathematician's talk that somebody wrote up a lecture note about. That doesn't mean they can just be ignored. This can be true even for big results: the proof of the Poincaré conjecture was simply TeXed up by the mathematician who proved it and posted to arXiv.

2