natepriv22

natepriv22 t1_jeayzdd wrote

I guess it should be easier to come up with new ways of making money in an AI world than to come up with a completely different economic system, or to bend economic reality to fit a subjective, imaginary worldview and an unrealistic ideal of what socialism or communism should be and how it should work.

In the same way it was almost impossible for someone working in agriculture hundreds if not thousands of years ago to imagine today's software jobs, it's probably just as difficult for us to figure out what jobs will look like 5, 10, 20, or 30 years from now.

2

natepriv22 t1_jeatf4l wrote

Money is mainly used as a medium of exchange.

It's unlikely that things will be "free" in the sense of no exchange of value at all, because for a transaction or exchange to take place between two or more parties, each of them needs to believe there is a personal or community benefit from the exchange.

For this reason, some form of money will still exist, so that exchanges can properly be performed between parties.

2

natepriv22 t1_jeagx0r wrote

Demand is based on infinite wants and desires (plus values, needs, and utility), whether physical, abstract, or both. Demand can be influenced by grounded or imaginary wants and desires.

That's on the larger and broader scale; on the smaller scale, demand can be influenced by any external or internal stimulus, which in turn moves the broader scale.

Example: a student sees that a friend has a nice pen, and that creates a desire to get the same pen themselves.

Labor influences prices, but it does not determine the value of a good. Labor can influence what people demand, but it doesn't create demand itself.

If labor and demand are not separable, as you say, then do unemployed people, children, and old people have no wants, desires, or needs?

Humans will always "demand" whether they are working or not. The demand will change, but it will not fundamentally disappear. If AI and robots make everything, we would still want to have light and hot water in our homes.

Now you might say, as others on this sub have, "but what if everything can be made instantly by AI?" The law of supply and demand states that each influences the other and that neither can exist without the other. Therefore, demand will scale proportionately with supply. If AI can create anything we can currently imagine, then our imagination will extend beyond that. "But what if our imagination cannot stretch beyond AI?" Then we will demand that our imagination be expanded, maybe by merging with AI.

2

natepriv22 t1_jea00nf wrote

That is only true if you base your understanding of economics on the labor theory of value, a theory that has been widely regarded as refuted for over a century now.

Our economy is not purely based on human labor like you and Marx claim.

It's based on demand and supply. You can totally have a capitalist model that doesn't involve humans as workers. They could instead be investors and shareholders.

2

natepriv22 t1_jd2245c wrote

https://foreignpolicy.com/2012/02/27/were-all-the-1-percent/

There are so many factors no one in this subreddit is considering, tbh. For example, it's somewhat ironic because the wealth of most American citizens puts them comfortably in the top 10% of the world, and at least 10% of Americans likely fit in the global 1%. This is a very American- or Western-centric post, and so are the comments. There's nothing wrong with that, except that it's heavily biased and makes much of the purpose of this discussion moot.

Not to mention the fact that for most people in the world, the barrier to entry into wealth is actually authoritarian governments that limit or ban a market approach, never giving the "common person" the opportunity to rise above the poverty line. A lot of countries also never get the opportunity to industrialize properly and remain agriculture-based economies, which are understandably poorer than industrialized or service-based economies.

Did you know that China's middle class represents more than 50% of the population? It's quite ironic, again, that people are always shouting "class war" when the majority is the middle class, which sits in between rich and poor. In most Western countries, if not all, the middle class is 50%+. Considering that China's middle class grew from about 3% of the population to over 50% by 2018, it's not hard to imagine that this is a global movement, and that the same is likely to happen in other countries such as India, and in Africa and Latin America more generally.

https://chinapower.csis.org/china-middle-class/

But you've forgotten the element that is most important: historical wealth. If you line up today's 8 billion people against everyone who has ever lived, we would own more than 99% of global historical wealth. Yes, today, all 8 billion people are in the historical 1%.

2

natepriv22 t1_ja2gxek wrote

That's if you take nuclear fusion in a vacuum. But if we are assuming that other tech advances as well, spillover effects will change everything.

AI is already helping with nuclear fusion right now. Also don't underestimate how much we will be willing to spend to fight climate change.

I'm referring here to Ray Kurzweil's Law of Accelerating Returns.

7

natepriv22 t1_j6jb8rn wrote

Absolutely not! Of course doing nothing and waiting around would definitely be defeatist, and probably the worst idea here.

But it's a good idea to start preparing and doing research. Game out future scenarios, so that you can better anticipate or at least adapt to change when it happens.

Best not to specialize or become an expert in just one field or industry; the chance of it becoming automated is very high. If someone instead gains knowledge and experience in many fields, they will be more adaptable and ready for change. Learn always!

3

natepriv22 t1_j64prx8 wrote

Uh no. If a company develops AGI, it will become the most important company in history.

If you can't imagine what an actual AGI would be like and what their effect on society would be (nobody can accurately predict that of course), then you cannot make this claim about profits.

What if the AGI decides it likes OpenAI and that's the company that should get the first sci-fi-level fusion reactors? When talking about AGI, you just cannot seriously make this kind of prediction, imo.

1

natepriv22 t1_j5xrs8o wrote

>Human beings are not ai, I don't think that the two can just be compared.

They absolutely can be compared, though: they are two forms of intelligence, and one of them is built on the principles of the other.

>A human being being influenced by another artist is not the same as an ai, and a human being can't copy another artist as accurately, broadly and quickly as an ai can.

It's not exactly the same, sure, but it's broadly similar. You don't store 100% of the info you learn and see, because it would be too much data. So you remember processes, rules, and outcomes much better, just like an AI does.

>Even if you practice Van Goghs work your entire life your work will never actually look like his there will always be noticeable differences. There's a lot of artists who even do try to directly copy other artists styles and it's always very apparent and like a worse copycat.

I mean, the average person, and I'm pretty sure both of us too, would not be able to distinguish the original from the copy unless we had more info. You can take a simple test online; let's see if you manage to distinguish the two. If you do get a high score, then congrats! You are better at spotting copied art than the average human.

Furthermore, what you're describing is exactly how AI works. Unless you use an img2img model, which is not how the majority of AI art is made, it is close to impossible for the model to produce the exact same output, just like for a human. Again, you could test this right now: go to an AI art app like Midjourney or Stable Diffusion, type in "Van Gogh Starry Night", and see what outputs you get.
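
For what it's worth, here's a minimal sketch of that experiment using the Hugging Face diffusers library (the model ID, prompt, and seeds are only illustrative assumptions, and it assumes a CUDA GPU). Each seed yields a different image in a Starry Night-like style rather than a reproduction of the painting:

```python
# A rough sketch, assuming the `diffusers` and `torch` packages and a CUDA GPU;
# the model ID, prompt, and seeds are only illustrative.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
).to("cuda")

prompt = "Van Gogh Starry Night"

# Different random seeds give different images in a similar style,
# rather than reproducing the original painting.
for seed in (0, 1, 2):
    generator = torch.Generator(device="cuda").manual_seed(seed)
    image = pipe(prompt, generator=generator).images[0]
    image.save(f"starry_night_variation_{seed}.png")
```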

>it can be fed with an artists work and spit out finished illustrations in that style in seconds.

First of all, not exactly: as I've said before, the model never contains the original input, so it's only learning the process, like a human.

Second of all, you can do the same thing! It'll just take you more time. Say a friend gives you 100 pictures of a new art style called "circly", which is art made purely with circles. He gives you days, weeks, or months, however long you need, to produce something in this new style. He wants a picture of New York made only with circles. So you learn the style and create the new drawing or painting for him. You did almost exactly what an AI does, except it took you longer, which is normal for a human being.

>What is the point of hiring the artist who's work was input into the ai for it to learn from it?

What is the point of hiring a horse-carriage driver when the concept of how a carriage works was used to create the "evil car"?

First, this is a loaded and emotional question. All kinds of art were used without discrimination; no one was specifically targeted.

Secondly, again, the model will not be able to output the same thing. It can draw in the same style, but the output will not be identical; it mathematically just won't be. So there is still economic value in the original work.

If a process or job can be automated and there is a benefit for humanity, why should we stop this development? Where were you when the horse carriage was being replaced? Where are you now that fast food jobs are being automated too? Why is it okay for others but not for you? And if it's okay for no one, do you think we should regress and go back to the past?

>Not to mention that it also competes them out of their own search tag,

I have literally never met a person who searches for an artist's work outside of their official channels. Even if some do, that's a marketing challenge. But then what's the difference from popular artists who were already being flooded with knock-offs from Fiverr?

A style isn't copyrightable, btw, and thank gosh for that. So if they say they're getting flooded with "copies of their style", that framing is misleading: it's not their style, it's the style they use and maybe even discovered, but they have no copyright claim over it. Imagine a world where Disney could copyright cartoonish drawing styles... or DC comic styles... is that what you want?

1

natepriv22 t1_j5qmutp wrote

It does though...

It's not scraping writing; it's learning the nuances, rules, and probabilities of it, much the same way a human would.

The equivalent example would be a teacher telling you to "write a compare and contrast paragraph about x topic". The process of drawing on existing understanding, knowledge, and experience is, on a general level, very similar to how current LLMs work. There's a reason they are called neural networks... what do you think they are currently modeled after?
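
To make "learning the probabilities" a bit more concrete, here's a toy sketch of the idea (real LLMs learn these statistics with neural network weights rather than by counting, and the corpus here is just a made-up example):

```python
# Toy sketch: estimate next-word probabilities by counting word pairs.
# Real LLMs learn this kind of statistic with neural network weights instead
# of storing the training text, but the idea of modeling "which word is
# likely to come next" is the same.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat . the dog sat on the rug ."
words = corpus.split()

counts = defaultdict(Counter)
for prev, nxt in zip(words, words[1:]):
    counts[prev][nxt] += 1

def next_word_probs(word):
    total = sum(counts[word].values())
    return {nxt: c / total for nxt, c in counts[word].items()}

print(next_word_probs("the"))  # {'cat': 0.25, 'mat': 0.25, 'dog': 0.25, 'rug': 0.25}
print(next_word_probs("sat"))  # {'on': 1.0}
```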

0

natepriv22 t1_j5o7jzl wrote

Just to add:

In case I made it really confusing by being all over the place:

Style = like math, discovered

Art product or output = like an idea, invented

Creativity combines the use of a style to produce a product or output that expresses something. Without the product or output, what can a style express?

Imagine trying to explain Van Gogh's style without his product or output. It would be very mathematical and scientific: turbulent lines + bright colors + reduced clarity.

2

natepriv22 t1_j5o72qn wrote

No, the final output is what's copyrighted. It's impossible to copyright a style because it's too abstract.

Example: Disney copyrights drawings of Mickey Mouse. Mickey Mouse is a character that resembles a mouse, walks upright, has little mouse ears, a boopy nose, red pants, and yellow shoes.

This is a character that Disney came up with and which is unique. If someone were to draw something according to these exact specifications, it is very likely they would end up with a drawing closely or almost completely resembling Mickey Mouse. By trying to redistribute something so obviously similar, you are in danger of breaching someone's copyright.

On the other hand, a style could be cartoons, or, to make it as simple as possible, drawing only with circles.

While you may have been the first to use a style, you have no copyright claim over it. It's a very abstract thing, and it's further removed from the artist. The style is a medium used to produce a creation; it's more like a tool, not the ultimate product. If you and Disney both started drawing with circles, you would ultimately arrive at very different products, no matter how similar the goal may be (draw a mouse using only circles).

In other words, styles are almost mathematical arrangements of colors, movements, dots, etc. You use this mathematical formula to produce, say, a character. The character is unique; it's very likely only you could have come up with it. The style, by contrast, is very likely to be discovered by other people as well. Trying to copyright a style would be like trying to copyright a math formula.

TLDR: sorry for the messy writing, but I was trying to pull all my thoughts together in one place. For these reasons, AI can never truly plagiarize or infringe copyright on its own. Styles are non-copyrightable, and style is almost exclusively what matters to the AI: arranging math to try to satisfy your desired output. Unless it has a reference point, it will pretty much never arrive at the same result you did.

Extra: imagine a world where style is copyrighted instead of just the product or output. It would be the destruction of creativity and art. Imagine if Disney were able to cleverly copyright a cartoon or 3D cartoon style. They would be the only ones able to create cartoons and 3D animation in the industry, gatekeeping and locking everyone else out under threat of lawsuits.

Now that would be a true dystopia...

2

natepriv22 t1_j5ntkrw wrote

No, they just want to use these models for their own profit, while making fan art generation or creation illegal.

They know they can't stop their pictures from being used for learning, because the pictures are publicly available. There's legal precedent for this.

What they care about is that you can generate an Iron Man-style picture and post it online without licensing the character from them.

What's ironic is that this lawsuit will likely fail anyway, even with corporate backing: as I just mentioned, the model can't generate exact copies of pictures, only "style-like" pictures.

2

natepriv22 t1_j5ntbw0 wrote

No, that's basically not how any of these AIs work. You don't understand how machine learning works. Please stop spreading misinformation and do some research first.

If the AI is plagiarizing then so are you in writing your comment, as you sure as heck didn't just learn to write out of the blue.

The model never contains the original text; can you imagine how huge it would be if it did? Nobody would be able to run it, and definitely nobody would have enough money to access it. What the model stores is a set of learned weights encoding statistical patterns of language, and it generates text by predicting, word by word, the most likely continuation of the prompt.

So it's literally not possible for it to commit plagiarism, because it doesn't contain the original text. For it to be accidental plagiarism, it would have to accidentally generate the exact same output with no memory of the original input, only a learned sense of how to continue a prompt with comprehensible text.

To put it another way, that would be like you writing a paragraph that is a word-for-word copy of someone else's paragraph without ever having any memory of that paragraph, only a vague sense of how to string words together into comprehensible text. The chances are slim to next to mathematically impossible.
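
As a rough illustration, here's a sketch assuming the Hugging Face transformers package; the small open GPT-2 model stands in for ChatGPT/GPT-3 (which aren't downloadable), and the prompt is a placeholder. The point is that the output is sampled word by word from learned probabilities, not retrieved from stored text:

```python
# A rough sketch, assuming the Hugging Face `transformers` package; the small
# open GPT-2 model stands in for ChatGPT/GPT-3, which aren't downloadable.
# Its ~500 MB of weights encode statistical patterns, not its training text.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

result = generator(
    "The best budget laptops this year are",  # placeholder prompt
    max_new_tokens=40,
    do_sample=True,     # sample from the learned probability distribution
    temperature=0.8,
)
print(result[0]["generated_text"])  # varies every run; nothing is retrieved from a database
```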

Furthermore, almost none of these models have access to the internet, especially not ChatGPT or GPT-3. It's explicitly stated that the training data cutoff is 2021, so they haven't even been trained on newer articles.

The most likely explanation, therefore, is that CNET employees were really lazy or naive and literally copied and pasted the other articles' text into ChatGPT or GPT-3, then wrote simple prompts such as "reword this for me". That's the true issue. I suspect this is the case because I've tried to reword text a few times with ChatGPT, and sometimes it just doesn't manage to remix the text without making it sound too similar to the original. This only happens when I feed it the text word for word and use a very lazy prompt. When I write a more thoughtful prompt, it summarizes the text and avoids copying it, just like a human would if asked to summarize a text.

So that's what's going on here, not something else. Knowing Reddit, even with this explanation it's unlikely that people are going to believe me or be willing to do their own research. If you want to prove me wrong, here's a challenge: make it generate an article about anything you like, then copy and paste chunks of that article into Google search and see how many exact matches come up.
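
If you'd rather check overlap against a specific source text instead of Google, here's a rough sketch of the same idea (the two strings are placeholders you'd fill in yourself):

```python
# Sketch of a verbatim-overlap check: count how many long word sequences
# (8-grams here) from a generated article also appear word for word in a
# suspected source. Both strings are placeholders you would fill in yourself.
def ngrams(text, n=8):
    words = text.lower().split()
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

generated_article = "...paste the AI-generated article here..."
suspected_source = "...paste the suspected source article here..."

shared = ngrams(generated_article) & ngrams(suspected_source)
print(f"{len(shared)} identical 8-word sequences found")
for phrase in sorted(shared):
    print(" -", phrase)
```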

4