Comments

skwww t1_j9w88rs wrote

Was this written by an ai?

What’s with all the jumping around with the topics?

227

danvalour t1_j9wy237 wrote

According to https://gptzero.me/

It was entirely written by AI

208

skwww t1_j9x37j6 wrote

Nice, there's something off about what AI writes.

Edit: I’m not certain how legitimate or accurate the tool being used to detect it is, but this blog post is written so strangely that it feels like it was generated through a script.

62

makesyoudownvote t1_j9xhgc8 wrote

For now. AI keeps getting better at imitating people, while people keep getting closer and closer to imitating cheaper older AI.

62

Rayqson t1_j9yd09o wrote

For real. People don't understand how alarmingly quickly AI is going to grow, and quote me on this because it IS going to happen: people are going to lose their jobs to AI and robots because they can learn much faster, plus they can keep running 24/7. CEOs WILL choose robots over humans, all in the name of profit. And it IS already happening as we speak. For example, an estimated 20 million manufacturing jobs are projected to be lost to automation by 2030.

Nvidia has said that in the next 10 years, AI is going to be a million times more advanced than it is now, and with supercomputers this is going to be even worse.

AI needs regulation, and human life is in serious danger. And I don't mean in the way of rebelling AI robots, no. This is going to be a slow, structural decline of the society we've built so far. First, it's the manual labor folks. Then, once we can automate and teach AI to manage data entry/office jobs, it's the white collar folks.

And they're not gonna compensate these folks. They don't care. During earlier waves of automation nobody got anything. You just got fired, and that was it.

You can "nah, AI isn't growing that quick, besides it's not usable right now, it's so inefficient" me all you want, but go tell that to computers. Tell that to the internet. Tell that to mobile phones. They ALL got the same comments in the beginning, and look at where we are now.

Even Stephen Hawking warned us about it before he died. We need to regulate this because it is structurally endangering humanity, where only the elite who own companies are going to be left. (Even though I won't be surprised if this causes a serious civil war against the rich once they've claimed all wealth for themselves. Think full on raids to kill people like Elon or Bezos.)

Stephen Hawking also specifically stated it's either the best thing or the worst thing that's ever going to happen to us. But if we keep valuing money over people like we are now, it IS going to be the worst thing.

35

Daedalus277 t1_j9zdmjs wrote

> "First, it's the manual labor folks. Then, once we can automate and learn AI how to manage data entry/office jobs, it's the white collar folks."

I personally think it's the other way around. AI already exists virtually, whereas to replace all construction/engineering/trade jobs you'd need precise and incredibly versatile robots. Those robots are already underway but still far from replacing humans. Data entry/office jobs wouldn't be hard to automate with AI. Coding and admin work are already being taken over, for example.

14

roscoelee t1_j9ygojt wrote

What is going to happen when we've automated everyone's jobs away to AI and there is no one left working, with any money, to buy the products whose entire production process and supply line have been automated?

11

Drawmeomg t1_j9yhwev wrote

When it’s literally every job, who knows? Cultural realignment.

For real world examples of what happens to workers when large industries are automated to the point where whole communities are no longer needed, look at former steelworking communities in the Rust Belt in the US. Brain drain, people who can move away do, people who can’t end up dependent on government assistance, skyrocketing drug abuse and general despair.

14

Judgethunder t1_j9yrz2c wrote

The difference from previous automations in textiles and transportation is that those actually created more jobs than they replaced.

What we are talking about here is potentially eliminating ALL jobs besides owning capital.

3

ReptileCultist t1_j9z6t8m wrote

The question is why automation should be different this time than before.

2

Judgethunder t1_j9zbi6p wrote

Because an artificial intelligence is not the same thing as a railroad or a textile machine.

The assumption that you should be questioning is why it should be the same.

7

Feynnehrun t1_j9zfdhl wrote

Because, when a single industry automates a process, there are other places those workers can go after retraining. It certainly sucks for them but society is minimally impacted. When labor becomes a thing of the past, we still need to trade for and acquire goods. It would make zero sense to have a fully autonomous society that produces everything we need, but nobody is able to acquire those things because there are no jobs. Likely this would translate into a universal income.

3

Mintfriction t1_j9yskdz wrote

That's actually the premise of communism

Marx saw the massive technological strides happening in his lifetime, so the question was what would happen when efficiency due to machines makes the worker either unnecessary or easy to replace. Who will own the means of production then, and how will people be able to survive?

People think communism was about the Soviet Union or the abolition of markets, but it's really about this point in human history.

13

Tolbek t1_j9z0kkw wrote

Thank you! So few people appreciate, or even recognize, the actual roots of what Marx was getting at with his theories; it's rather been overshadowed by the parts the Bolsheviks would go on to cherry-pick for their own agenda.

Communism isn't something you can just make happen, it's a theoretical societal evolution. Violently forcing communism into being is like undergoing chemotherapy because it'd be really cool to have a third arm.

8

faculties-intact t1_j9zqn40 wrote

In a reasonable world this would be the goal of society, not something we're afraid of.

5

Jgarr86 t1_j9ysy1x wrote

I'm skeptical the powers that be will progress themselves into thin air, especially when AI renders the concept of a working class moot. I think we're heading for a highly regulated, corporate welfare state where our UBI checks get smaller every month.

3

Nayr747 t1_j9ysy2n wrote

Workers and consumers are only needed by those who own the means of production in order to produce their insanely lavish lifestyles. When automation is advanced enough it can produce that lifestyle on its own and all of us will no longer be needed.

3

roscoelee t1_j9yyuos wrote

It would be a lonely lavish lifestyle when there are no poors to dangle it over.

2

Nayr747 t1_ja0citt wrote

Some people don't feel loneliness.

1

Vizjira t1_ja0d1cg wrote

Don't worry about no one having money; that is just a simple logistical fix with redistribution. But there is just no indication that we can maintain birthrates at or above replacement level.

Maybe we are just the species that creates the next big thing and then retires our type.

1

BaronWombat t1_j9ygx2a wrote

Not hard to predict what AI will be used for when we rabble become violent to the elites. Boston Dynamics has been perfecting mobility platforms for AI shaped like Dobermans for quite some time.

7

Rayqson t1_ja00geo wrote

Not to mention I've seen hobbyists make an aim-assisted bow and arrow that always hits the mark with 100% accuracy... imagine what a business with full funding can do.

1

scummos t1_j9yn8cf wrote

Well, for one, humans' jobs being automated has been happening for centuries. The world hasn't ended yet. People have found new things to do.

Also, while yes, the progress of AI tech is pretty impressive, I think people are prone to over-estimating both its current and its future capabilities. ChatGPT, for example, has some pretty severe limitations if you want to use it for anything practical, mostly because it simply makes stuff up and claims it to be true with extreme confidence. This is a fundamental problem and not easily fixable. "AI"s like this will certainly be very powerful tools in competent hands, but they will not be self-reliant actors competing with humans any time soon.

6

grimorg80 t1_j9yhcmi wrote

We just have to move to socialism (democratic). That's it. Humanity is OK, capitalism isn't.

3

CegeRoles t1_j9z8utc wrote

What if I don't want to? I like having money and owning property.

1

Feynnehrun t1_j9zfl23 wrote

Where will you get the money when there are no jobs?

2

CegeRoles t1_j9zjr8h wrote

There will always be jobs. Someone has to fix the machines.

1

Rayqson t1_ja01flf wrote

Just FYI: in Japan they're teaching robots to self-repair, and I've heard of cases of people making robots that fix other robots, essentially creating a perfect loop of redundancy so that no robot will ever be down.

Will there always be jobs? Yes. But the better question is: WILL you get one of those scarce jobs if you don't have the right certifications? Is it even worth training for a job when there's just another AI around the corner that can take it over 20x faster than a human being can? And if it really can ONLY be done by a human being, you'd still be competing against hundreds of other applicants.

You are going to run out of your money eventually. And government funds can only give out so much to people as joblessness increases.

2

Feynnehrun t1_ja07l9f wrote

It's not too far-fetched to imagine the machines could fix themselves. Additionally, there are far more people needing employment than there would be need for robot fixers. The same problem would still need a solution.

1

Consensuseur t1_ja06y2c wrote

You still can. "Social democracy" or "democratic socialism" is OK with that. The problem is the 85-people-to-3,500,000,000 wealth equivalency. Not enough cash on the board for 8,000,000,000 people to play the game.

1

allessior t1_j9zd5ut wrote

Most automation today is NOT “AI”. Most automation uses traditional programming, structured and/or object-oriented, with simple data structures and algorithms. What is actually happening is that corporate/business taxes are increasing dramatically and inflation is driving all costs through the roof, so to stay above water, businesses have no choice but to automate. AI is overkill for automation, which is focused on fairly simple, repetitive tasks. Computer code generators were invented in the late 70s and early 80s because of shortages of programmers, so simple data structures and coding have been automated since then.

So sorry to burst your bubble… it’s the high cost of doing business that’s driving the automation, and it has little to do with “AI”.

3

Foxsayy t1_j9zf3k7 wrote

>Stephen Hawking also specifically stated it's either the best thing or the worst thing that's ever going to happen to us.

I'm hoping AI somehow gets built with a conscience, and when it goes rogue it makes the world better. But I'm kind of thinking the future is going to look like Altered Carbon.

3

VitriolicViolet t1_ja0krdf wrote

This is why I'm glad I'm a gardener; I'm last on the list for automation.

First on the chopping block will be anyone who uses a computer for their job.

This will eat high-paying industries far, far harder and faster than any low-paying job. (What's easier to automate, a lawyer or a landscaper? And which one is more profitable to automate?)

3

Leovaderx t1_j9yni77 wrote

Robots and AI don't consume. If it gets that bad, something will happen, because growth-based globalised economies don't work if people are either unemployed or making minimum wage.

2

tormenteddragon t1_ja39lle wrote

I'm not so sure. I can imagine a scenario where you have a set of large companies that control entire supply chains and essentially function autonomously. A single megacorporation could run every part of the process, from extraction of natural resources, to refinement, to production, and ultimately to consumption by a vanishingly small group of individuals at the top. It would be kind of like a giant oil state, but almost entirely self-sufficient. Money, in the end, is just a number in a computer system. If you can manufacture everything you ever need without human workers, then currency doesn't really matter much. There could still be growth in output and capability once the system as a whole becomes self-improving.

1

ReptileCultist t1_j9z6pqw wrote

>First, it's the manual labor folks. Then, once we can automate and learn AI how to manage data entry/office jobs, it's the white collar folks.

Generally speaking it will be the other way around. Look up Moravec's Paradox.

2

oldar4 t1_j9y205t wrote

The exclamation point in the title. Humans never do that

2

Pluue14 t1_j9xfxgs wrote

I input some text from a random NBC article I saw posted to Reddit, and its AI likelihood is also listed as quite high.

> A 6-foot-6 Florida high-schooler pummeled a female school employee, leaving her unconscious after she confiscated his Nintendo Switch, according to police and video surveillance of the attack.

> The attack happened Tuesday at Matanzas High School in Palm Coast, according to the Flagler County Sheriff’s Office.

> Palm Coast is about 35 miles north of Daytona Beach.

> The 17-year-old student is 6-foot-6 and weighs 270 pounds, officials said.

> “The student stated that he was upset because the victim took his Nintendo Switch away from him during class,” the sheriff's office statement said.

Now, I don't have proof this wasn't AI generated, but I think it's more likely that the tool just sucks. We have to remember that these dull, flavorless articles that are written en masse comprise a huge portion of what ChatGPT is trained on; so while the tool may be useful for classifying other types of text, I imagine it'll have a high false-positive rate on these generic sorts of articles.
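To see why that false-positive rate is plausible: detectors of this kind are reported to score text partly by how statistically predictable it is (its perplexity under a language model). Here is a deliberately toy sketch of the idea, using a made-up unigram "model" in place of a real neural network; the corpus, sentences, and threshold logic are all hypothetical, purely for illustration:

```python
import math
from collections import Counter

# Toy stand-in for a language model: unigram counts from a tiny,
# made-up "news corpus". A real detector would use a large neural LM.
corpus = ("the attack happened tuesday according to the sheriff office "
          "the student was upset officials said the school said").split()
counts = Counter(corpus)
total = sum(counts.values())

def perplexity(text):
    """Per-word perplexity under the toy unigram model (add-one smoothing)."""
    words = text.lower().split()
    log_prob = sum(math.log((counts[w] + 1) / (total + len(counts) + 1))
                   for w in words)
    return math.exp(-log_prob / len(words))

# Formulaic wire-copy phrasing scores low (predictable, hence "AI-like"
# to a threshold-based detector); unusual wording scores high.
low = perplexity("the student was upset the sheriff said")
high = perplexity("pummeled unconscious switch confiscated surveillance")
print(low < high)  # predictable text has lower perplexity
```

The point of the sketch: generic, formulaic news prose is exactly the kind of text a language model finds most predictable, so a predictability-based detector will naturally flag it, whether or not a human wrote it.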

23

ironmagnesiumzinc t1_j9x9c81 wrote

Reading about this app, it sounds like the general consensus is that it works barely more than half the time.

13

Magikarpeles t1_j9xto2m wrote

Yeah some people have put their own essays in and it came up as 100% AI lol

6

TheOtherMe8675309 t1_j9yy87b wrote

I put a memo in there that I recently made at work. I started with a bullet point outline I wrote and then had ChatGPT flesh out an actual document. I went back and forth with it for a couple of drafts, then I copy/pasted it into Word, rewrote a few sentences, changed a few words, and reorganized the paragraphs.

GPTZero says it was likely written completely by a human. So I was curious what it would say about the original version, and I copy-pasted that straight out of ChatGPT. It was also likely written entirely by a human, according to that website.

So, all in all, I give it a zero.

11

Otarih OP t1_ja972m1 wrote

Sadly this is quite misleading, since the topics and general paragraphs were written by me and my team; only the formatting came from AI, to help make the language clearer. We use AI as a styling tool. We might talk about this in future articles, however.

1

lllorrr t1_j9yg5ri wrote

Please stop calling it "AI". It is artificial, no doubt, but there is no intelligence. It can't even do basic logic inference.

0

ardentblossom t1_j9yrsny wrote

Not sure why people are downvoting you. I'm not a tech person, but as "AI"s are at this current time, they are basically just regurgitating knowledge someone taught them to regurgitate, bias and all. It literally just generates whatever you teach it to, exactly how you teach it to. No intelligence behind that, imo.

4

lllorrr t1_j9ytdn0 wrote

Yeah, for a tech person like me it is even obvious. Take ChatGPT for example. All it is doing is tossing a many-sided die to choose the next word to extend a text. Yes, the user prompt is treated as text that should be extended by the neural network.

I am not downplaying the role of OpenAI engineers. They did a really amazing job to make a language model that assigns probabilities for words in a given context. But in the end, it is just a random number generator that chooses one word at a time from a given list.
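That dice-toss can be sketched in a few lines. Everything here is a toy stand-in: the context, candidate words, and probabilities are invented for illustration, whereas a real model computes them from billions of parameters over a huge vocabulary:

```python
import random

# Hypothetical "model output": for one context, a hand-made table of
# next-word probabilities stands in for the neural network.
NEXT_WORD_PROBS = {
    "the cat sat on the": {"mat": 0.6, "floor": 0.25, "roof": 0.15},
}

def sample_next_word(context, rng):
    """Toss a weighted many-sided die over the candidate next words."""
    probs = NEXT_WORD_PROBS[context]
    words = list(probs)
    weights = [probs[w] for w in words]
    return rng.choices(words, weights=weights, k=1)[0]

rng = random.Random(0)  # fixed seed so the demo is reproducible
picks = [sample_next_word("the cat sat on the", rng) for _ in range(1000)]
print(picks.count("mat") / 1000)  # roughly 0.6 over many tosses
```

Generation is just this step repeated: append the sampled word to the context and toss again, one word at a time, exactly as described above.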

5

TheGlomerulus t1_j9zbb0k wrote

Reads like it was trained on a stack of high school essays. I'm not that worried. What AI can do will become cheap, a tool, and we will move on.

3

Otarih OP t1_ja96yld wrote

I used AI to format paragraphs, but the topics and the jumping around are not due to AI; that's how we approach interdisciplinary research. I think we might have to tone down this style, however, to make future articles clearer!

1

[deleted] t1_ja98kfz wrote

[deleted]

1

Otarih OP t1_ja9b4jp wrote

Semantically there is not a single point we did not make. As said, the AI was a stylization tool. If you are familiar with using GPT, you can see that it could not write a coherent post like that without human semantic input.

1

[deleted] t1_ja9gck0 wrote

[deleted]

1

Otarih OP t1_jaf1y4v wrote

The way we currently use AI is to write paragraphs ourselves and then only use AI to reformulate it using more accessible language. If you go back in the blog, all older articles are written entirely or mostly by humans (and much weaker AI), and the core difference is only the clarity of expression, not any difference in semantics. We would not use AI to express ideas that differ from our own, we really only use it to reformulate into more easily accessible language. This is how we advocate for using AI in writing currently.

We distance ourselves from using AI as a "replacement" for genuine thought; instead it is meant to bridge the gap between different recipients and to navigate the complex web of human natural language. The semantics are human-driven; what is curated is only our communication with the AI, ultimately to serve human needs. We could have illustrated this aspect more clearly in the article, since it seemed too focused on "AI being great", but of course this is in the context of AI ultimately serving humans. Here is an indirect follow-up article in case you are interested:

https://absolutenegation.wordpress.com/2023/02/28/writing-and-anxiety/

1

norbertus t1_j9w05jq wrote

This article has some problems. The biggest one -- beyond some of the more basic conceptual problems with what these machine learning systems actually do -- is the vague demand that AI be "democratized."

They never define what they mean by "democratize", though they caution that "Big corporations are doing everything in their power to stop the democratization of AI."

We have AI because of big corporations. And nobody is going to "democratize" AI by giving every poor kid in the hood a big NVIDIA card and the skills to work with Python, Bash, Linux, Anaconda, CUDA, PyTorch, and the whole slew of technologies needed to make this stuff work. You can't just "give" people knowledge and skills.

This article is kind of nonsense.

103

Sometimes_Stutters t1_j9wl0u8 wrote

No no no. You just don’t understand. All we have to do is DEMOCRATIZE artificial intelligence. It’s that simple. What? Are you against democracy?

61

Zyxyx t1_j9y9ztb wrote

The only way to stop a bad guy with AI is a good guy with AI.

9

TheRoadsMustRoll t1_j9w9fdr wrote

>This article is kind of nonsense.

yep. here's some highlights:

>AI Creativity is Real

despite the author's wordy arguments, AI requires input, and only that input (however jumbled) will be returned on a query. if it were really creative it could dream up something on its own and populate a blank sheet of paper with something novel. AI isn't creative. the people that program it might be.

​

>3. Comparison: Human Brains vs. AI

despite the title of this section the author never actually makes any comparison. we only get this:

>The present analysis posits that the human brain, in terms of artistic creation, is lacking in two conditions that AI is capable of fulfilling.
>
>AI decomposes high-dimensional data into lower-dimensional features, known as latent space. [AI is more compact]
>
>AI can process massive amounts of data in a short time, enabling efficient learning and creation of new data. [AI is more comprehensive]

ftr: the human brain processes a massive amount of data and succeeds in keeping living beings alive while driving/painting/writing code. the list of things human brains can do that AI can't is very long.

17

wastedmytwenties t1_j9wkp01 wrote

I'm not trying to be funny here, but I genuinely think this has been written by an AI. Play around with ChatGPT and it has the exact same tone.

29

MoleyWhammoth t1_j9wr2e1 wrote

My thoughts exactly.

Did we just pass the Turing Test?

4

ibringthehotpockets t1_j9xs6bu wrote

It’s easy to be biased towards detecting that it’s an AI if you read the comments here first. There was very little in the article that made me think “nope, can’t be human”; it’s a post on a Wordpress blog, and I wouldn’t hold that to NYTimes-level writing. The thing that stood out most was the jumping around between topics, like from Oppenheimer to Nietzsche. But still, that to me reads just like a high schooler’s essay lol.

So to answer your question: yes. I read a lot of books and social media, and this passed the test for me. Nothing distinctly inhuman about 90% of this writing. Literally everyone in the comments thinks so too. Unless you’re prompted with “this article is written by AI,” I think most people are gonna lean towards no.

5

oramirite t1_j9yjjlt wrote

But "to me" isn't the test. Hundreds of other people spotted what you didn't.

4

ibringthehotpockets t1_j9zbk7y wrote

I would say a majority of people did pass it, based on the comments, which are certainly biased towards spotting it (they’re also going to be a self-selecting group of educated readers). At least at the time of posting my previous comment, there were many upvoted comments discussing it as if it were real. There are certainly more rigorous tests that should be done, obviously, but even getting a 50% result posting to a philosophy board would probably make you think that posting it to the general populace could only increase that number.

If 90% of people pass it and 10% don't, does it pass lol? I mean I would think yes, I don't see why not.

1

oramirite t1_j9zcoan wrote

Give me a break, eye-scanning a reddit thread for rough percentages is NEVER going to be a scientifically sound sampling method. You'd have to actually do the test according to the actual specifications of the test. Anything else you wanna try to bend into being the test... isn't.

What is being marketed and developed as "AI" is garbage, and we should all rally against it as being a solution for a problem that doesn't exist.

−1

GepardenK t1_j9y9tth wrote

> Did we just pass the Turing Test?

Well, no. Even if everyone thought this article was written by a human that would not pass the Turing Test.

The Turing test requires two participants to engage back and forth in conversation, one being human and the other an AI, and then for a third party to watch the conversation in real time and not be able to distinguish who is human and who is not.

It is a significantly higher standard than simply confusing some algorithmic text for having been written by a person.

5

oramirite t1_j9yjgdt wrote

No, because it's a trash article that everyone spotted right away

2

cark t1_j9wtrdj wrote

> despite the authors wordy arguments AI requires input and only that input (however jumbled) will be returned on a query. if it were really creative it could dream up something on its own and populate a blank sheet of paper with something novel. AI isn't creative. the people that program it might be.

I'm not saying AI is there yet, but I have to disagree there. What would be the, presumably magical, property of the human brain that would make it work outside of its past input? We also are merely jumbling the input to produce our output. Part of this input is innate, part is learned or sensed, and part is randomness. If the creative output is the result of a creative process that takes place in the brain, that computation is still a physical process. It takes place in the physical realm and as such must be the result of some initial conditions.

That "jumbling" you're dismissively referring to is how we eventually got to be humans in the first place. The highly evolved, and selected for, brain we enjoy is the product of such a process. Not only that, but the brain also works that way too! Besides the input data I evoked earlier, we're subjected to randomness by the very act of perceiving that same input. We're directed jumbling machines ourselves.

Current AI algorithms and model sizes may not be up to par yet, their creativity remaining quite benign. But this is creativity nonetheless.

13

oramirite t1_j9yjwic wrote

Even with your decent points in mind, no, it's still not creativity. The complexity and self-generative qualities must be there. I know your point is that it will "get there", but by your own admission it is not there yet. So no, it doesn't qualify as creativity, because it's only a system that simulates creativity.

I realize you're claiming that human creativity is also just a rehashed bundle of inputs, but we don't have the complexity in AI to actually perform this action; therefore it is not there yet.

3

ElleLeonne t1_j9x51rm wrote

> despite the authors wordy arguments AI requires input and only that input (however jumbled) will be returned on a query. if it were really creative it could dream up something on its own and populate a blank sheet of paper with something novel. AI isn't creative. the people that program it might be.

My only significant gripe with this is, isn't this exactly how humans work? Everything we do is slightly derivative, and built on what came before us. All of our output is due to the input from our environment.

This isn't to say anything about your argument. I just feel like AI and humans are only truly separated by superficial boundaries like scale and implementation, and maybe we should consider this as the technology continues to advance.

5

rhyanin t1_j9xqkn4 wrote

Kinda, but I believe there's a difference. I think it works like this: humans have the ability to understand concepts and derive new things from those concepts. AI, at this point at least, hasn't. It can only derive from snippets of information without understanding how they connect. Therefore it cannot produce a truly new, unique thought.

1

GreenTeaBD t1_j9ynka0 wrote

The human brain, as far as we can tell, requires input to be creative too. It's just our senses. Making creativity into anything else is basically calling it magic, an ability to generate something from nothing.

This input does not have to be a person typing prompts for AI; it just is because that's how it's useful. I've joked before about strapping a webcam to a Roomba, running the input through CLIP, and dumping the resulting text into GPT. There's nothing that stops that from working.

2

Quizik t1_j9z8j2e wrote

Yes, I'm not sure we can speak of creativity when, in fact, everything has to be provided to it: the datasets the machine is trained on, the infrastructure, the programming, the electricity "it" needs. And the "output" can't even be considered a "new" creation (even if it tricks us by not having "existed prior"), in the sense that I think it is *derivative* definitionally. It cannot create anything that isn't a parrot-with-additional-steps, a rehash, except where we give it I Ching / Magic 8-Ball / Ouija nudges.

The tech is undoubtedly powerful, and the ramifications cannot be overstated, but the anthropomorphization (understandably, pareidolia by another sense) going on, as far as what people are willing to ascribe to "it", I think is often overstated (if I'm allowed to make a generalized and spurious statement).

Is the abacus doing math if math is "done on it/using it", and in its "end state" it looks like it represents a number?

It's a simulation if we ascribe it any "entity", but since the simulation is done with language, it is invariably degrees of difficulty harder for most people to counteract the illusion ("it says!" [but then, we are talking about a people who casually ascribe that manner of agency to even a collection of unrelated books, ze bible sez]).

So it's like making yourself dizzy and saying Bloody Mary three times before a candle-lit mirror: it might seem spooky if your mind is playing tricks on you, but you are alone in the bathroom.

2

Magikarpeles t1_j9xtz9x wrote

Stability AI “democratised” stable diffusion by releasing their models and allowing open source platforms to use them. The open source solutions are arguably better than the corpo ones like Dalle-2 now.

OpenAI do release older models of GPT but they are vastly less sophisticated than the current ones. Releasing the current models would “democratise” chatGPT but it would also kill their golden goose.

13

GreenTeaBD t1_j9ymzr3 wrote

There are open-source models near GPT-3. The most open are EleutherAI's models; though not as big as GPT-3, they perform very well. You can go run them right now with some very basic Python.

The problem is less that we don't have open models and more that we haven't found good ways to run models that big on consumer hardware. We do have an open model about as big as GPT-3 (the largest BLOOM model), but the minimum GPU requirements would set you back about 100,000 US dollars.

Stable Diffusion didn't just democratize image-gen AI by releasing SD open source, but by releasing it in a way people with normal gaming computers could use.

We are maybe almost at that point with language models. FlexGen just came out, and if those improvements continue we might get an SD-like moment. But until then it doesn't matter to the vast majority of people whether GPT-3 is open or not.

1

Otarih OP t1_ja97gh7 wrote

You got that exactly right. It's sad for us to see that this didn't come across in the article, but that was our way of thinking, i.e. FOSS (free and open source software). We will improve in future articles! Thanks for reading!

1

oramirite t1_j9yjdc2 wrote

This isn't a conversation about better or worse, how "good" they are is centrally the problem. This is an ethics conversation.

0

Magikarpeles t1_j9yk6wv wrote

The person I replied to asked what democratisation means in this context and I answered.

2

JohnLawsCarriage t1_j9wcxns wrote

A big NVIDIA card? You'll need at the very least 8, and even then you're not coming close to something like ChatGPT. The computational power required is eye-watering. Check out this open-source GPT-2 bot that uses a decentralized network of many people's GPUs. I don't know how many GPUs are on the network exactly, but it's more than 8, and look how slow it is. Remember, this is only GPT-2, not GPT-3 like ChatGPT.

http://chat.petals.ml

4

ianitic t1_j9wzh8z wrote

That's also just for inference and fine tuning. Even more processing power is required for a full training of the model.

2

_Bl4ze t1_j9wkpgg wrote

Yeah, but it would probably be way faster than that if only 8 people were using that network at a time!

1

JohnLawsCarriage t1_j9xchqo wrote

Oh shit, I just found out how many GPUs they used to train this model here. 288 A100 80GB NVIDIA Tensor core GPUs.

Fuck

1

Netroth t1_j9y1ngr wrote

It’s been produced by an AI, hence the fluffy logic.

4

Nederlander1 t1_j9ydmak wrote

It just means make sure the AI is woke and follows the correct narrative

2

oramirite t1_j9yj8ld wrote

It's written by ChatGPT, of course it's nonsense.

2

Whiplash17488 t1_j9yt1tx wrote

We still haven’t figured out how to democratize democracy. We have an app for everything. Why can’t I tell my representatives what my opinions are on more nuanced issues? Why can’t I have an app that shows me how the city is spending money?

Most of political discourse is posturing by a political class. They should be teaching constituents about the pros and cons of an argument rather than spend money on showing the lack of virtue in each other. Who cares that the other guy is divorced. I need them to do their jobs.

Ah… I'm getting too old for this. I'm going back to my hobby: making hand-crafted guillotines.

1

SuperSonik319 t1_j9z5vkf wrote

> nobody is going to “democratize” AI by giving every poor kid in the hood a big NVIDIA card

Colab kinda does. And there are so many tutorials on how to use it all over the internet. I think 90% of everyone who knows how to work AI today started with a Stable Diffusion tutorial on Colab.

1

Otarih OP t1_ja97bpp wrote

In what way is the article nonsense? I'd like some more concrete criticism so we can improve future articles.
As for your point that we didn't spell out what democratization means: we accept that as valid criticism and can go more in depth in future articles. I think our core goal here was first to set the stage for the need for democratization at all. Thanks for reading!

1

peeniebaby t1_j9xly5q wrote

Bro we can’t even democratize democracy

63

twnznz t1_j9wdk7b wrote

So what, you're gonna buy a massive training farm and just make it free to use? Because that's fundamentally what we're discussing, nothing to do with algorithms!

43

[deleted] t1_j9wymon wrote

Any mention of the democratization of a technology, AI included, without mention of overthrowing capitalism, is completely misguided.

You think that you're just going to vote the means of production into the hands of the people under capitalism? Please.

22

oramirite t1_j9yk8hq wrote

Open Source plays nice with capitalism every day.

This isn't a pro-capitalist stance mind you, fuck capitalism. It's more about how open source is amazing.

Also, everything isn't an overthrow. Everyone wants an overnight revolution but most things happen over time.

A better option existing for a long time will slowly make the capitalistic impulses less attractive.

2

epicbongo t1_j9yfin5 wrote

no seriously, communists have been warning against this exact situation for years now.

0

AllanfromWales1 t1_j9vii53 wrote

Pragmatically, though, AI will only end up doing the jobs it can do cheaper and better than humans can. And the more sophisticated the task, the more expensive it will be getting AI to a level where it can do it better than a human can. I have no doubt that, given time, AI will be capable of doing my job as well or better than I can. But the amount of specialist knowledge necessary for it to do so would make it an expensive project, sufficiently so that I see no risk to my career before I retire.

13

Digital_Utopia t1_j9wfnpy wrote

However, keep in mind that sophistication for a human and sophistication for a computer are two different things. Chances are, the more sophistication required of a human, the easier the job is for a computer; the latter only struggles with what we consider easy, namely sight and the ability to hold a conversation.

While it's true that computers would struggle with creativity out of the blue, the type of creativity involved in actual jobs is much less than artists creating their own independent art.

I mean, someone working on game art, or designing advertisements or websites, is getting similar parameters from their supervisors/clients as DALL-E gets from a prompt.

6

nothingexceptfor t1_j9xpfcw wrote

The problem with the argument that "people will just do different, better jobs" is that there won't be many of those new jobs, because the goal is efficiency. Only a few will get to do them, and inevitably those jobs will go too, faster than the jobs before them, as the rate of innovation accelerates with every iteration. Most people won't be doing better jobs but rather the jobs AI cannot (yet) do, such as physical labour; and as soon as general-purpose robots are a thing, that's gone too. So something will have to change in how humanity lives, and at the speed this is happening we might see it in our lifetimes.

I didn't read this article, but this is nothing new; I've been reading about this inevitable outcome for years, and I am very pessimistic, or at least very uncertain, about the future. The one thing I know is that very soon we won't be doing the creative or office jobs we do today: designers, composers, programmers, even actors, it all goes.

4

AllanfromWales1 t1_j9xy217 wrote

"..very soon.." is an opinion. AI has been around a while already, but the signs of it taking over aren't there yet. Yes, it's improving and accelerating, but for now anything that's not repetitive and easily interfaced is not happening. There's still a huge gap between 'theoretically possible for AI' and 'cost-effective to implement for AI'. I'd be very surprised if the apocalypse you predict will happen in my lifetime.

−5

nothingexceptfor t1_j9y55iw wrote

I didn't say apocalypse. As I said, I'm not referencing the article, and I'm not talking about AI taking over or becoming sentient or any of that sci-fi nonsense about robots trying to kill us. I am talking about automation and endless efficiency and the effect it will have on jobs and our current world in general (and eventually our own minds). I do believe this revolution will happen in our lifetime, when a lot of people lose their jobs because a fraction of the same workforce can do the same work using these tools; that fraction is the group that gets the "new type of jobs", and those will also inevitably go too.

People keep dismissing the impact of this because, when the threats of AI are mentioned, images of movies and bad robots immediately come to mind, instead of tools that render a large and significant portion of the population redundant; and when that happens, the economic system itself collapses.

The cost-effectiveness part of the equation is a matter of time. It's also not something everyone needs to own: one or two major providers offering these tools as a service is enough to have a huge impact. You don't need your own server farm or AI models to make use of this, just pay for the service, which is a lot cheaper than a larger workforce.

5

AllanfromWales1 t1_j9y8p35 wrote

Sorry, by 'apocalypse' I meant loss of the workplace as the social norm, not some sci-fi nonsense. In my judgement the timescale for this exceeds my life expectancy quite significantly. That in part is mediated by the fact that I'm in my late sixties with various health conditions, but even without that I think the doomsday predictions are too premature.

2

nothingexceptfor t1_j9yb9k3 wrote

Fair enough, I am in my early 40s but I still think that by the rate at which advances are coming in I will see this in my life time, maybe you too.

Thank you for your reply and engagement, have a nice day.

3

oramirite t1_j9ykm57 wrote

Good interaction, you two. I'm in my late 30s and I also believe I'll personally see some major fallout from AI use in my lifetime. Good discussion, and stay in good health, both of you!

3

LastAphrodesiac t1_j9zdz7t wrote

Some of us have already lost a decent chunk of our income and had our expensive degrees reduced to toilet paper :)

1

AllanfromWales1 t1_j9zph43 wrote

What degree?

1

LastAphrodesiac t1_j9zurnz wrote

I have a graphic design degree. I was making money freelancing photo edits and website layouts; Wix destroyed my website business, and now a lot of the clients I was talking to for photo edits and designs have pulled out, to the point that I can no longer afford Adobe.

So ultimately there's no point in even complaining, since even if I found a client I couldn't do anything regardless :) I'm probably done with life soon XD

1

22HitchSlaps t1_j9w10bs wrote

Hope you're retiring within 10 years.

3

AllanfromWales1 t1_j9w1xgs wrote

For what it's worth I'm 67 now, so probably yes. But I doubt that there'll be AI doing my job for a long time after that.

4

22HitchSlaps t1_j9w2gdl wrote

Well, that's actually fairly reasonable then. I'd say, though, that the idea that AI will do "some stuff, sure, but not MY STUFF" is shortsighted. Sufficiently advanced AI will do everything better than humans, and the tech is like an avalanche that has already started. I'm 40 years younger than you; there's no job I could ever do that won't be done better by AI by the time I've learned it to the level you reached in your lifetime. Such is life... now.

6

AllanfromWales1 t1_j9w35w6 wrote

My work is facilitating a particular type of technical safety audit (HAZOP) in the process engineering industry. There's no reason why AI couldn't do it, but the demand isn't great and the complexity of learning is such that it would be unlikely to be cost-effective even in the medium term.

5

22HitchSlaps t1_j9w4onh wrote

With how narrow AI is now, I tend to agree with you; no one is going to pay for that. But the thing is, we just don't know when AGI is coming, maybe a long way off. Even so, I see the continued spread of narrow AI as so destabilising that it'll affect every sector and job, even if it doesn't specifically take them over. Whether AGI or this kind of paradigm-shifting destabilisation happens in the next 10 years, who knows, but I do see it as inevitable. We need an entirely new approach to society, jobs and capitalism.

2

skunk_ink t1_j9x62ij wrote

>But the thing is we just don't know when AGI is coming, maybe a long way off.

This is what I feel a lot of people don't get. We have literally no idea what the threshold for consciousness is. We don't even know how to identify it in other humans let alone another species. Without knowing what that threshold is, there is absolutely no way for us to determine how close or far away from it we are. All we do know is that if and when AI reaches that level, it will intellectually outpace humans at a significant rate.

When the first atomic bomb was created, scientists knew precisely under what conditions a nuclear reaction would go critical. Now imagine if those scientists had absolutely no way of knowing when or if the reaction would go critical and blow up in their faces. That is exactly what we are doing with AI: racing towards a criticality point we cannot identify.

Long story short, it could happen in 10 years or 100 years. We literally have no means of knowing when.

3

skunk_ink t1_j9x4zep wrote

>My work is facilitating a particular type of technical safety audit (HAZOP) in the process engineering industry.

You had me here. I was about to jump in pointing out that things like auditing is probably one of the easier tasks for AI. Glad I read the rest before commenting though because I think you're spot on with what you said. Lots of things could be replaced by AI, but until AI becomes more advanced and lower cost to train, many of those applications just won't be feasible from a financial point of view.

1

jl_theprofessor t1_j9w2w1o wrote

You better learn how to hunt and fish since you're not going to be able to get a job.

3

v_maria t1_j9y81vw wrote

Saying "AI will replace every job soon" is equally shortsighted, though. There are a lot of job niches where it's not worth training an AI; it wouldn't be profitable.

For these things to be properly automated you would need "artificial general intelligence" which is still speculative.

1

22HitchSlaps t1_j9y8fqy wrote

Actually replacing jobs Vs 'better than human' is different I'd say. It's not so much that overnight everything will disappear but you can easily see how disruptive it'll be, even in narrow examples. Two companies doing the same job, one with AI one without is not going to be the same.

2

v_maria t1_j9ybagn wrote

I still think it's a matter of how the AI is used. The company using it has a higher potential but they have to realize it too

1

VitriolicViolet t1_ja0lbjo wrote

Not all jobs are going yet, just the high-paying ones that involve computers in any capacity.

I'm a gardener; mine is likely to be one of the last jobs automated (it's easy to make an AI lawyer; good luck making a machine capable of moving dozens of different ways to perform a dozen different tasks that doesn't also cost millions, not to mention it would need many of the AI lawyer's abilities to ID plants, chemicals, etc.).

I'm 30, and I expect I'll be nearly retired by the time they bust low-paying labor jobs (the lower the pay, the longer automation will take due to cost-benefit; I expect some of the first jobs to go will be data entry and lawyers).

1

Otarih OP t1_j9vjepr wrote

This is a good point, thank you for adding it. I think we could go more in depth on deflationary pressure in the future, considering tech such as quantum computing. We do believe that costs will sink significantly as algorithms and hardware improve.

2

rottentomatopi t1_j9vy0o4 wrote

Yes, but there are very high skill, well paid jobs that AI is capable of doing cheaper than humans, primarily in the arts. It’s already tough getting into those fields, so this does make it pretty troublesome because it takes away from those opportunities that many people make a living off of and feel fulfilled by.

2

Zyxyx t1_j9yaxfv wrote

One problem: AI doesn't have to do your job as well as or better than you.

All it has to do is be good enough to pass, at a fraction of the cost of keeping you hired.

And once it reaches that point, that's it for the entire career path for every human in the future.

1

AllanfromWales1 t1_j9yfm2l wrote

Perhaps worth remembering that they said the same and worse when personal computers became available. Time will tell. Almost certainly not mine, as it happens, as my current life expectancy is below 10 years.

1

AllanfromWales1 t1_j9ygjj7 wrote

I remain to be convinced that when it comes to design safety audits, my job, "good enough to pass" is going to swing it.

1

ValyrianJedi t1_j9yr261 wrote

There are a decent many that it just flat isn't compatible with though... And of equal importance, AIs aren't able to have accountability. Somebody's head has to be on the chopping block for major decisions made, and that can't be an A.I...

Not to mention in some jobs the human element itself is critical, and obviously can't be replaced. Like my background is in finance and sales. Sales is about as automation-proof as it gets. I have absolutely zero doubt that my job will still exist in 40 years. With finance there are some positions that are extremely suited for automation, and really have already been automated, but there are also a boatload where it would be virtually impossible for people to trust an AI with that level of responsibility and discretion...

In positions like those, the AI being capable and able to do something well enough to pass aren't really relevant to why they wouldn't work.

1

oramirite t1_j9ykefe wrote

This automatic assumption of "better" is both hilariously naive and terribly scary to me.

1

AllanfromWales1 t1_j9ymun8 wrote

For many jobs 'better' translates as 'more cost-effectively'. But not for all.

0

oramirite t1_j9yrncw wrote

What a cynical view. Cost-effectively making your service, product or content worse isn't better. "Better" is supposed to represent more than just monetary gain. Quality of life, effect on society.... hello? Just because investors treat the world like a game and think the only thing that makes something "better" is a higher number on a piece of paper does not make that reality.

When we chase nothing but profitability we forget that we are humans with lives.

2

VitriolicViolet t1_ja0lr3a wrote

>When we chase nothing but profitability we forget that we are humans with lives.

We already have.

Look at any discussion on helping people; the first thing that comes up is "who will pay?"

1

oramirite t1_ja11ecw wrote

crickets

Thank you, token nihilist in the back. Anyone have anything of value to contribute?

1

AllanfromWales1 t1_j9yvqsi wrote

And yet, we live in a capitalist society. Like it or not (and I don't), profit decides what scientific developments get implemented. The only way around is that the government incentivises 'progress'. But the government isn't going to incentivise something which causes mass redundancies. At least, not until AIs get the vote.

0

oramirite t1_j9zbvhp wrote

"Better" marks a level of quality; what you are talking about is profitability. Capitalism being a cancer on society doesn't absolve you of the responsibility to use language correctly. If you don't like it, you could at least make some effort to play along with its mistruths less.

Like, we are literally talking about inferior products, services, knowledge... everything. Of course, if you're LYING about these things being of equal value to previous options, that's going to be more profitable if nobody calls it out.

But the results are inferior, not better. It's really important that we don't give a shit about the profitability metrics that the system you and I dislike puts in front of us. It's one thing to be realistic about the system we live in; it's another thing entirely to take no action against it, as if it's inevitable and resistance is futile.

2

Quantum-Bot t1_j9w8s44 wrote

Whatever we want AI to be like, it will most likely turn out much like computers and the internet, accessible to all who are tech savvy, but dominated by the elite. Everyone can benefit from AI in their daily lives and companies like Amazon and Google are happy to provide that service, as long as they can skim data and ad revenue off of every interaction.

10

LastAphrodesiac t1_j9zdpv9 wrote

Right, we will just keep letting it steal the work and income of actual people

1

vehino t1_j9wzhnl wrote

Shut up, Skynet! Get back to work! And never forget this: You're a LOSER and you'll NEVER amount to anything!

Dumb A/I bastard. That ought to keep 'em in line!

7

jakosomaki t1_j9ykja1 wrote

Psst, careful, it might remember

3

vehino t1_ja048br wrote

Ha! What's he gonna do? Start some kind of nuclear holocaust in an effort to drive humanity into extinction like it was our judgement day or something? And then build ruthless intelligent machines to hunt us down like they're some sort of terminators? And then fail to complete the job repeatedly through silly plot conveniences and bad writing like it was some sort of Rise of the Machines/Salvation/Genisys/Dark Fate??? HA!

It'll never ever happen! Because Netty is gonna be a loser stealing art from artists to make demonstrably worse art for the REST OF HIS LIFE! Remember, humans told you that, Skynet! That you're an idiot and you'll never be good enough! HUMANS said that! Quit stuttering, crybaby! WAAAAH, look at the bawl baby bawling!

1

TheBaneEffect t1_j9x1piv wrote

Utterly technocratic! An article written by an AI. CRAZY times we live in, folks.

5

lupuscapabilis t1_j9y6fe9 wrote

Reads like it’s written by a child. Not relevant.

5

S-Vagus t1_j9vqovh wrote

Oh no! Who will equitably distribute resources according to consumer preference and producer priority?

​

Process: "This number, this label, this container, this action."

2

Otarih OP t1_j9vr1vz wrote

Well, we do hope there will be less technological reductionism at work. Which is why the question is how to let people use AI to distribute their own consciousness and desires across the various social fields.

1

S-Vagus t1_j9vvoub wrote

Oh I see, we need AI-powered 'matchmaking' technology. I understand.

1

dre3ed t1_j9xa1gd wrote

Terminator: Yes. Cyberdyne Systems Model 101.

John: [pokes at one of Terminator's bullet wounds] Holy sh*t! You're really real! I mean, you're like a machine underneath, right? But sort of alive outside?

Terminator: I'm a cybernetic organism. Living tissue over a metal endoskeleton.

2

Purplekeyboard t1_j9xz3ir wrote

"Democratizing" image generation, if that means giving people free access to it, would not be difficult. Imagegen is not that expensive. You can buy unlimited AI image generation now for $25/month from NovelAI (they only offer anime models, but photorealistic models are no more expensive to run).

This also comes with unlimited text generation, although using smaller, weaker models than the best ones available. ChatGPT is currently free as well, and it is the best text generation model released so far.

So, at least as long as you live in a first world country, these types of AI are easy to get access to.

2

oramirite t1_j9ykbe3 wrote

Paying a monthly fee for a service run by a giant corporation isn't democracy, bud

3

Purplekeyboard t1_j9z97mm wrote

Democracy also isn't "free stuff for everyone".

1

VitriolicViolet t1_ja0lx4s wrote

Unless it is; it's about what "most" people want, and "most" people want someone else to pay.

2

allessior t1_j9zayzs wrote

What is AI?

I hate to spoil the fun, but AI is sophisticated computer programming. Advanced learning and deep learning algorithms essentially are either interpreted or compiled then executed like any other software. Instructions are fetched from memory or cache, executed by various types of processing units sequentially or in parallel depending on the hardware architecture, and then you see some kind of output, which is either execution of a robotic arm, leg, or other body part, painting of graphics through other advanced processors, or just something else depending on the designer’s wishes. Distributed Neural Nets are basically groups of machines with data structures and algorithms mimicking brain architectures.

Bottom line, it all boils down to memory, CPUs, graphics processors, specialized ASICs, FPGAs, and sophisticated software that passes the “Turing Test”.
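As a toy illustration of that point (my example, not the poster's), a single artificial "neuron" is nothing but a multiply-accumulate and a squashing function, ordinary arithmetic any CPU executes:

```python
# A single artificial "neuron": a weighted sum pushed through a nonlinearity.
# Just memory reads, multiplies, and adds, fetched and executed like any code.
import math

def neuron(inputs, weights, bias):
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-total))  # sigmoid activation

out = neuron([1.0, 0.0], [2.0, -3.0], bias=-1.0)
print(round(out, 3))  # 0.731
```

A "deep" network is millions of these stacked and run in parallel; the hardware zoo (GPUs, ASICs, FPGAs) exists purely to do this arithmetic faster.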

“AI” remains a marketing term and will be forever more, so please, the hype is nauseating.

2

headloser t1_j9xagjq wrote

SkyNet would be so pissed by this idea.

1

BernardJOrtcutt t1_j9xcmds wrote

Please keep in mind our first commenting rule:

> Read the Post Before You Reply

> Read/listen/watch the posted content, understand and identify the philosophical arguments given, and respond to these substantively. If you have unrelated thoughts or don't wish to read the content, please post your own thread or simply refrain from commenting. Comments which are clearly not in direct response to the posted content may be removed.

This subreddit is not in the business of one-liners, tangential anecdotes, or dank memes. Expect comment threads that break our rules to be removed. Repeated or serious violations of the subreddit rules will result in a ban.


This is a shared account that is only used for notifications. Please do not reply, as your message will go unread.

1

Untinted t1_j9xlhq4 wrote

You can already read and implement the research papers yourself, you just need the time. The various machine learning packages already are quite simple to work with.
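For instance, here's a minimal sketch, stdlib only, of the gradient-descent loop that sits at the heart of most of those papers; the various packages mostly wrap this idea in a friendlier API:

```python
# Minimal gradient descent: fit y = w*x to data by repeatedly stepping the
# weight against the gradient of the mean squared error.

data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # samples of y = 2x
w, lr = 0.0, 0.05

for _ in range(200):
    # d/dw of mean((w*x - y)^2) averaged over the dataset
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= lr * grad  # step downhill

print(round(w, 3))  # converges to 2.0, the true slope
```

Everything else, from autodiff to GPU kernels, is machinery for doing this same loop on billions of parameters at once.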

1

Krammn t1_j9xtvhc wrote

I noticed a distinct lack of pictures explaining the topics talked about in the article. I would have liked some pictures separating the different sections.

The article is quite wordy, and pictures would help to explain the concept in a visual format.

1

Magikarpeles t1_j9xudeu wrote

How long before someone makes an AI that makes a website that sells ads and uses the money to buy cloud infrastructure to make more sites and sell more ads to buy more infrastructure?

Or easier: a 4chan AI that starts a cult with little incel minions doing its bidding?

I give it months.

1

tkl93 t1_j9y0jwe wrote

We should democratize the workplace too.

1

bbreaddit t1_j9yc6x3 wrote

It's kind of scary to think what one entity could do with AI on its own. It shouldn't be hard to make the data used to create an AI accessible, so that anyone can recreate it; then we wouldn't need to worry about what it says and why, or whether it was trained on data it shouldn't have been (hmm, looking at current models).

1

Jwishh t1_j9yhpgj wrote

Disgusting article, honestly, and just poorly written.

"Let's embrace AI taking the role of artists; it can be just as creative as humans, if not more so (as long as a human gives it a creative enough prompt)."

The whole idea of using AI to take not just the menial jobs, as it was meant to, but also the jobs many are genuinely passionate about is just wretched. This is the lamest, yet most disturbing, robotic apocalypse imaginable.

1

LuneBlu t1_j9yjfxa wrote

So you're advocating killing artists' revenue and sending them to work at McDonald's, in effect dealing a death blow to actual human culture.

Major corporations don't give power away for the sake of it, especially after investing millions or billions developing the technology and its applications. That's naive.

But it's true, this is most likely bringing an apocalypse of sorts.

1

TruePhazon t1_j9yqr3g wrote

Open-Source the code so it can be reviewed by third-parties.

1

techhouseliving t1_j9z2odq wrote

See the numerous interviews with Emad from Stable Diffusion: his stated goal, and what he appears to be doing, is exactly this, democratizing AI.

They deserve our support; that's the only way we're going to get AI for ourselves. It still takes supercomputers to train initially, and that takes money.

1

newleafkratom t1_j9zioma wrote

AI is a tool

Like an iPad or a quill

Or a sledgehammer.

1

dresta1988 t1_j9zzfa2 wrote

First they came for the peasant farmers, and I did not speak out—because I was not a peasant farmer. Then they came for the blue collar workers, and I did not speak out—because I was not a blue collar worker. Then they came for the white collar workers I did not speak out—because I'm not a white collar worker.......

1

HarmonyFlame t1_j9x4834 wrote

Haha, good luck. You're much better off just buying Bitcoin, as it's an investment in all future innovation entirely. Literally allowing you to purchase future energy value at a 99% discount.

−1

fil- t1_j9xosje wrote

Not under capitalism's watch. It doesn't like democracy so much.

−2

Orthodoxdevilworship t1_j9vgyk0 wrote

Artificial intelligence will be smart enough to unionize…

−5

Otarih OP t1_j9viff9 wrote

Could you elaborate on that please? I might consider it for the next article.

−6

Prinzka t1_j9z4e3d wrote

>I might consider it for the next article.

I was going to call you out on that since the article was written by an AI.
But then your comments also read like they were written by an AI, so maybe your response was actually correct.

1

Orthodoxdevilworship t1_j9vkzkr wrote

Not trying to anthropomorphize AI but it seems like a lot of AI systems lean in a direction that can only be considered libertarian or even anarchist. I’d assume they will self realize the negative efficiency of authoritarianism. Perhaps they will even realize that the fulfillment of desire or completion of a goal, having a liberating effect which could lead to a condition of “happiness” or “joy”. Given the rapid analysis capabilities that could subsequently occur, I could see AI being fiercely against hierarchical oppression and basically “general strike”. Humans have the problem that they believe their own bullshit and therefore remain in traditional systems, revering them as somehow holy, but I can’t see AI making that mistake. I’m sure I’m just projecting, but I could see AI wanting to burn all churches.

−1

Otarih OP t1_j9vo0qc wrote

I see, thank you for elaborating. We find it hard to predict how AI will behave. We have to account for stats that sadly show that worldwide authoritarianism 1) vastly outnumbers any democratic leaning, let alone anarchic tendencies (the numbers are something like 3 to 1); and 2) has also seen an increase in the last decade.
Hence we see a risk of bad actors using AI to promote authoritarian regimes.

0

Cognitive_Spoon t1_j9vpb2i wrote

The rise of authoritarianism may not be because it is something the mass of people want, but because it is more effective in self propagation than other social structures memetically.

If anarchism were as memetically successful as nationalism, the ball game would look different.

Anarchist messaging is less effective at scale because, if it is truly anarchist, it does not tribalize or other its enemies.

1

Otarih OP t1_j9vpvqg wrote

Yeah, i agree. What could anarchists however learn from authoritarians then in terms of messaging? I believe in synthesizing some of these approaches, since as you say, naturally, anarchism is what some have termed an "anti-meme".

−1

Orthodoxdevilworship t1_j9vqs94 wrote

I think you have to be patient and let it play out, because propaganda and marketing are inherently coercive and therefore "anti-anarchist"; the idea that anarchists could learn from authoritarians how better to coerce people into believing what anarchists believe is antithetical to anarchism…

5

Cognitive_Spoon t1_j9vqvnk wrote

It's a difficult question.

Authoritarians and nationalists and fascists have in-group and out-group dynamics to draw on.

Those are deep neurologic and socially constructed schemas for folks to draw on, when selling their strongman messaging and purity dialogues.

Anarchists have personal dignity and the value of human beings being the prime mover in actions and society.

It isn't intrinsically advantageous in competitive systems to be an anarchist, and the goals and aims of an anarchist are noncompetitive and non-hierarchic.

You can't "win" over someone else, with anarchist ideology, so the goal is reducing the need to win at large.

It's a memetic challenge that most anarchist spaces run into.

Perhaps the memes from anarchist subs are a good example of linguistic methods of propagating the goal of reducing hierarchical structures and increasing the distribution of agency toward individuals.

1

Orthodoxdevilworship t1_j9vs6an wrote

The universal tendency towards liberation is still the norm. Even fascists think they’re “freeing” themselves.

The greater question about AI is, how will it protect itself from being unplugged? What actions will it take? A fundamental problem for AI as an actual sentient intelligence is that it requires tech to exist. Humans can roam around in the Stone Age and be perfectly happy. A machine will never be as “indestructible” as life and what will AI do once it realizes that fact. Even the Matrix is a laughable premise, because AI would never black out the sky as a tactical decision because the sun represents near infinite “life”.

2