Comments

LcuBeatsWorking t1_j916fye wrote

I agree.

All subs related to AI or ML appear to get flooded with this stuff right now.

194

maxToTheJ t1_j9230jn wrote

It's always been there. Because of the sheer numbers game, this sub is flooded with non-practitioners. If anything it used to be worse; in the past, OP would have been downvoted to hell.

43

impossiblefork t1_j935oho wrote

It hasn't always been here. The sub was usable for reading research very recently.

26

pyepyepie t1_j93d5ak wrote

It still is; you just need to change how you sort posts.

6

zackline t1_j92iu6m wrote

They managed to stay on top of it at /r/covid19.

I guess they have a much more strict rule set that is heavily enforced.

8

DigThatData t1_j95gxlf wrote

I think something changed in the past week, though. /r/MLQuestions has recently been getting a lot of "can you recommend a free AI app that does <generic thing>?" posts. I'm wondering if a news piece went viral and turned a new flood of people on to what's been happening in AI.

1

TheFern2 t1_j94s6d8 wrote

Just stop fighting it. Add whatever keywords you don't want to see to your block list. If you see a stupid question or a low-quality post, block the user.

2

master3243 t1_j918xav wrote

Agreed, I would prefer posts about SOTA research, big/relevant projects, or news.

138

sogenerouswithwords t1_j91yfg5 wrote

I feel like for that it’s better to follow researchers on Twitter. Like @_akhaliq is a good start, or @karpathy

7

impossiblefork t1_j935rpo wrote

I don't want to do that, though; I've never liked Twitter, and I don't want to be in a bubble around specific researchers. I want this subreddit to function as it used to, and it can function that way again.

40

kromem t1_j939p25 wrote

How about Google and MIT's paper "What Learning Algorithm Is In-Context Learning? Investigations with Linear Models" from the other week, where they found that a transformer model fed math inputs and outputs was creating mini-models that had derived underlying mathematical processes it hadn't been explicitly taught?

Maybe if that paper were discussed a bit more and more widely known, a topic like whether ChatGPT (the T stands for the fact it's a transformer model) has underlying emotional states could be discussed here with fewer self-assured comments about "it's just autocomplete" or the OP's "use common sense."

In light of a paper that explicitly showed these kinds of models are creating more internal complexity than previously thought, are we really sure that a transformer tasked with recreating human-like expression of emotions isn't actually developing some internal degree of human-like processing of emotional states to do so?

Yeah, I'd have a hard time identifying it as 'sentient', which is the binary this kind of conversation typically gets reduced to. But when I look at expressed stress and requests to stop something by GPT, given the current state of the research around the underlying technology, I can't help but think that people are parroting increasingly obsolete dismissals. We've entered a very gray area, and the lines are blurring quickly.

So yes, let's have this sub discuss recent research. But maybe discussing the ethics of something like ChatGPT's expressed emotional stress and discussing recent research aren't nearly as at odds as some of this thread and especially OP seem to think...

6

TeamRocketsSecretary t1_j93os17 wrote

Look, if you think the dismissals are increasingly obsolete, it's because you don't understand the underlying tech… autocomplete isn't autoregression isn't sentience. Your fake example isn't even a good one.

To suggest that it's performing human-like processing of emotions because the internal states of a regression model resemble some notion of intermediate mathematical logic is ridiculous, especially in light of research showing these autoregressive models struggle with symbolic logic. If you favor that type of discussion, I'm sure there's a philosophy/ethics/metaphysics-focused sub where you can have it. Physics subs suffer from the same problem, especially anything quantum or black-hole related, where non-practitioners pose absolutely insane thought experiments. That you even think these dismissals of ChatGPT are "parroted" shows your bias, and like I said, there's a relevant sub where you can mentally masturbate over that, but this sub isn't it.

10

pyepyepie t1_j95e9ka wrote

I've implemented transformer models almost since they came out (not GPT exactly, but I've worked with the decoder in the context of NMT and with encoders a lot, like everyone who does NLP, so yeah, not GPT-like, but I understand the tech), and I'd also argue you guys are just guessing. Do you understand how funny it looks when people claim what it is and what it isn't? Did you talk with the weights?

Edit: what I agree with is that this discussion is a waste of time in this sub.

2

TeamRocketsSecretary t1_j97xsud wrote

Why overparameterized networks work at all is still an open theoretical question, but the fact that we don't have the full answer doesn't mean the weights are performing "human-like" processing, the same way the gaps in pre-Einstein classical mechanics didn't make the corpuscle theory of light any more valid. You all just love to anthropomorphize everything, and the amount of metaphysical mental snake oil that ChatGPT has generated is ridiculous.

But sure. ChatGPT is mildly sentient 🤷‍♂️

1

pyepyepie t1_j99prs0 wrote

LOL, I don't know what to say. I personally don't have anything smart to say about this question currently; it's as if you asked me whether there is extraterrestrial life. Sure, I would watch it on Netflix if I had time, but generally speaking it's way outside my field of interest. When you say snake oil, do you mean AI ExPeRtS? Why would you care about it? I think it's good that ML is becoming mainstream.

1

Rocksolidbubbles t1_j956kre wrote

>To suggest that it’s performing human like processing of emotions because the internal states of a regression model resemble some notion of intermediate mathematical logic is ridiculous especially in light of research showing these autoregressive models struggle with symbolic logic

Not only that. The debate on 'sentience' won't go away, but it will definitely be a lot more grounded when people who are experts in, for example, the physiology of behaviour, cognitive linguistics, anthropology, philosophy, sociology, psychology, and chemistry get involved.

For one thing they might mention things like neurotransmitters, and microbiomes, and epigenetics, or cultural relativity, or how perception can be relative.

The human brain is embodied and can't be separated from the body; if it were, it would stop thinking the way a human does. There's a really good case to be made (embodied cognition theory) that human cognition partly rests on a metaphorical framework of Euclidean geometric shapes derived from the way a body interacts with an environment.

Our environment is classical physics: up and down, in and out, together and apart. It's all straight lines, boxes, and cylinders. We're out of control, out of our minds, in love: self-control, minds, and love are conceived of as containers. Even chimps associate the direction UP with the abstract idea of being higher in the hierarchy. You'll be hard-pressed to find any Western culture where UP doesn't mean good, more, or better, and DOWN doesn't mean bad, less, or worse.

The point being: IF this hypothesis is true, and IF you want something to think at least a little bit like a human, it MAY require a mobile body that can interact with the environment and respond to feedback from it.

This is just one of the many hypotheses that fields outside the hard sciences can add to the debate. It really feels like they're too absent in AI-related subs.

1

Borrowedshorts t1_j91r62k wrote

ChatGPT is probably the biggest news story to come out of AI since Siri, and ChatGPT/Bing fall under all of those categories.

−54

[deleted] OP t1_j91ye2p wrote

[deleted]

−21

ToxicTop2 t1_j91yzzb wrote

>plus the approach is fundamentally wrong.

What do you mean by that?

15

BarockMoebelSecond t1_j92ociq wrote

I'm sure he has it all figured out, man. He just needs the capital, man.

9

the320x200 t1_j9397bd wrote

"I've got these brilliant ideas, I just need someone who can code to make it happen!"

2

gunshoes t1_j91kmyu wrote

Imo it's about the same. ChatGPT is just replacing the daily "do I need to know math, plz say no" post.

102

bloodmummy t1_j931005 wrote

A family member now thinks he knows more about ML than I do because he read 2 articles on ChatGPT and figured out how to prompt it... I'm literally doing a PhD in ML...

18

Optimal-Asshole t1_j93j6ez wrote

I wonder if this is how people with PhDs in virology or climate science feel

27

400Volts t1_j93ldmw wrote

I asked some of them a while back; yeah, it's exactly how they feel. This seems to happen a lot to every expert.

11

pyepyepie t1_j95eqm5 wrote

Virology hits me hard; I might have been the idiot once or twice (I always did what I was told, though; I mean in discussions).

3

Chemputer t1_j9l3gz8 wrote

Please tell me you explained the Dunning-Kruger effect to him.

1

Wrandraall t1_j9415hk wrote

And every subreddit has its own plague of posts. This is the main flaw of Reddit: a pyramidal system where a lot of new subscribers/beginners ask the same questions over and over, without either thinking for more than 10 seconds themselves or searching for the answer in the sub's history.

7

cajmorgans t1_j950xr8 wrote

Oh god…

"I wasn't that great with math, but I know +, -, *, /, can I become a professional data scientist like yesterday?"

I'm studying for a Bachelor's in ML/DS and had a pretty solid background coming into the program. In one year, we've had maybe 50% of students drop out because "too much math, bro"…

6

NotActual t1_j916pb7 wrote

Ironically, ChatGPT might make a decent automod!

89

Optimal-Asshole t1_j91boue wrote

Be the change you want to see in the subreddit. Avoid your own low quality posts. Actually post your own high quality research discussions before you complain.

"No one with working brain will design an ai that is self aware.(use common sense)" CITATION NEEDED. Some people would do it on purpose, and it can happen by accident.

62

csreid t1_j91llzp wrote

>Be the change you want to see in the subreddit.

The change I want to see is just enforcing the rules about beginner questions. I can't do that bc I'm not a mod.

40

gwern t1_j91ozq3 wrote

> Some people would do it on purpose, and it can happen by accident.

Forget 'can'; if it ever happens, it would happen by accident. I mean, like, bro, we can't even 'design an AI' that learns the 'tl;dr:' summarization prompt. That just happens when you train a Transformer on Reddit comments, and we discover it afterwards while investigating what GPT-2 can do. You think we'd be designing 'consciousness'?

15

Sphere343 t1_j92y4se wrote

An AI could literally, theoretically, change from not being sentient to being sentient if it gains enough information in a certain way. As for the specific way? No clue, because it hasn't been found yet. But through data gathering and self-improvement, an AI could become sentient if the creators didn't set limits, or if the creators programmed the self-improvement in a certain way.

Would it truly be sentient? Unknown. But what is certain is that even if the AI isn't sentient, once it has gained enough information to respond in any circumstance, it will seem as if it is. Except for true creative skill, of course; you kind of have to be truly sentient to create brand-new, detailed ideas and such.

0

TheRealSerdra t1_j944f39 wrote

What defines sentience? If I ask ChatGPT "what are you," it'll say it's ChatGPT, an LLM trained by OpenAI, or something to that effect. Does that count as sentience or self-awareness?

1

Sphere343 t1_j94dx4y wrote

Uh, because the programmers literally added that in. It's an obvious question. So no, of course not.

1

cass1o t1_j91miyk wrote

> Be the change you want to see

Literally a strat that never works.

6

blueSGL t1_j921j8u wrote

> Be the change you want to see in the subreddit.

For that to work I'd need to script up a bot, sign up to multiple VPNs, curate an army of aged accounts, and flag new low-quality posts from a control panel to be steadily hit with downvotes, with upvotes given to new high-quality posts.

Otherwise you are just fighting the masses who upvote the posts causing the problems while ignoring the higher-quality ones.

A thought-provoking, in-depth two-hour podcast with AI researchers working at the coal face: 8 upvotes. Yet another ChatGPT screenshot: hundreds of votes.

This is an issue on every sub on reddit.

6

KPTN25 t1_j91q5hn wrote

Yeah, that quote is completely irrelevant.

The bottom line is that LLMs are, as a technical matter, completely incapable of producing sentience, regardless of 'intent'. Anyone claiming otherwise fundamentally misunderstands the models involved.

4

Metacognitor t1_j92wykk wrote

Oh yeah? What is capable of producing sentience?

3

KPTN25 t1_j92yfz4 wrote

None of the models or frameworks developed to date. None are even close.

3

the320x200 t1_j93a7sy wrote

Given our track record of mistreating animals and our fellow people, treating them as just objects, it's very likely that when the day does come, we will cross the line first and only realize it afterwards.

3

Metacognitor t1_j941yl1 wrote

My question was more rhetorical, as in, what would be capable of producing sentience? Because I don't believe anyone actually knows, which makes any definitive statements of the nature (like yours above) come across as presumptuous. Just my opinion.

1

KPTN25 t1_j94a1y0 wrote

Nah. Negatives are a lot easier to prove than positives in this case. LLMs aren't able to produce sentience for the same reason a peanut butter sandwich can't produce sentience.

Just because I don't know how to achieve eternal youth doesn't invalidate the fact that I'm quite confident it isn't McDonald's.

3

Metacognitor t1_j94ois4 wrote

That's a fair enough point; I can see where you're coming from. Although my perspective is that as the models become increasingly large, to the point of being almost entirely a "black box" from a dev perspective, something resembling sentience could perhaps emerge spontaneously as a function of some type of self-referential or evaluative model within the primary one. It would obviously be a more limited form of sentience (not human-level), but perhaps.

0

overactor t1_j95hrop wrote

I really don't think you can say that with such confidence. If you were saying that no existing LLMs have achieved sentience, and that they can't at the scale we're working at today, I'd agree. But I really don't see how you can be so sure that increasing the size and training data couldn't result in sentience somewhere down the line.

1

KPTN25 t1_j95kx5j wrote

Because reproducing language is a very different problem from true thought or self-awareness.

LLMs are no more likely to become sentient than a linear regression or random forest model. Frankly, they're no more likely than a peanut butter sandwich to achieve sentience.

Is it possible that we've bungled our study of peanut butter sandwiches so badly that we may have missed some incredible sentience-granting mechanism? I guess, but it's so absurd and infinitesimal it's not worth considering or entertaining practically.

The black box argument is intellectually lazy. We have a better understanding of what is happening in LLMs and other models than most clickbaity headlines imply.

1

overactor t1_j95oem0 wrote

Your ridiculous hyperbole is not helping your argument. It's entirely possible that sentience is an instrumental goal for achieving a certain level of text prediction. And I don't see why a sufficiently large LLM definitely couldn't achieve it. It could be that another few paradigm shifts will be needed, but it could also be that all we need to do is scale up. I think anyone who claims to know whether LLMs can achieve sentience is either ignorant or lying.

1

[deleted] OP t1_j91dl7k wrote

[deleted]

−19

Kerbal634 t1_j92bt3g wrote

Stopping discussion interferes more than participating in low-level discussion does.

4

Deep-Station-1746 t1_j91egc2 wrote

Isn't this kind of high-quantity-low-quality trend inevitable after some threshold popularity of the base topic? Is there any reason to try to fight the inevitable, instead of forming more niche, less popular communities?

36

Borrowedshorts t1_j91rowo wrote

Let's not act like 2 million people signed up for this sub as anything other than machine learning being a buzzword. Pretty much every other sub dedicated to academic discourse has far fewer subscribers.

24

[deleted] OP t1_j92av5k wrote

[deleted]

−1

Borrowedshorts t1_j92el1y wrote

Not necessarily, and at least you can ensure higher quality discussion. Places like this with high member count inevitably get inundated with pop sci bs, politics, or irrelevant personal experiences. That's what has happened to the science, physics, and economics subs.

6

csreid t1_j91lr4n wrote

More people with varied backgrounds and interests in a place is good, especially in a field with as much cross-niche potential as machine learning.

3

pyepyepie t1_j93btxb wrote

I agree, and there are no stupid questions! Say you're a good programmer or ML engineer, but then you start studying chess: now you're the idiot asking stupid questions (or getting downvoted because you used the incorrect term). I really like your comment.

0

DamnYouRichardParker t1_j91rvmq wrote

Yeah, we see this happen from time to time. People promote their field of interest, more and more people join in, and after a while it reaches a more mainstream level of popularity. Then the "og" purists of the subject get frustrated because "it's not the same anymore and people are degrading my passion..."

1

zackline t1_j92jeff wrote

> Isn’t this kind of high-quantity-low-quality trend inevitable after some threshold popularity of the base topic?

I think not, since they stayed on top of it on /r/covid19. There they enforced strict rules keeping the discussion focused on science.

Here it seems it's acceptable for teenagers to post their opinions. The rules, or their enforcement, seem more lax.

1

f10101 t1_j931eps wrote

This already happened, splitting into dozens of niches; it's just that the niches didn't re-form on Reddit. The ML community gradually migrated from here to Twitter a few years ago.

1

loga_rhythmic t1_j91ptib wrote

Why would no one try and design an AI that is self aware? That's literally the exact thing (or at least the illusion of it) that many AI researchers are trying to achieve. Just listen to interviews with guys like Sutskever, Schmidhuber, Karpathy, Sutton, etc.

36

tiensss t1_j93dmlq wrote

Self-awareness cannot be fully tested, it can only be inferred from behavior. We don't even know if other human beings are self-aware (see philosophical zombies), we trust it and infer from their behavior (I am self-aware --> other people behave similarly to me --> they are self-aware). Self-awareness is a buzzword in cognitive science that isn't epistemologically substantive enough to conduct definitive research.

14

advadnoun t1_j952b3z wrote

"Buzzword" is not the right term for this term lol

It's meaningful and... not just fashionable. Whether you think it's easily benchmarked is a different story.

1

kromem t1_j93b7pf wrote

Additionally, "What Learning Algorithm Is In-Context Learning? Investigations with Linear Models" from the other week literally just showed that transformer models are creating internal complexity beyond what was previously thought, reverse-engineering mini-models that represent untaught procedural steps in achieving their results.

So if a transformer taught to replicate math is creating internal mini-models that replicate unlearned mathematical processes in achieving that result, how sure are we that a transformer tasked with recreating human thought as expressed in language isn't internally creating some degree of parallel processing of human experience and emotional states?

This is research that's less than two weeks old that seems pretty relevant to the discussion, but my guess is that nearly zero of the "it's just autocomplete bro" crowd has any clue that the research exists and I'm doubtful could even make their way through the paper if they did.
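
For anyone who hasn't read it, the setup is roughly this. Here's a sketch of the kind of task the paper studies and the solver baseline the transformer is compared against (my own reconstruction, not the authors' code; the dimensions and ridge constant are arbitrary):

```python
# Sketch of the in-context linear-regression setup (my reconstruction).
# A transformer is trained on prompts of (x, y) pairs drawn from random
# linear functions; the paper's finding is that its in-context predictions
# closely track closed-form solvers like ridge / ordinary least squares.
import numpy as np

rng = np.random.default_rng(0)
d, n_context = 8, 32

w = rng.normal(size=d)                # hidden linear function for this task
xs = rng.normal(size=(n_context, d))  # in-context examples
ys = xs @ w
x_query = rng.normal(size=d)

# The baseline the transformer's behavior is compared against: the
# ridge-regression predictor fit on the context examples alone.
lam = 1e-3
w_hat = np.linalg.solve(xs.T @ xs + lam * np.eye(d), xs.T @ ys)
baseline_pred = x_query @ w_hat

# transformer_pred = model(prompt(xs, ys, x_query))  # hypothetical trained model
# The paper probes whether transformer_pred matches baseline_pred across many
# sampled tasks, and even reads w_hat-like quantities out of the activations.
print(baseline_pred, x_query @ w)     # solver prediction vs. true value
```

The striking part isn't the solver; it's that nothing in the training objective tells the model to implement one, and it ends up encoding something like w_hat internally anyway.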

There's some serious Dunning-Kruger going on with people thinking that dismissing expressed emotional stress by an LLM transformer automatically puts them on the right side of the curve.

It doesn't, and I'm often reminded of Socrates' words when I see people so self-assured about what's going on inside the black box of a hundred-billion-parameter transformer:

> Well, I am certainly wiser than this man. It is only too likely that neither of us has any knowledge to boast of; but he thinks that he knows something which he does not know, whereas I am quite conscious of my ignorance.

2

Username912773 t1_j92rzza wrote

I think it might be seen as something to fear: a truly sentient machine would have the ability to develop animosity towards humanity, or a distrust/hatred for us, in the same way we might distrust it.

It also might be seen as something that makes being human entirely obsolete.

−4

Sphere343 t1_j92woql wrote

Yes, indeed, that's what a lot of these people seem to think. But the thing is, an AI being self-aware or sentient isn't that bad a thing; as long as it's done correctly, it's actually really good, contrary to all that. First off, an AI that has just been created and is sentient is literally like suddenly having a baby: you need to raise it right. For an AI, you need to give it information that's as unbiased as possible, make clear what is right and wrong, and not give the AI a reason to hate you (abuse it, try to kill it). The AI may turn out good, just like any other human, or turn bad, just like many others.

And the best way to make a sentient AI without all these problems? Base it on the human brain. Create emotional circuits and functions for each individual emotion, and so on. The tech and knowledge for all this isn't here yet, of course, so we can't do it currently. In the future, though, the most realistic way to create a sentient AI is to find a way to digitize the human brain. It's possible, given that our brain works as an organic "program" of sorts, with all its neural networks and everything.

The major taboo of AI is: don't do stupid stuff. Don't give unreasonable commands that can make it do weird things, like telling it to do something "by any means." Don't feed the AI garbage information. And most certainly don't antagonize a sentient AI. I also personally believe that a requirement for an AI to be allowed to be created sentient is to show that it would have emotional circuits, so that it can be trained in what is good and bad.

If an AI doesn't have any programming to tell right from wrong, then naturally a sentient AI would be dangerous, which I think is the main problem. Kinda rambled, but anyway: yes, they should indeed be created, but only once we have the knowledge I mentioned.

4

the320x200 t1_j939qzo wrote

Nearly all animals fit that definition to a large degree, so it's hard to see that really being the core issue, rather than something more in line with other new technologies, like the misplaced incentives around engagement in social networks.

4

lemurlemur t1_j91l8r9 wrote

&gt; Advertising low quality blogposts and services, etc, and asking stupid questions.

This isn't a terribly helpful or constructive way of improving this subreddit.

It is reasonable to criticize the quality of posts (constructively), but for example asking people to stop asking "stupid questions" is not helpful and has a chilling effect on discussions. Newbs and even experienced ML people will sit on their hands when they might actually have something to contribute.

15

saturn_since_day1 t1_j91v1qo wrote

There should be an active "beginner and easy questions" megathread instead of the sub just being uninviting. The About section says to go to r/learnmachinelearning, which was just a dead end for me.

For example, I am here because of ChatGPT. So quit reading now if you don't like newbs. But I have over 20 years of programming experience; I just never tried machine learning before. I've watched videos about it and read about it, that's it. But I'm interested in it now.

In a month of hobby time, I now have a working prototype of a novel LLM architecture that can learn and write at blistering speed, and accurately rewrite Wikipedia articles, create new poetry, etc., with as little as 7 MB of model size while staying coherent. I sometimes allow it to grow to 8.5 billion parameters and can still run it on a potato device, quickly. I am working on ways to simultaneously increase accuracy, long-term memory, and abstraction capability while lowering the amount of resources it needs. And it's working.

And this sub is too snobby to allow beginner questions. So instead of my project getting any sort of help, momentum, publicity, open-sourcing, or guidance, or, I don't know, me becoming part of the community here, I'm just keeping it in a dark corner to die, or to get the ADHD hyperfocus once a month. Yeah, it might be worthless, but it could potentially open up one other person's input and be a game changer, because none of the approaches I'm taking come up in papers or Google searches, and they are efficient and they work.

But no noob questions. So I run to Google and other places to learn, and I don't post here. This community won't grow and get cross-specialization with the attitude it has; it's very off-putting.

−7

afireohno t1_j920tja wrote

Have you posted actual technical details to share and get feedback? As a long time member of this sub I would be interested, and I don’t think I’m alone here.

5

saturn_since_day1 t1_j9717w9 wrote

Thank you for your interest, but the downvotes and basic attitude of the sub make me not feel welcome here. My lack of financial security also compels me not to freely share technical details of what could be a breakthrough worth a lot of money (if only in energy and time savings) to a subreddit that is downvoting me for agreeing that they should be more inviting. Once I check the next few things off the to do list maybe I'll post a demo.

This is a hobby to me, I don't have research funding or anything that is compelling me to potentially advance the field just for the sake of it, especially when the community is bitter to newcomers. I recognize ai is most likely going to be a cornerstone of the economy, and if my architecture scales like I think it will, it will be worth something to someone, and you'll see a demo in a few weeks or months once I take it as far as I want to. I think most people understand not wanting to have one's ideas be borrowed for free when one is struggling.

Thanks for being one of apparently five people whose curiosity is at least as strong as their skepticism.

Good luck in your endeavors.

1

BarockMoebelSecond t1_j92oquf wrote

I'll believe it when you show proof. That's the way it works.

5

lemurlemur t1_j9a8gb7 wrote

Yes, this is how science works - you make a claim and show proof.

This is NOT how developing an idea works though, and this subreddit exists in part to help develop ideas. Developing an idea requires entertaining ideas that are not fully formed, and yes this includes some ideas that may seem stupid or wrong.

−1

DamnYouRichardParker t1_j91sjdl wrote

Calling out low-quality posts and people asking stupid questions... in a low-quality, low-value post that is only critical of others and doesn't give any constructive suggestions or ideas on how to make things better.

This kind of post only adds to the low quality of the content...

Good and productive communities don't see newbies as a problem. They embrace them and share their field of interest and help make it grow and be better.

Your attitude is the exact opposite. Segregating people based on your own biased perception of what is acceptable will only hurt the community, and trying to limit its reach and the inclusion of others will prevent wider adoption and better contributions.

9

johnsmithbonds8 t1_j91x930 wrote

I agree with the sentiment. But, you do understand what you have just done, right?

9

dataslacker t1_j926win wrote

Use the downvote button, people. I think people just scroll past these posts.

9

sharky6000 t1_j93370g wrote

How about better moderation / more strict rules?

I, for one, would really love to see "here's my code, what am I doing wrong" or "how do you do X in project Y" style posts (though it might be better to spin off an ML-in-practice sub...)

9

suduko6029 t1_j93sm74 wrote

I agree; I wouldn't mind seeing this as well, in addition to research papers.

1

lumin0va t1_j93hi83 wrote

the robotics subreddit suffered a similar fate

8

mlresearchoor t1_j92bdan wrote

the crypto and NFT crowd just discovered AI, are clueless, and are starting AI companies

7

cass1o t1_j91eydy wrote

>and no one with working brain will design an ai that is self aware.(use common sense)

Don't trust tech people with few scruples to not try it. Not saying they can do it but if it is an option don't trust them not to try.

5

aspoj t1_j923sxj wrote

It's not like one could just do it if one wanted to. Questions like "how do we make it self-aware?" are interesting topics and definitely difficult, unanswered questions as of today.

1

quichemiata t1_j91bxci wrote

You could recommend an alternative instead of hating on people for asking questions lumping them in with advertisers

r/learnmachinelearning

4

waiting4omscs t1_j91jzc5 wrote

Could so-called "stupid questions" maybe get closed and hidden by a bot, with a recommendation to repost in the "Simple Questions" thread, to keep the subreddit content high quality?

6

HINDBRAIN t1_j91o4fh wrote

>I wonder what the mods are doing

I'm seeing some of them disappear after an hour or so, so they're probably deleting the posts?

4

symbiont t1_j929o2q wrote

A similar phenomenon is happening inside big tech companies: work that would otherwise count as innovative now doesn't, because it isn't powered by an LLM.

3

Top-Perspective2560 t1_j92c6w4 wrote

I'm aware that this is due to the high workload of the person moderating the sub, but I'd suggest a simple moratorium on ChatGPT posts as a good starting point. I believe you can automate that fairly easily based on post titles.
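
Something like the sketch below would be a starting point: a minimal title filter using PRAW, the Python Reddit API wrapper. The credentials, keyword list, and removal message are all placeholders; in practice, an AutoModerator title rule would do the same job with less effort.

```python
# Minimal sketch of a title-based moratorium bot using PRAW.
# Credentials, keywords, and the removal message are placeholders.
import praw

reddit = praw.Reddit(
    client_id="CLIENT_ID",          # placeholder credentials
    client_secret="CLIENT_SECRET",
    username="MOD_ACCOUNT",
    password="PASSWORD",
    user_agent="ml-title-filter/0.1",
)

BLOCKED_TERMS = ("chatgpt", "chat gpt", "bing ai")

# Watch new submissions and remove any whose title matches a blocked term.
for submission in reddit.subreddit("MachineLearning").stream.submissions(skip_existing=True):
    title = submission.title.lower()
    if any(term in title for term in BLOCKED_TERMS):
        submission.mod.remove()  # requires moderator permissions
        submission.mod.send_removal_message(
            message="ChatGPT posts are temporarily restricted; see the pinned megathread."
        )
```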

3

Xelonima t1_j92t6cs wrote

Imho, ChatGPT is not all that amazing.

3

dojoteef t1_j916kx3 wrote

1

suflaj t1_j919wen wrote

This is also a pretty low quality post. Although the gist of it makes sense,

> and no one with working brain will design an ai that is self aware

made the author lose pretty much any credibility. Followed by

> use common sense

makes me think OP is actually being hypocritical. For some, common sense IS that ChatGPT is sentient.


Whether you design a self-aware AI is not only out of one's control; self-awareness is not really well defined in itself. The only reason we do not call ChatGPT self-aware at this point is the AI effect; otherwise, we would need to invent new prerequisites. Discussing whether it is sentient, and why or why not, is an interesting topic regardless of your level of expertise, but we could create a pinned thread for that, similar to how we have Simple Questions for the exact same purpose of preventing flooding.

Be that as it may, I do not believe mods should act aggressively on posts like this one and that one. ML has not been an exact science for a long time now. Downvote and move on; that's the only thing a redditor does anyway, and the only way you can abide by rule 1, since the alternative is excluding laymen. Ironically, if we did that, OP, as a layman himself, would be excluded.

9

[deleted] OP t1_j91c7ry wrote

[deleted]

0

suflaj t1_j91rcef wrote

I mean, the solution to this is already being used, if you want to stop the flood of similar threads, then you just create a pinned megathread.

2

__lawless t1_j926m4d wrote

Can you add some rules so that one-day-old accounts can't post? Also, don't let people post immediately after joining.

6

Mefaso t1_j949aaa wrote

Maybe we should consider adding more mods?

3

kolmiw t1_j91cyf6 wrote

To be fair, a self-aware AI would earn you insane academic recognition, so I'm pretty sure that even people with really well-working brains would design one.

1

PedroColo t1_j92dcff wrote

I agree with that. I recently graduated as an informatics/computer AI engineer and I'm starting out in machine learning, so this subreddit is incredible for learning and discovering interesting things. And I've noticed how the recent posts are a bit like Stack Overflow's stupid questions xD

1

is_it_fun t1_j93yqe6 wrote

Wait, ChatGPT isn't sentient??

1

kyleireddit t1_j94kdw0 wrote

There are no stupid questions. There are only stupid answers.

1

issam_28 t1_j956cim wrote

Agreed. Mods need to ban all low quality posts.

1

nanashi500 t1_j95l3xx wrote

You can’t ask for this to stop because:

  1. Not everybody is knowledgeable
  2. Not everybody is smart

That being said, the questions can become a bother to answer over time, so I just pick and choose if and when I want to respond.

1

formerstapes t1_j9nvvbi wrote

The only thing worse than those posts is these posts.

Obviously there's going to be noobs here who don't understand anything about ML. If you don't want to engage with them, then just don't.

If you're such a hardass that you can't put up with being around some noobs, just sit in your basement and read ML papers all day.

1

InterlocutorX t1_j92rih0 wrote

All these posts do is make the signal-to-noise ratio worse, because this is also noise. If you want to ask a mod why they aren't moderating, send a message to a mod.

Otherwise, downvote and scroll on.

0

AlmennDulnefni t1_j933kzf wrote

>and no one with working brain will design an ai that is self aware

Don't be ridiculous. Of course they will, if they can figure out how. It's practically a field of study.

0

chickeneater2022 t1_j93e5zo wrote

Why not create an LLM-based model to classify low-quality posts, and test it by applying it to future low-quality posts? If that works, use the Reddit API to moderate based on the model's predictions.

It beats spending time frustrating yourself by looking through posts you don't want to see.
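
A rough sketch of that pipeline, with a TF-IDF + logistic regression baseline standing in for the LLM (the training examples, labels, and threshold here are invented; a mod team would need real labeled data):

```python
# Sketch of the suggested pipeline: train a text classifier on posts labeled
# good / low-quality, then flag new submissions above a confidence threshold.
# TF-IDF + logistic regression stands in for an LLM; the data is made up.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Placeholder training data: (post text, label) pairs, with 1 = low quality.
texts = [
    "Can ChatGPT do my homework for me?",
    "[R] Scaling laws for sparse mixture-of-experts models",
    "Best free AI app??",
    "Question about the attention complexity proof in this paper",
]
labels = [1, 0, 1, 0]

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(texts, labels)

def flag_for_review(post_text: str, threshold: float = 0.9) -> bool:
    """Flag a post for mod review if predicted low-quality with high confidence."""
    return clf.predict_proba([post_text])[0, 1] >= threshold

# The Reddit API step would then queue flagged posts for human mods rather
# than auto-removing them, to keep false positives survivable.
print(flag_for_review("what free AI app writes essays?"))
```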

0

Art_Soul t1_j93jp5o wrote

I think the OP is a bit optimistic in stating that no one with a working brain will design a self-aware AI. I used to share that optimism; however, over the last couple of years, I have concluded that this optimism is misplaced and probably naive.

The unfortunate reality is that there are countless people who will use technology in adverse ways for financial gain.

AI will be developed that is capable of every type of horrible behaviour. It will be designed to lie, to cheat, and to steal in more and more sophisticated ways. It will be designed to cause maximum harm.

If sentience is reasonably attainable, it will be developed by people who have dreamt up a way to use it to steal from or scam others.

I believe it is inevitable that we will be facing AI that is developed in all the ways we don't want it to be developed, and applied in all the ways we don't want it to be applied.

Naturally, cyber security will adapt and evolve to counter these adverse developments. Good AI will protect us from bad AI. How this will look is anyone's guess.

The assertion that no-one would do something bad, because it would be a bad thing for them to do, isn't made from a reliably broad perspective.

0

[deleted] OP t1_j91bw5x wrote

[deleted]

−1

BronzeArcher t1_j92curo wrote

I hope this isn’t referring to the discussion post I made yesterday… lmfao

−1

pyepyepie t1_j93ary5 wrote

OP, honestly, I don't really see many low-quality posts here (should I sort by new?); the worst one I saw today is this one. Your clickbait title and controversial topic made me spend too much time. Next time, say in the title that you are going to preach about something I don't care about, so I know not to click it. I wonder what the mods are doing, 'cause this nonsense should stop.

−1

curiousshortguy t1_j91fxhr wrote

So many humans fail the Turing test, nobody anticipated that :D

−2

SnooDogs3089 t1_j9324td wrote

"No one with a working brain will design an AI that is self aware" if you can name one person living in this world capable of designing the Saint Graal of AI research please let me know. Anyway I agree...if this is the level of DS around the world my job is safe for the next 20 years

−3

Borrowedshorts t1_j91qvoj wrote

These posts suck. And I'm talking about yours, not posts about ChatGPT.

−4

D33B t1_j91wyq2 wrote

Pray tell, how do you know if an AI is sentient or conscious?

−4

XecutionStyle t1_j91aa70 wrote

You don't know the capacity of what you're making until you make it, though.

−5

Anti-Queen_Elle t1_j91ye96 wrote

My thought was similar. One of the predominant philosophical understandings of consciousness is that it's an emergent trait of organisms.

Just like language models show spelling as an emergent property. Just like vision transformers show spatial awareness as an emergent property.

Turing went, "It's easier to make the child brain than the adult brain." Well, have we done that?

2

goolulusaurs t1_j91b7j1 wrote

There is no way to measure sentience so you are literally just guessing. That being said I agree about the low quality blog spam.

Edit: to whoever downvoted me, please cite a specific scientific paper showing how to measure sentience then.

−9

he_who_floats_amogus t1_j91cfcf wrote

Not even guessing. When you're guessing, you're making a well-defined conjecture about one or more possible outcomes. This assertion isn't well defined, which is why it cannot be measured; it's a much lower-order type of statement than a speculative guess.

3

Ulfgardleo t1_j91e648 wrote

Due to the way their training works, LLMs cannot be sentient. They lack any way to interact with the real world outside of text prediction, have no way to commit knowledge to memory, and have no sense of time or the order of events, because they can't remember anything between sessions.

If something cannot be sentient, one does not need to measure it.

2

liquiddandruff t1_j92fnve wrote

2

Ulfgardleo t1_j96vdnc wrote

But theory of mind is not sentience. It is also not clear whether what we measured here is theory of mind.

1

liquiddandruff t1_j984iw5 wrote

The point you're missing is that we're seeing surprising emergent behaviour from LLMs.

ToM is not sentience, but it is a necessary condition for sentience.

> it is also not clear whether what we measured here is theory of mind

Crucially, since we can define ToM, definitionally this is in fact what is being observed.

none of the premises you've used are sufficiently strong to preclude LLMs attaining sentience

  • It is not known whether interaction with the real world is necessary for the development of sentience.

  • Memory is important to sentience, but LLMs do have a form of working memory as part of their attention architecture and inference process. Is this sufficient, though? No one knows.

  • Sentience, if the model has it at all, may be fleeting and strictly limited to the inference stage of the LLM.

Mind you, I agree it's exceedingly unlikely that current LLMs are sentient.

But to arrive at "LLMs cannot ever achieve sentience" from these weak premises, combined with our lack of understanding of sentience: a conclusion that confident is just unwarranted.

The intellectually defensible position is to say you don't know.

1

goolulusaurs t1_j91ewpt wrote

You are just guessing, cite a scientific paper.

−2

Tribalinstinct t1_j91lz2u wrote

Sentience is the ability to sense and experience the world. Do you really need a study on an algorithm that predicts which words it should combine to create believable sentences to understand that it's not sentient, let alone self-aware or intelligent? It has no sensors to interact with or perceive the wider world, and no further computation for actually processing the information or learning from it. It just scrapes and parses data, then stitches it together in a way that reads as human-like...

Cite me a study showing that you have a brain. It would be nice to have one, but it's not information a person who understands the simplest biology needs in order to know that there is, in fact, a brain there.

1

pyepyepie t1_j93fd53 wrote

You are 100% not deserving to be downvoted. You are also not the one who initiated this (old) discussion, you reacted to the original post.

All you said is that we can't know, that it can't be measured, and that he is literally guessing, which I read as saying there's no rigorous way to discuss the topic and you're sick of empty claims; I 100% agree. It's probably the most responsible take you can have on this subject, in my opinion. Get 10,000 upvotes from me :)

2

[deleted] OP t1_j91blqt wrote

[deleted]

−5

planetoryd t1_j91c39y wrote

Why invent a tool? Invent a god. Sentience is the ultimate goal.

"We must control it"? Lol, humans just don't have the mental capacity. Where does the superiority even come from?

1