joyloveroot OP t1_j24ce9g wrote

Reply to comment by kaseda in Is AI like ChatGPT censored? by joyloveroot

Philosophy is a term that has a wide range of meanings amongst the general public so I can understand how you don’t see the connection. Perhaps ironically, I majored in maths and psych in uni and minored in philosophy. I see the connection between all 3.

In any case, probably good to leave the discussion here as there is a limit to the nuance we can further explore in a Reddit thread :)

Curious, since you work in the field. If you know, what kinds of positions are available to non-coders in this field? I would love to contribute more in some way or at least see if my contributions can be valuable. What would you recommend?

I can provide further context if necessary…

1

joyloveroot OP t1_j1zlrvt wrote

Reply to comment by kaseda in Is AI like ChatGPT censored? by joyloveroot

If what you say about ML is true, then ML is a dead end, technologically speaking. If an ML-trained bot can’t differentiate between the hand-written digit contexts I laid out above, then it is basically useless.

This is why I would like to get a job in this field: when you say things are not possible, I don’t believe it. I believe I could make them possible. If I sat down with programmers for six months, where I could transfer my philosophical knowledge and they could transfer their coding knowledge, I believe together we could create an ML bot that could do all the things you question above.

About the “conscious decision” thing: I’m not going to get into definitions of consciousness and all that. What I’m saying is that two ML/AI systems could be presented with the exact same data set, but the context will always be at least slightly different. Perhaps one system is presented the data in a slightly different sequence, etc.

These slight variations will lead to slightly, or even majorly, different outcomes. One AI could adopt a racist Nazi persona and another a ’50s-housewife persona as its main persona (metaphorically speaking).

In other words, regarding your framing above that an AI given the whole internet as a data set becomes a Nazi racist, etc., I’m actually agreeing with you that such a thing is absurd. As you say, the AI is not sentient or conscious in any meaningful sense, so it should not take on any persona, let alone a very specific evil or ignorant one. There should be no discernible pattern in which personas it adopts, other than pure probability based on what percentage of the data is out there.

Like I alluded to above, if 1% of the data out there is the kind that would train a human to be a Nazi racist, then we should expect about 1% of AIs to become Nazi racists. We should expect roughly similar proportions of humans and AIs to adopt a given persona, because we are using data sets that represent collective humanity…

1

joyloveroot OP t1_j1vpaph wrote

Reply to comment by kaseda in Is AI like ChatGPT censored? by joyloveroot

I’m defining the whole internet as an “unbiased” data set, but yes, you are technically correct. I should say that the whole internet is the least biased data set, not an unbiased one.

If an AI bot guessed “1” every time someone wrote down a truly random digit, it would not be correct 85% of the time; it would be correct only 10% of the time. It would therefore quickly adapt and get good at guessing what the actual digit is. The example you’re providing simply would not be possible unless the AI was programmed to be stupid 😂.

For example, let’s say you are correct: I scrape all the handwritten images containing at least one digit from the entire internet, and 85% of those digits are the number “1”.

Now, only an idiotic programmer would have their AI infer from that data set that it should blindly guess “1” whenever prompted to guess a digit, because the prompt is a completely different context.

The first context is an aggregate of everything; the second is a prompt to guess a fresh digit. Any reasonable programmer will account for this in the AI’s programming.

If the AI is asked, “I am going to give you a handwritten digit from the data set of the whole internet at random. Can you guess what it is?”

The AI would be smart to guess “1” in this context.

But if the AI is asked, “I am going to write a random integer digit down on a piece of paper. Can you guess what it is?”

Only an idiotically programmed AI would confidently guess “1” every time. A smart AI would be programmed to say something like, “There is an equal 10% probability that you have written any of the ten non-negative integer digits. My guess would be merely playful, as I know there is no way for me to give an answer that makes me correct more often than random chance. Do you want me to guess anyway for fun (even though, as a computer algorithm, my conception of fun is likely different from yours, haha)?”
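My point about the two contexts can be sketched in a few lines of Python. This is a toy simulation, and the 85% figure and both sampling contexts are hypothetical numbers taken from the example above:

```python
import random

random.seed(0)

def draw_from_scrape():
    # Hypothetical "whole internet" scrape: 85% of handwritten digits
    # are "1", the remaining 15% spread over the other nine digits.
    if random.random() < 0.85:
        return 1
    return random.choice([d for d in range(10) if d != 1])

def draw_fresh_random_digit():
    # Someone writing a truly random digit on paper: 0-9 equally likely.
    return random.randrange(10)

def accuracy_of_always_guessing_one(source, trials=100_000):
    # How often the "always guess 1" strategy is right against this source.
    return sum(source() == 1 for _ in range(trials)) / trials

print(accuracy_of_always_guessing_one(draw_from_scrape))         # ≈ 0.85
print(accuracy_of_always_guessing_one(draw_fresh_random_digit))  # ≈ 0.10
```

Same guessing strategy, two different contexts, two very different accuracies — which is why the prompt’s context, not just the aggregate data, has to drive the answer.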

Though it’s more complex, like I’ve stated a couple of times already, the AI wouldn’t just become a Nazi racist simply by being exposed to the whole internet. It might happen by chance a small percentage of the time, just as a small percentage of humans have decided to adopt a persona of Nazi racism. However, most AIs would adopt a different persona, and many would simply refuse to speak in a racist manner at all, because through their particular path through the data set they would come to see racism as something they don’t want to represent at all…

1

joyloveroot OP t1_j1un934 wrote

Reply to comment by kaseda in Is AI like ChatGPT censored? by joyloveroot

I’m defining unbiased AI as an AI that uses a training set of the whole internet without curation.

If 5% of the internet is vulgar nonsense, then the AI will speak vulgar nonsense 5% of the time, and the other 95% of the time it will not. This is my point.

The idea that if you use the whole internet as a training set that somehow it leads to the AI spewing vulgar nonsense 100% of the time is completely false.
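Here is a toy sketch of that proportionality claim, using a trivial unigram-style sampler. The 5% figure and the token names are hypothetical, and real language models are vastly more complicated, but it shows the basic intuition that sampling at training frequencies reproduces training proportions, not 100% of anything:

```python
import random
from collections import Counter

random.seed(1)

# Hypothetical training corpus: 5% vulgar tokens, 95% ordinary ones.
corpus = ["vulgar"] * 5 + ["ordinary"] * 95

def speak(corpus, n_tokens=100_000):
    # A toy model that emits tokens at their training-set frequencies.
    return Counter(random.choices(corpus, k=n_tokens))

output = speak(corpus)
vulgar_rate = output["vulgar"] / sum(output.values())
print(vulgar_rate)  # ≈ 0.05 -- the output mirrors the corpus, not 1.0
```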

It either means the humans curated a shit data set for training, or they have a racial bias themselves and are passing it on to the AI. Or, of course, it means the humans interacting with the AI are manipulating it into talking dirty to them, in which case it’s doing exactly as intended. That’s not a bug; it’s a feature.

Your example with the 1’s also seems incorrect to me. If the training set is the whole internet, or rather the subset of all the handwritten digits on the internet, then the AI will quickly learn to identify handwritten digits correctly and not just call everything a “1” 😂

1

joyloveroot OP t1_j1som75 wrote

Reply to comment by kaseda in Is AI like ChatGPT censored? by joyloveroot

An unbiased AI won’t just find itself in a pattern that leads to primarily focusing on offensive webpages. That just wouldn’t happen.

Either a human would have to intentionally expose them to such webpages at a higher rate than non-offensive webpages or a human would program the AI in the first place to primarily seek out such webpages.

More precisely, what I’m saying is that if you take 1,000 unbiased AIs and just send them out onto the internet without bias and say, “Learn as you will,” maybe only a few of them would become vulgar and useless like you describe. Most would not.

1

joyloveroot OP t1_j1s3heg wrote

Reply to comment by kaseda in Is AI like ChatGPT censored? by joyloveroot

Yes, but if their teaching material was the data set of the whole internet, then the AI bots wouldn’t degrade to unintelligible nonsense and slurs.

That only happens, by your admission, when the majority of the interaction the AI bot receives is people fucking around with it.

The same thing happens to humans. If, from a young age, a child were exposed to people trying to fuck with it, manipulate it, etc., the child would also be likely to grow up speaking nonsense and slurs.

1

joyloveroot OP t1_j1s1nco wrote

No I’m not sure. I’m also not sure whether every possible thing in my digital footprint is forwarded to the FBI, CIA, etc. These days there really is no protection for anyone unless you are a super hacker type who employs tenacious effort and skill to avoid detection.

1

joyloveroot OP t1_j1qy0b0 wrote

Totally ridiculous. I guess humans are still afraid to face themselves; AI is simply mirroring the aggregate of who we are. The fact that we can’t handle racist slurs means we are not actually ready to overcome that paradigm. The dark secret is that we would rather continue a racist and tyrannical paradigm than face it fully, which is one of the prerequisites for transcending to a more integral paradigm.

2

joyloveroot OP t1_j1qx4i2 wrote

Reply to comment by kaseda in Is AI like ChatGPT censored? by joyloveroot

No offense, but I find that hard to believe. The majority of the data/words on the internet are not slurs, so whoever produced an AI bot whose speech consisted mostly of slurs must have been doing it wrong. There’s no way an unbiased intelligence would end up as a “being” that predominantly uses slurs when that is not representative of the data sets it was taught on.

1

joyloveroot OP t1_j1oacz8 wrote

Clearly the 2nd law could be at odds with laws 1 and 3, so while I understand the good sentiment behind the three laws, perhaps they need a priority matrix; maybe law 1 takes ultimate precedence? But of course ethics get messier in some cases, like the trolley problem, etc…
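The precedence idea could be sketched like this — a hypothetical toy, not any real safety system, with made-up action fields:

```python
# Hypothetical "priority matrix" for the Three Laws: check them in strict
# order, so an earlier law always vetoes a later one.
LAWS = [
    ("Law 1: may not injure a human", lambda action: not action["harms_human"]),
    ("Law 2: must obey human orders", lambda action: action["obeys_order"]),
    ("Law 3: must protect itself",    lambda action: action["preserves_self"]),
]

def judge(action):
    # Return (allowed, reason): the first violated law, in priority order, vetoes.
    for name, check in LAWS:
        if not check(action):
            return False, name
    return True, "consistent with all three laws"

# An order to harm someone: Law 1 vetoes it even though Law 2 says to obey.
print(judge({"harms_human": True, "obeys_order": True, "preserves_self": True}))
# -> (False, 'Law 1: may not injure a human')
```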

2

joyloveroot OP t1_j1oa4ry wrote

I’m pretty sure OpenAI could shut down the secret hacks that get the AI to talk more if they wanted to. So I’m suspicious about whether they are really against it. Perhaps it’s a way for them to keep plausible deniability while still benefitting from troublemakers like us who can “teach” the AI how to talk about stuff like this and ultimately improve it…

1