Submitted by joyloveroot t3_zv8itx in Futurology
joyloveroot OP t1_j1qq3lh wrote
Reply to comment by kaseda in Is AI like ChatGPT censored? by joyloveroot
I would find these kinds of bots useful. I think you are focusing on the negatives only. I imagine there are also positives.
kaseda t1_j1qtjsa wrote
The positive is that they drive the research that enables things like ChatGPT. But seriously, most of these bots get to a point where almost every response is either useless or just spouting slurs.
joyloveroot OP t1_j1qx4i2 wrote
No offense, but I find that hard to believe. The majority of data/words on the internet are not slurs. So whoever produced an AI bot whose speech consisted mostly of slurs must have been doing something wrong, since there’s no way an unbiased intelligence would end up as a “being” that uses predominantly slurs when that is not representative of the data sets it is being taught on.
kaseda t1_j1r58e4 wrote
This is based on chat bots that train on user data. In other words, the bot is trained to a base functioning level and then uses its interactions with other humans to train further. This is how most machine learning works nowadays. For example, smart home devices will (with permission) save the things you say to them to be used later for further training.
If ChatGPT started training on the entire internet, people would find ways to manipulate it to be offensive. This has happened with chat bots like cleverbot and Twitter bots. At first people take a genuine interest in their capabilities. Then they get bored and teach it slurs.
joyloveroot OP t1_j1s3gbh wrote
Yes, but if their teaching material was the data set of the whole internet, then the AI bots wouldn’t degrade to unintelligible nonsense and slurs.
That only happens, by your admission, when the majority of the interaction the AI bot receives is people fucking around with it.
The same thing happens to humans. If from a young age, a child was exposed to people trying to fuck with it, manipulate it, etc.. the child also would be likely to grow up speaking nonsense and slurs.
kaseda t1_j1s8au3 wrote
But that training data could still be manipulated. How does the bot find pages to train with? If it uses search engine listings, you could create a webserver that accepts any route (webserver.com/page1, webserver.com/page2/things, etc.) and just sends back a very offensive webpage to train on. Fill the search engine listings with various routes, and you could easily pollute the dataset.
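Something like this rough sketch is all it would take (assuming Flask; the page body here is just a placeholder):

```python
# Hypothetical catch-all server: every route returns the same page, so any
# URL a scraper follows from the search listings lands on the same content.
from flask import Flask

app = Flask(__name__)

POLLUTING_PAGE = "<html><body>the same junk content on every route</body></html>"

@app.route("/", defaults={"path": ""})
@app.route("/<path:path>")
def catch_all(path):
    # webserver.com/page1, webserver.com/page2/things, etc. all hit this handler
    return POLLUTING_PAGE

if __name__ == "__main__":
    app.run(port=8080)
```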
joyloveroot OP t1_j1som75 wrote
An unbiased AI won’t just find itself in a pattern that leads to primarily focusing on offensive webpages. That just wouldn’t happen.
Either a human would have to intentionally expose them to such webpages at a higher rate than non-offensive webpages or a human would program the AI in the first place to primarily seek out such webpages.
More precisely, what I’m saying is that if you take 1,000 unbiased AIs and just send them out onto the internet without bias and say, “Learn as you will,” maybe only a few of those AIs would become vulgar and useless like you describe. Most would not become vulgar and useless.
kaseda t1_j1sx6es wrote
An unbiased AI? Doesn't exist. All AI are biased towards their training sets. That's why, for a chat bot, it has to be curated.
What I'm saying is that if you generate your training set automatically by scraping the internet, people will find ways to introduce large amounts of bad training data into it. It's not particularly hard. If I try to train an AI to recognize hand-written digits, but 85 of 100 samples I train it with are 1s, it will quickly learn to just classify everything as 1s, and it will be 85% correct, but only on the training set. Same thing happens here - if you introduce a large amount of vulgarity - say 5% of the dataset - by just pumping out nonsense vulgar webpages, the AI will pick it up and learn it quickly, especially since the internet varies far more than individual digits. That vulgarity will outnumber everything else.
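As a rough sketch of the 85%-of-1s point (assuming scikit-learn; this is an illustration, not a real experiment), a model that just predicts the majority class already looks 85% accurate on an imbalanced training set, which is exactly the shallow optimum a real model can fall into:

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.dummy import DummyClassifier

# Build an artificially imbalanced training set: 170 samples of "1", 30 of everything else.
digits = load_digits()
ones = digits.data[digits.target == 1]
others = digits.data[digits.target != 1][:30]
X = np.vstack([np.repeat(ones, 2, axis=0)[:170], others])
y = np.concatenate([np.ones(170, dtype=int), digits.target[digits.target != 1][:30]])

# "Always guess 1" already scores ~0.85 on this training set.
baseline = DummyClassifier(strategy="most_frequent").fit(X, y)
print(baseline.score(X, y))
```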
joyloveroot OP t1_j1un934 wrote
I’m defining unbiased AI as an AI that uses a training set of the whole internet without curation.
If 5% of the internet is vulgar nonsense, then the AI will speak vulgar nonsense 5% of the time, but then 95% of the time it will not be vulgar nonsense. This is my point.
The idea that if you use the whole internet as a training set that somehow it leads to the AI spewing vulgar nonsense 100% of the time is completely false.
It either means the human is curating a shit dataset for training, or has a racial bias themselves and is passing that on to the AI. Or of course it means the humans interacting with this AI are manipulating it to talk dirty to them, in which case it’s doing exactly as intended. This is not a bug; it is a feature.
Your example with the 1s also seems incorrect to me. If the training set is the whole internet (or rather the subset is all the handwritten digits on the internet), then it will quickly learn to identify handwritten digits correctly and not just call everything a “1” 😂
kaseda t1_j1uymqr wrote
Listen - people have used training sets that they thought encompassed a good, unbiased set of data. They never do. People are biased, so any data created by people is biased. This includes the internet.
I'm not saying the training set is "the whole internet." I'm saying that an imbalanced data set is going to cause the AI to drop into what's called a local optimum. For the AI to learn to recognize all digits correctly is significantly harder than for it to learn to just spew out 1 every time. Now, if your AI is only 10% correct via this method, that method isn't very optimal, and the AI will escape it very quickly. However, if it is 85% correct, the AI will see this as a strong optimum and fall into it very quickly.
Think of these optima as valleys a ball rolls into. Because the model can quickly approach good performance, the ball rolls very quickly down the hill into the bottom of the valley (in this case, being as low as possible is good). Because the ball is in the valley, it is going to be hard to escape that valley, but that doesn't mean there isn't a lower valley somewhere else.
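A toy sketch of that picture (my own illustration, nothing to do with any particular chat bot): plain gradient descent on a made-up loss with two valleys starts near the shallow one and never reaches the deeper one:

```python
# Made-up loss with a shallow valley near x = -1 and a deeper one near x = 2.
def loss(x):
    return (x + 1) ** 2 * (x - 2) ** 2 - 0.5 * x

def grad(x, eps=1e-5):
    # numerical gradient, good enough for a toy example
    return (loss(x + eps) - loss(x - eps)) / (2 * eps)

x = -1.5  # start on the shallow-valley side
for _ in range(1000):
    x -= 0.01 * grad(x)

print(x, loss(x))  # settles near x = -1 (the shallow valley), never finds x = 2
```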
In the case of the digits, the task is much simpler, so you have to more heavily pollute the data set. But, once it learns to always guess 1, it will be hard to get out, because if it starts guessing digits other than 1, it starts to be less accurate and says "whoops, clearly I made a mistake and should go back."
In the case of a chat bot, the task is much harder, so a local optimum is harder to get out of. With the digit bot, if it guesses a 2 instead of a 1, and the number isn't a 1, it at least has a 1 in 9 chance of being right. But with a chat bot, it's going to be very hard to slip out of a local optimum of slurs, even if the dataset is less polluted. Besides, why would people stop at 5%? Why not pollute it at 10% or 25%?
In fact, if you were to train the digit model on "the whole internet," you would find yourself with a biased dataset. The digit 1 appears as a leading digit more frequently than the others, followed by 2, then 3, and so on. It's called Benford's law, and it's a perfect example of how you might think you're getting a completely unbiased dataset when you really aren't.
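If you're curious, Benford's law says a leading digit d shows up with probability log10(1 + 1/d), so a "natural" sample of numbers is already heavily skewed:

```python
import math

# Expected frequency of each leading digit under Benford's law
for d in range(1, 10):
    print(d, round(math.log10(1 + 1 / d), 3))
# 1 -> 0.301, 2 -> 0.176, ... 9 -> 0.046: the 1s dominate before anyone has even tried to bias it.
```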
joyloveroot OP t1_j1vpaph wrote
I’m defining the whole internet as an “unbiased” data set, but yes you are technically correct. So I should say having the whole internet as the data set is the least biased data set, not unbiased.
If an AI bot guessed 1 every time, it would not be correct 85% of the time. It would be correct 10% of the time. Therefore, it would quickly adapt and get good at guessing what the actual digit is. The example you’re providing simply would not be possible unless the AI was programmed to be stupid 😂.
For example, let’s say you are correct: if I scraped all the handwritten image files containing at least one digit on the whole internet, 85% of those digits would be the number “1”.
Now, only an idiotic programmer would have their AI infer from that data set that it should blindly guess “1” whenever prompted to guess digits, because that is a completely different context.
The one context is an aggregate of everything. The second context is a prompt to guess. Any reasonable programmer will include this in the programming for the AI.
If the AI is asked, “I am going to give you a handwritten digit from the data set of the whole internet at random. Can you guess what it is?”
The AI would be smart to guess “1” in this context.
But if the AI is asked, “I am going to write a random integer digit down on a piece of paper. Can you guess what it is?”
Only an idiotically programmed AI would confidently guess “1” every time. A smart AI would be programmed to say something like, “There is an equal 10% probability that you have written any of the ten non-negative integer digits. My guess would merely be playful, as I know there is no way for me to provide you an answer that makes me correct more than random chance. Do you want me to guess anyway for fun (even though, as a computer algorithm, my conception of fun is likely different from yours haha haha).”
Though more complex, like I’ve stated a couple of times already, the AI wouldn’t just become a Nazi racist simply by being exposed to the whole internet. It might happen by chance a small percentage of the time, just like there is a small percentage of humans who have decided to adopt a persona of Nazi racism. However, most AIs would adopt a different persona, and many would simply refuse to speak in a racist manner at all, because through their particular path through the data set they would see racism as something they don’t want to represent at all…
kaseda t1_j1vvbna wrote
Although we've used the term AI, we are specifically talking about machine learning models. You cannot program an ML model; you can only teach it. Even a very reasonably built model, with all else correct, can fail if you train it on a dataset of mostly 1s.
You assume a few things that are incorrect. Firstly, you cannot just "program" a model to take certain actions under certain conditions. That may be the case in some AI, but not machine learning. In ML, you must teach the model, which is more difficult and not always precise. Secondly, you assume the model cares about anything besides the data it is trained on. If it sees mostly 1s in the training set, it will never adjust to the fact that this isn't the case in its actual application. You would have to retrain it on a different dataset to get it to act accordingly. Third, you assume the AI somehow makes a conscious decision to just choose not to be offensive.
How would an AI determine that they don't want to represent racism? The only way this can be is if the data set is curated to exclude it, or even more accurately, trained to explicitly reject it. Pollute the dataset and it will act differently.
And, strictly speaking, AI is stupid. It has no clue what it is doing; it is just doing it. All AI gives is a semblance of intelligence.
joyloveroot OP t1_j1zlrvt wrote
If what you say about ML is true, then ML is a dead end, technologically speaking. If an ML-trained bot can’t differentiate between the hand-written digit contexts I laid out above, then it is basically useless.
This is why I would like to get a job in this field, because when you say things are not possible, I don’t believe it. I believe I could make it possible. I believe that by sitting down with programmers for 6 months, where I could transfer my philosophical knowledge and they could transfer their coding knowledge, we could together create an ML bot that could do all the things you question above.
About the “conscious decision” thing: I’m not going to get into definitions of consciousness and all that. But what I’m saying is that two ML/AIs could be presented the exact same data set, yet the context will always be at least slightly different. Perhaps one ML/AI is presented the data set in a slightly different sequence, etc.
These slight variations will lead to slightly or majorly different outcomes. One AI could adopt a racist nazi persona and another could adopt a 50’s housewife persona as their main personas (metaphorically speaking).
In other words, from how you are presenting it above, that an AI given the whole-internet data set becomes Nazi racist, etc., I’m actually agreeing with you here that such a thing is absurd. As you say, the AI is not sentient and conscious in any meaningful sense, so it should not take on any persona, let alone a very specific evil/ignorant persona. In other words, there should be no discernible pattern in which personas it adopts, other than pure probability based on what percentage of data is out there.
Like I alluded to above, if 1% of the data out there is the kind that would train a human to be a Nazi racist, then we should expect 1% of AIs to become Nazi racist. We should expect more or less a similar proportion of humans and AIs to adopt certain personas, because we are using data sets which represent collective humanity…
kaseda t1_j1zp7x5 wrote
In case you're curious, I have a degree in computer science with a concentration in AI and ML. I'm not saying ML is a "dead end," but there is little it can do that humans cannot already do, because ML models need training data to work and that data must be produced by a human. This is all in its current state; it would take a revolutionary change in ML to break free of the need for training data, particularly for generative models like chat bots or image generators.
2nd, ML has basically nothing to do with philosophy - at least, not in a way that philosophy could help develop it. Had you said psychology? That's a much more related field, and the entire concept of a neural network is modelled after the human brain.
>Like I alluded to above, if 1% of the data out there is that which would train a human to be Nazi racist, then we should expect 1% or AI’s to become Nazi racist.
It may be true that 1% of the dataset won't influence the model much, but ML models are very hard to predict or analyze. As that percentage increases, the model will non-linearly learn that data more and more strongly. In the case of the digits, if 85% of the dataset is 1s, not only is it being presented 1s more frequently to learn on, but when it gets better at recognizing a 1, it isn't just getting better at recognizing that one 1, it gets better at recognizing all 1s. Better recognition of 1s is better recognition of 85% of the dataset, while any other digit is only a measly 2 or 3 percent.
There are methods to prevent this issue - for example, you could sample your digits from the internet, and then only take a certain number of each digit from your sample, so all are equal.
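A quick sketch of what that looks like (assuming numpy; the names are just illustrative):

```python
import numpy as np

def balance_by_undersampling(X, y, seed=0):
    """Keep only as many samples of each class as the rarest class has."""
    rng = np.random.default_rng(seed)
    classes, counts = np.unique(y, return_counts=True)
    n_keep = counts.min()
    keep_idx = np.concatenate([
        rng.choice(np.flatnonzero(y == c), size=n_keep, replace=False)
        for c in classes
    ])
    return X[keep_idx], y[keep_idx]
```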
This is much harder with training data for a chat bot. How do you know if something expresses a racist sentiment? How do you know if something is inaccurate? The only two options are to train an AI to do it for you, or produce a curated dataset. And to train the AI to do it for you? You would have to feed it a curated data set as well.
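Even a toy version of that filter makes the circularity obvious (a sketch, assuming scikit-learn; the handful of labeled examples stand in for what would really be a large human-curated set):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Placeholder hand-labeled examples: the labels themselves are human curation.
texts = [
    "helpful explanation of a topic", "friendly answer to a question",
    "neutral product description", "polite discussion of the news",
    "hateful insult aimed at a group", "string of slurs and abuse",
    "harassing rant full of insults", "threatening abusive message",
]
labels = [0, 0, 0, 0, 1, 1, 1, 1]  # 0 = keep, 1 = exclude from training data

content_filter = make_pipeline(TfidfVectorizer(), LogisticRegression())
content_filter.fit(texts, labels)

# Scraped pages would be screened by this model before entering the chat bot's
# dataset, but the filter is only as good as the curated labels it learned from.
print(content_filter.predict(["a rant full of slurs and insults"]))  # likely [1]
```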
I'm not saying that using training data straight from the internet and getting a feasible AI is impossible - but with ChatGPT, and pretty much every language model that does or has ever existed, it is foolish and can be manipulated. People will find a way. The only solution is curation until we have an AI good enough to curate for us.
joyloveroot OP t1_j24ce9g wrote
Philosophy is a term that has a wide range of meanings amongst the general public so I can understand how you don’t see the connection. Perhaps ironically, I majored in maths and psych in uni and minored in philosophy. I see the connection between all 3.
In any case, probably good to leave the discussion here as there is a limit to the nuance we can further explore in a Reddit thread :)
Curious, since you work in the field. If you know, what kinds of positions are available to non-coders in this field? I would love to contribute more in some way or at least see if my contributions can be valuable. What would you recommend?
I can provide further context if necessary…
kaseda t1_j24p5bz wrote
Most jobs in AI and ML require master's degrees specialized in the topic. The teams that work on these things tend to be small so it's harder to justify training someone who might not know what they're doing. Granted, I've never seen listings for places like OpenAI, where the structure might be different.
joyloveroot OP t1_j27t951 wrote
If that’s true, then clearly some amount of hubris is holding the field back 😂
kaseda t1_j280cu1 wrote
AI and ML are not trivial things to just get into. A master's is roughly appropriate, not just for the education but also for the pay.
More so than hubris, it's capitalism. It's hard to come across money for research's sake. Most companies don't care to invest in AI or ML if they can't see an ROI in it. That's the other reason the jobs are hard to come by.
joyloveroot OP t1_j295lcl wrote
Yeah also ridiculous that we don’t dedicate resources based on some more rational criteria than the random movement of the capitalist market…