Submitted by TwitchTvOmo1 t3_113xycr in singularity
Cryptizard t1_j8t7nw5 wrote
I think you would be surprised by how badly that would turn out. Imagine if someone talked to you the way GPT writes its responses. It looks okay in written form, but it is not at all how people talk. It would be serious uncanny-valley territory.
TwitchTvOmo1 OP t1_j8t85gq wrote
You have to remember that LLMs currently talk that way because it's just the default style their creators decided they should respond in. I don't see why it would be an issue at all to fine-tune any of these LLMs to write in a specific style that sounds more casual and natural. It's not a limitation; they're just explicitly avoiding it for the current scope of applications.
In fact, in the AI LLM "games" I'm envisioning, you would ask the AI to adopt certain styles to emulate certain social situations. Ask it to pretend it's an angry customer whom you have to convince to accept a compromise (in the future I can see AI services like these being used in job interviews, for example, to evaluate a candidate's people skills). Or have it pretend it's your boss while you negotiate a salary increase, or a girl you're about to hit on, and so on.
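To make that concrete, here's a minimal sketch of what that kind of persona prompting could look like with the OpenAI Python SDK. The model name, persona wording, and function name are my own illustrative assumptions, not anything from the thread:

```python
# Minimal sketch of persona role-play via a system prompt.
# Model name and persona text are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

persona = (
    "You are an angry customer whose order arrived two weeks late. "
    "Stay in character. Only calm down if the user offers a reasonable compromise."
)

history = [{"role": "system", "content": persona}]

def reply(user_message: str) -> str:
    """Send the user's line and return the persona's in-character response."""
    history.append({"role": "user", "content": user_message})
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model; any chat-capable model works
        messages=history,
    )
    answer = response.choices[0].message.content
    history.append({"role": "assistant", "content": answer})
    return answer

print(reply("Hi, I see your order was delayed. How can I make this right?"))
```

The whole "game" is just the system prompt plus the running conversation history, so swapping in a different scenario (boss, interviewer, etc.) is a one-line change.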
Social interaction and social engineering are about to be minmaxed, just like you minmax your DPS in a game by spending 10 hours in practice mode.
After a few years, practising social situations with an AI will be considered primitive, because there'll be hardware "cheats": say, regular-looking glasses with a mini processor and mic that listen to what others around you are saying and generate the optimal response based on what the system knows about that person's personality, their current emotional state, and your end goals.
Admittedly I know nothing about the field, but I highly doubt this is outside what we can currently do. It's just that nobody has tried yet.
Cryptizard t1_j8t9b1m wrote
>it's just the default way their creators thought they should respond with
No, that's not right. Nobody programmed the LLM how to respond, it is just based on training data. It is emergent behavior.
>I don't see why it would be an issue at all to "fine-tune" any of these LLMs to write with a specific style that would sound more casual and normal.
You can try asking it to do that; it doesn't really work.
>Admittedly I know nothing about the field
Yeah...
ShowerGrapes t1_j8telec wrote
>No, that's not right. Nobody programmed the LLM how to respond, it is just based on training data. It is emergent behavior.
While you're right, I do think it's a matter of clarifying and discretely organizing training data. There's a reason data management has been an emerging tech juggernaut over the last decade. There may be a plateau somewhere, but I don't think we've reached it yet.
My guess is we'll soon have different "modes" of translation and interaction, plus a suite of micro-genre, highly specialized neural networks, like a purely medical one for example. That would make data segregation easier, with the added bonus that they vary in when they need retraining. A subscription program with small micro-transactions to access the various genres of neural networks would be a tech-bro's wet dream.
TwitchTvOmo1 OP t1_j8taffb wrote
>No, that's not right. Nobody programmed the LLM how to respond, it is just based on training data. It is emergent behavior.
So if it was trained with no guidance or parameters whatsoever, what stops us from giving it parameters to follow certain styles? Nothing. It just makes more sense to start with a generalized model before attempting to create fine-tunes of it that solve different problems. Many LLM providers, OpenAI included, already offer a fine-tuning API where you can submit labeled example completions to train your own version of their LLM.
And that's what I mean by fine-tuning. Fine-tuning isn't asking the default model to behave in a certain way, and you're not "editing" the model. Fine-tuning is further training the model on additional labeled data; a rough sketch of that flow is below.
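Here's a hedged sketch of what that fine-tuning flow looks like with OpenAI's Python SDK. The file name, example contents, and base model are my own assumptions for illustration; a real job also needs far more than one example:

```python
# Sketch of OpenAI's fine-tuning flow: upload labeled examples, then start a job.
# File name, example contents, and base model are illustrative assumptions.
import json
from openai import OpenAI

client = OpenAI()

# Each line of the JSONL file is one labeled example conversation
# demonstrating the target style (here: casual, spoken-sounding replies).
examples = [
    {"messages": [
        {"role": "system", "content": "Reply the way a person talks, not the way an essay reads."},
        {"role": "user", "content": "Can you explain what an API is?"},
        {"role": "assistant", "content": "Sure, it's basically a menu of things one program lets another program ask it to do."},
    ]},
    # ...in practice you'd include many more examples like this one
]
with open("style_examples.jsonl", "w") as f:
    for ex in examples:
        f.write(json.dumps(ex) + "\n")

# Upload the training file, then kick off the fine-tuning job.
training_file = client.files.create(
    file=open("style_examples.jsonl", "rb"), purpose="fine-tune"
)
job = client.fine_tuning.jobs.create(
    training_file=training_file.id,
    model="gpt-4o-mini-2024-07-18",  # assumed base model; check which models support fine-tuning
)
print(job.id)  # poll this job; when it finishes you get a custom model name to call
```

The labeled examples are the "specific parameters" here: the model's weights get nudged toward the style the examples demonstrate, which is a different mechanism from just asking the default model nicely.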
Eventually larger models will be able to encompass different styles and you won't have to create smaller fine-tuned versions of them at all. Technically you can already ask ChatGPT to act angry, or talk like a nazi, or pretend it's person X in situation Y, but the devs specifically restrict you from doing so. An earlier example of a much more primitive chatbot without such restrictions is Microsoft's Tay, the Twitter bot that caused a shitstorm when it started talking like an antisemitic 4chan user.
Here's another article by OpenAI from just today, describing pretty much what I just said.
>We believe that AI should be a useful tool for individual people, and thus customizable by each user up to limits defined by society. Therefore, we are developing an upgrade to ChatGPT to allow users to easily customize its behavior.
gantork t1_j8tjg0r wrote
Check out AtheneWins on YouTube. They are "cloning" streamers and famous people and doing a podcast where they ask them questions, fine-tuning GPT-3 and hooking it up to a TTS engine, possibly ElevenLabs. The results are amazing.
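For anyone curious, a rough sketch of that text-to-speech hookup using ElevenLabs' REST API might look like this. The voice ID, model ID, and output handling are assumptions to verify against their docs, and the input text stands in for output from the fine-tuned model:

```python
# Rough sketch of piping model output into ElevenLabs' text-to-speech REST API.
# Voice ID, model_id, and output handling are illustrative assumptions.
import os
import requests

text = "Welcome back to the podcast, chat."  # imagine this came from the fine-tuned model

voice_id = "YOUR_VOICE_ID"  # a cloned voice created in the ElevenLabs dashboard
url = f"https://api.elevenlabs.io/v1/text-to-speech/{voice_id}"

response = requests.post(
    url,
    headers={"xi-api-key": os.environ["ELEVENLABS_API_KEY"]},
    json={"text": text, "model_id": "eleven_multilingual_v2"},  # assumed model id
    timeout=30,
)
response.raise_for_status()

with open("line.mp3", "wb") as f:
    f.write(response.content)  # the endpoint returns the synthesized audio bytes
```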
TwitchTvOmo1 OP t1_j8tkalz wrote
Thanks, looks fun