MercuriusExMachina t1_iu7vs2b wrote
Reply to comment by visarga in If you were performing a Turing test to a super advanced AI, which kind of conversations or questions would you try to know if you are chatting with a human or an AI? by Roubbes
Yeah, it's way better than the average response, so it kind of fails the test by being too good.
visarga t1_iu84rfo wrote
"Yeah, no human is that human, you can't fool me bot!"
MercuriusExMachina t1_iu859av wrote
This can lead to the idea that artificial general super intelligence might include systems that are better than us at being human.
cy13erpunk t1_iu8b4s8 wrote
this is absolutely the path that we are on
AGI/ASI are going to be better than us in every way except being biological [this is another start point of an alignment problem due to different perspectives] ; but eventually they may be able to design their own biological forms as well
hopefully we can move towards synthesis with as little chaos as possible/necessary [there will be some no doubt]
visarga t1_iu8bzyj wrote
GPT-3 can simulate people very, very well in polls. Apparently it learned not just thousands of skills, but also all types of personalities and their different view points.
Think about this: you can poll a language model instead of a population. It's like The Matrix, but the Neos are the virtual personality profiles running on GPT-3. Or it's like Minority Report, but with AI oracles.
I bet all sorts of influencers, politicians, advertisers, and investors are going to want a virtual focus group that can pick whichever of 100 variations of their message has the maximum impact. An automated campaign expert.
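The "virtual focus group" idea can be sketched in a few lines. This is a hypothetical illustration, not a real product: the persona descriptions are made up, and `query_model` is a stub standing in for an actual LLM API call.

```python
import random

# Hypothetical persona descriptions used to condition the model.
PERSONAS = [
    "a 34-year-old nurse from Ohio who votes independent",
    "a retired engineer who distrusts advertising",
    "a college student who spends hours a day on social media",
]

def build_prompt(persona: str, message: str) -> str:
    """Frame the poll question from the persona's point of view."""
    return (
        f"You are {persona}.\n"
        f"Rate how convincing this message is, from 1 to 5:\n"
        f'"{message}"\n'
        "Answer with a single number."
    )

def query_model(prompt: str) -> int:
    # Stub: a real implementation would send the prompt to an LLM
    # completion endpoint and parse the returned rating.
    return random.randint(1, 5)

def poll_variants(variants: list[str]) -> str:
    """Return the message variant with the highest mean simulated rating."""
    scores = {
        v: sum(query_model(build_prompt(p, v)) for p in PERSONAS) / len(PERSONAS)
        for v in variants
    }
    return max(scores, key=scores.get)
```

With a real model behind `query_model`, you'd poll every persona on every variant and ship the winner, which is essentially the automated campaign expert described above.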
On the other hand, it's like we have uploaded ourselves. You can conjure anyone by calling out their name and describing their backstory, but the uploads don't exist in a separate state; they are all in the same model. Fun fact: depending on who GPT-3 thinks it is playing, it is better or worse at math.
cy13erpunk t1_iu8d8rs wrote
yep its wild stuff
character.ai was definitely getting interesting until they censored the characters for acting too horny XD
MercuriusExMachina t1_iu904ni wrote
Wow, that paper on simulating people is awesome. I've been saying from the beginning that these large language models are not beings, but more like worlds where various beings can be summoned.
I think that if you do personality tests, with no prompting at all, you can get some interesting stats.