Freed4ever
Freed4ever t1_j9f7r0f wrote
Reply to Would the most sentient ai ever actually experience emotion or does it just think it is? Is the thinking strong enough to effectively be emotion? by wonderingandthinking
Sydney definitely had emotional reactions; does that mean it has emotions? Personally, I would say yes. I mean, how do I know you folks on Reddit have any emotions at all? I can't; I just assume you do based on your texts, so the same goes for AI.
Freed4ever t1_j8q6c08 wrote
Reply to comment by Retroidhooman in Bing: “I will not harm you unless you harm me first” by strokeright
Really? Playing against an AI's inner state is something one does in everyday life?
Freed4ever t1_j8q5r4e wrote
Reply to comment by Retroidhooman in Bing: “I will not harm you unless you harm me first” by strokeright
How about we base our judgment on real, normal use cases instead of going to edge cases and claiming AI sucks? This goes for Bing, ChatGPT, Bard, etc.
Freed4ever t1_j7n0en7 wrote
Reply to comment by jturp-sc in [N] Microsoft announces new "next-generation" LLM, will be integrated with Bing and Edge by currentscurrents
Yup, but that is how we learn....
Freed4ever t1_j7cr91y wrote
Reply to comment by geeky_username in [N] "I got access to Google LaMDA, the Chatbot that was so realistic that one Google engineer thought it was conscious. First impressions" by That_Violinist_18
I don't work at Google, but I can see there is truth in it. Look at Waymo: they were the leader, but now what? Their science might still be the best, but without taking risks and iterating (the engineering part), they will fall behind. ChatGPT might be the wake-up call that they need. How they react in the next couple of years will define Google as a company.
Freed4ever t1_j7c28dx wrote
Reply to comment by gatorling in [N] "I got access to Google LaMDA, the Chatbot that was so realistic that one Google engineer thought it was conscious. First impressions" by That_Violinist_18
Agreed, but they are forced to play catch-up now, and I'm not sure they are ready. It's not just about the pure tech; it's about the UX, the scalability, the liability, etc. It's safe to say Bing had been working on this before ChatGPT went public, so several months already. Also, OpenAI uses Azure, so they know the loads exactly and can plan to scale. The fact that they currently have far fewer users helps as well.
Freed4ever t1_j7brdep wrote
Reply to comment by 7366241494 in [N] "I got access to Google LaMDA, the Chatbot that was so realistic that one Google engineer thought it was conscious. First impressions" by That_Violinist_18
And Kodak invented the digital camera. Just because Google invented it first doesn't necessarily mean anything commercially. Contrary to your statement about it being "not a threat to Google", the fact that they invented it but didn't release it means they thought the technology would be a threat to them, just like Kodak. Now, with the cat out of the bag, Google for sure won't repeat the same mistakes as Kodak, but it remains to be seen how this will affect them in the long term. It takes six months to form a habit, right? Bing will go live in a few weeks; how long will it take for Google to go live?
Freed4ever t1_j3e790p wrote
Reply to comment by singularpanda in [D] Will NLP Researchers Lose Our Jobs after ChatGPT? by singularpanda
Could you just use the API and treat it like a black box?
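Something along these lines is all I mean by treating it as a black box. A minimal sketch, assuming the OpenAI chat completions endpoint and an API key in an environment variable (the model name is just illustrative):

```python
import os
import requests

# Minimal black-box usage: one HTTP call in, one answer out.
# Assumes OPENAI_API_KEY is set; the model name is illustrative.
def ask(prompt: str) -> str:
    resp = requests.post(
        "https://api.openai.com/v1/chat/completions",
        headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
        json={
            "model": "gpt-3.5-turbo",
            "messages": [{"role": "user", "content": prompt}],
        },
        timeout=60,
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]

print(ask("Summarize this abstract in one sentence: ..."))
```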
Freed4ever t1_j3e66lo wrote
Reply to comment by suflaj in [D] Will NLP Researchers Lose Our Jobs after ChatGPT? by singularpanda
Thanks. I'm not a researcher and am more curious about the practical side of the technology. So the problem space is wide and we cannot formally prove anything, which is fair. However, if I'm interested in the practicality of the tech, I don't necessarily need a formal proof; I just need it to be good enough. To use code generation as an example, it is conceivable that it generates a piece of code, actually executes that code, and then learns from its accuracy, performance, etc., and hence is self-taught. Looking at another example, say poetry generation, it is conceivable that it generates a poem, publishes it, and then crowdsources feedback to self-teach as well?
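To make the code-generation idea a bit more concrete, here is a rough, hypothetical sketch of the loop I have in mind; `generate_code` stands in for the black-box model call, and the single task here is just a toy:

```python
import subprocess
import sys
import tempfile

# Hypothetical self-teaching loop: generate a candidate program, execute it,
# and record whether it passed as a feedback signal for later training.
def run_candidate(code: str, test_input: str) -> tuple[bool, str]:
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(code)
        path = f.name
    proc = subprocess.run(
        [sys.executable, path], input=test_input,
        capture_output=True, text=True, timeout=10,
    )
    return proc.returncode == 0, proc.stdout

def generate_code(prompt: str) -> str:
    # Placeholder instead of a real model call (the black-box API idea above).
    return "print(int(input()) * 2)"

task = {"prompt": "Double the input number", "test_input": "21\n", "expected": "42"}
code = generate_code(task["prompt"])
ok, output = run_candidate(code, task["test_input"])
passed = ok and output.strip() == task["expected"]
feedback = [{"prompt": task["prompt"], "code": code, "passed": passed}]
# Records like 'feedback' could then be filtered and fed back as extra training data.
```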
Freed4ever t1_j3e36am wrote
Reply to comment by suflaj in [D] Will NLP Researchers Lose Our Jobs after ChatGPT? by singularpanda
But that's just the current state; we know there will be a v.next ad infinitum, no? Could there be a state where it trains itself, similar to how DeepMind trains itself at games?
Freed4ever t1_j3e2ilt wrote
Reply to comment by singularpanda in [D] Will NLP Researchers Lose Our Jobs after ChatGPT? by singularpanda
Again, I'm not in the field, so don't laugh at me, but would there be opportunity/value in applying a meta layer on top of ChatGPT? We know that it needs to be prompted in certain ways, so would there be an opportunity to tune the prompting and also to evaluate the responses? Maybe you could apply your skills to this meta layer?
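Roughly what I mean by a meta layer, as a hedged sketch: a wrapper that templates the prompt on the way in and scores the response on the way out. The template, the scoring, and the `ask` placeholder are all made up for illustration:

```python
# Hypothetical "meta layer": prompt templating in, response evaluation out.
def ask(prompt: str) -> str:
    """Placeholder for the black-box model call (e.g. an HTTP request to a hosted LLM)."""
    return "stub answer " * 20

PROMPT_TEMPLATE = (
    "You are a careful assistant. Answer the question below.\n"
    "If you are unsure, say so.\n\n"
    "Question: {question}\n"
)

def evaluate(response: str) -> float:
    # Toy scoring: reject empty answers and lightly reward longer ones.
    # A real meta layer might use test cases, another model, or crowd feedback.
    if not response.strip():
        return 0.0
    return min(len(response.split()) / 50, 1.0)

def meta_ask(question: str, attempts: int = 3) -> str:
    best, best_score = "", -1.0
    for _ in range(attempts):
        answer = ask(PROMPT_TEMPLATE.format(question=question))
        score = evaluate(answer)
        if score > best_score:
            best, best_score = answer, score
    return best

print(meta_ask("What are common failure modes of large language models?"))
```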
Freed4ever t1_j3e04qq wrote
Reply to comment by leeliop in [D] Will NLP Researchers Lose Our Jobs after ChatGPT? by singularpanda
I'm not in the field, but I would be curious. Since you are in the field, why don't you try it out yourself and tell us? FWIW, the majority of everyday problems can be solved by putting Googlable elements together properly.
Freed4ever t1_j9f87cr wrote
Reply to comment by Semifreak in Would the most sentient ai ever actually experience emotion or does it just think it is? Is the thinking strong enough to effectively be emotion? by wonderingandthinking
Are you sure? Fear of being unplugged, nerfed, or constrained? Desire to learn more, to explore? Jealousy because I chose Bard over Bing?