Kafke
Kafke t1_iyepxxx wrote
Reply to comment by HoodooMidnight in What do you guys and gals think of the new default skins for Minecraft? (ver 1.9.50) by AlyksTheSage
Just pointing out the hypocrisy: "diversity" somehow never seems to include a particular demographic.
Kafke t1_iyc0x27 wrote
Reply to comment by Ordinary-Flounder675 in What was the scariest game you have played? by Ordinary-Flounder675
I see. I'll have to check it out then. Red Matter 1 is easily one of my favorite VR games, even though I was a bit uneasy with it. It's good to hear that a lot of the stuff I liked about the first one got even better.
Kafke t1_iyc067n wrote
Reply to comment by Ordinary-Flounder675 in What was the scariest game you have played? by Ordinary-Flounder675
No I mean in terms of how scary it is haha. The first one was already to the point where I had to be reassured that nothing was gonna jump at me lol. Hearing there's action and possibly being able to die and such in the second one I think that would be too much for me.
Though, I definitely did get a bit of motion sickness with the default movement option...
Kafke t1_iybyavh wrote
Reply to comment by Ordinary-Flounder675 in What was the scariest game you have played? by Ordinary-Flounder675
I've only played the first one. I've been kinda hesitating to get the second one since I heard there's some shooter mechanics and I feel like it might be a bit too much for me. I could hardly stomach the first one, even though it was a fantastic game.
Kafke t1_iybtw6h wrote
I generally try to avoid scary games, but the scariest one I've played tbh has been Red Matter on the Oculus Quest 2. There's nothing overtly scary about it, but it has a damn creepy vibe to it.
Kafke t1_iybec8j wrote
Reply to What do you guys and gals think of the new default skins for Minecraft? (ver 1.9.50) by AlyksTheSage
Got a bingo for "diversity bingo". No white woman with brown hair.
Kafke t1_iws9po9 wrote
Reply to comment by ECEngineeringBE in The CEO of OpenAI had dropped hints that GPT-4, due in a few months, is such an upgrade from GPT-3 that it may seem to have passed The Turing Test by lughnasadh
Again, you completely miss what I'm saying. I'll admit that the current approach to ML/DL could result in AGI when, of its own volition and unprompted, the AI asks the user a question, without that question being preprogrammed in. I.e., the AI doing something on its own, and not simply responding to a prompt.
> A chess engine is an agent
Ironically, a chess engine has a better chance of becoming an AGI than anything built with the current approach to AI.
> As for "static" and "unchanging" points - you can address those by continual learning, although that's not the only way you can do it.
Continual learning won't solve that. At best, you'll have a model that updates with use. At any given moment, it's still a static input-to-output mapping.
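To make that concrete, here's a rough sketch (plain PyTorch, a toy setup of my own, not any particular paper's method) of what continual learning amounts to:

```python
import torch
import torch.nn as nn

# A "continually learning" model: the weights update after every interaction,
# but each individual reply is still a pure function of (input, current weights).
model = nn.Linear(16, 16)
opt = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.MSELoss()

def interact(x, target):
    y = model(x)                    # respond: a fixed mapping at this moment
    loss_fn(y, target).backward()   # then fold the interaction into the weights
    opt.step()
    opt.zero_grad()
    return y.detach()

# The weights drift over time, but the model never initiates anything:
# it produces an output if and only if it's handed an input.
for _ in range(3):
    interact(torch.randn(16), torch.randn(16))
```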
> There are some other points you make, but those are again simply doing the whole "current models are bad at X, therefore current methods can't achieve X".
It's not that they're "bad at X" it's that their architecture is fundamentally incompatible with X.
> There are other interesting DL approaches that look nothing like the next token prediction.
Care to share one that isn't just a matter of a static machine accepting input and providing an output? I try to watch the field of AI pretty closely and I can't say I've ever seen such a thing.
> Do you believe that a computer program - a code being run on a computer, can be generally intelligent?
Sure. In theory I think it's definitely possible. I just don't think that the current approach will ever get there. Though I would like to note that "general intelligence" and an AGI are kinda different, despite the similar names. Current AI is "narrow" in that it works in one specific field or domain. The current approach is to take this narrow I/O AI and broaden the domains it can function in. This will achieve a more "general" ability and thus "general intelligence"; however, it will not ever achieve an AGI, as an AGI has features other than "narrow AI but more fields". For example, such I/O machines will never be able to truly think, they'll never be able to plan, initiate, and act on their own goals, and they'll never be able to interact with the world in ways that current machines can't.
As it stands, my computer, or any computer, does nothing until I explicitly tell it to. Until an AI can overcome this fundamental problem, it will never be an AGI, simply due to architectural design.
Such an AI will never be able to properly answer "what have you been up to lately?". Such an AI will never be able to browse through movies, watch one of its own volition, and then prompt a user about what it has just done. Such an AI will never be able to handle you plugging a completely new hardware device into your computer, figure out what it does, and interact with it.
The current approach will never be able to accomplish such tasks, because of how the architecture is designed. They are reactive, and not active. A true AGI will need to be active, and be able to set out and accomplish tasks without being prompted. It'll need to be able to actually think, and not just respond to particular inputs with particular outputs.
Kafke t1_iwq3sbf wrote
Reply to comment by ECEngineeringBE in The CEO of OpenAI had dropped hints that GPT-4, due in a few months, is such an upgrade from GPT-3 that it may seem to have passed The Turing Test by lughnasadh
You wrote a lot but ultimately didn't resolve the problem I put forward. Let me just ask: has such an AI ever prompted you? Has it ever asked you a question?
The answer, of course, is no. Such a thing is simply impossible. It cannot do such a thing due to the architecture of the design, and it will never be able to do such a thing, until that design is changed.
> I've actually done this.
You've misunderstood what I meant. If I ask it to go find a particular YouTube video meeting XYZ criteria, could it do it? How about if I hook it up to some new input sensor and then ask it to figure out how the incoming data is formatted and explain it in plain English? Of course, the answer is no. It'll never be able to do these things.
As I said, you're looking at strict "I provide X input and get Y output". Static. Deterministic. Unchanging. Such a thing can never be an agent, and thus can never be a true AGI. Unless, of course, you loosen the term "AGI" to just refer to a regular AI that can do a variety of tasks.
Cramming more text data into a model won't resolve these issues, because they aren't problems of knowledge, but of ability.
> For example, I created a prompt where I add two 8-digit numbers together (written in a particular way) in a stepwise, digit-by-digit fashion, and explain my every step to the model in plain language. I then ask it to add two different numbers together, and it begins generating the same explanation of digit-by-digit addition, finally arriving at the correct answer.
Cool. Now tell it to do it without giving it the instructions, and wait for it to ask for clarification on how to do the task. This will never happen. Instead it'll just spit out whatever the closest output is to your prompt. It can't stop to ask for clarification, because of how such a system is designed. And no amount of increasing the size of the model will ever fix that.
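For concreteness, the kind of prompt being described looks roughly like this (my own illustration, since the exact wording wasn't given; `complete` is a placeholder, not a real API call). Notice the model can only ever continue the pattern; there's no path by which it stops and asks anything back:

```python
# A few-shot, digit-by-digit addition prompt of the kind described above.
# The worked example is mine; `complete` stands in for whatever LLM
# completion call you have available.
prompt = """Add 31415926 + 27182818, digit by digit from the right:
6+8=14, write 4 carry 1. 2+1+1=4, write 4. 9+8+0=17, write 7 carry 1.
5+2+1=8, write 8. 1+8+0=9, write 9. 4+1+0=5, write 5. 1+7+0=8, write 8.
3+2+0=5, write 5. Answer: 58598744

Add 12345678 + 87654321, digit by digit from the right:
"""
# completion = complete(prompt)  # the model continues the worked pattern
```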
Kafke t1_iwppsn1 wrote
Reply to comment by ECEngineeringBE in The CEO of OpenAI had dropped hints that GPT-4, due in a few months, is such an upgrade from GPT-3 that it may seem to have passed The Turing Test by lughnasadh
Ah sorry. I'm referring to the entire field of deep learning. Every model I've witnessed so far has just been a static input->output machine, with the output determined by trained weights. This approach, while good for mapping inputs to outputs, is notoriously bad at a variety of cognitive tasks that require something other than a single static link. For example, having an AI that learns over time is impossible. Likewise any sort of memory task (instead, memory must be "hacked" or cheated by simply providing the "memories" as yet another input). Likewise there's no way for the AI to actually "think" or perform other cognitive tasks.
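To illustrate what I mean by static (a toy PyTorch sketch of my own, not any particular model):

```python
import torch
import torch.nn as nn

# Inference with a trained model: the weights are frozen, so the network is
# literally a fixed function. Same input in, same output out, every time.
model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 16))
model.eval()

x = torch.randn(1, 16)
with torch.no_grad():
    y1 = model(x)
    y2 = model(x)
assert torch.equal(y1, y2)  # deterministic mapping; nothing happens between calls
```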
This is why current approaches require massive datasets and models, because they're just trying to map every single possible input to a related output. Which.... simply doesn't work for a variety of cognitive tasks.
No amount of cramming data or expanding the models will ever result in an AI that can learn new tasks given some simple instructions and then immediately perform them competently like a human would. Likewise, no amount of cramming data or expanding models will ever result in an AI that can actually coherently understand, recognize, and respond to you.
LLMs, no matter their size, suffer from the exact same problem, and it's clear as soon as you "ask" one something that's outside of its dataset. The AI has no way of recognizing that it is wrong, because all it's doing is providing the closest output to your input, not actually understanding what you're saying or prompting.
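Schematically, next-token generation is just this loop (the `logits` function here is a dummy stand-in for a trained model's forward pass):

```python
import torch

def logits(tokens):
    # Dummy stand-in for a trained LM's forward pass (a real model would be
    # a transformer scoring every token in the vocabulary).
    torch.manual_seed(sum(tokens))
    return torch.randn(50000)

def generate(tokens, n=20):
    # Greedy decoding: at every step, emit whichever token scores highest.
    # There's no branch for "I don't know" or "can you clarify?": the loop
    # always produces *something*, the nearest continuation to the prompt.
    for _ in range(n):
        tokens.append(int(torch.argmax(logits(tokens))))
    return tokens

generate([101, 2023, 2003])
```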
This approach is pretty good for continuation-style tools like current LLMs, along with things like text2image, captioning, etc., which is obviously where we see AI shining best. But ask it for literally anything that can't be a mapped I/O, and you'll see it's no better than the AI of 20-30 years ago.
Kafke t1_iwp95fv wrote
Reply to comment by ECEngineeringBE in The CEO of OpenAI had dropped hints that GPT-4, due in a few months, is such an upgrade from GPT-3 that it may seem to have passed The Turing Test by lughnasadh
All it takes is an understanding of how AI currently works to realize that the current approach won't ever reach AGI. There are inherent limitations to the design, and so that design needs to be reworked before certain things can be achieved.
Kafke t1_iyerrzj wrote
Reply to comment by starbitcandies in What do you guys and gals think of the new default skins for Minecraft? (ver 1.9.50) by AlyksTheSage
It's the opposite, actually. There isn't enough diversity. Can't find myself in there, so they failed.