yeah_i_am_new_here OP t1_jecg2aw wrote
Reply to comment by Shiningc in Thought experiment: we're only [x] # of hardware improvements away from "AGI" by yeah_i_am_new_here
I love the comparison to how DNA has developed. Definitely a great parallel to draw there that I haven't heard before - what a thought!! I agree with everything you're saying. Thanks for the thoughtful replies!
yeah_i_am_new_here OP t1_jebwqnk wrote
Reply to comment by Shiningc in Thought experiment: we're only [x] # of hardware improvements away from "AGI" by yeah_i_am_new_here
I think I see what you're saying. I'm gonna try and simplify it for my caveman brain so I know we're on the same page, and then pose a question for you -
1 - I read a scientific paper from the year 3023 with new information and new words (or new combinations of words; for example, if I read the words "string theory" in the 1930s, I'd have no idea what to do with it) carrying new meanings/ideas that really haven't existed before this time.
2 - No matter how much I read it, I really just won't understand how these new concepts and words connect to my legacy concepts and words, until someone reasons out the connection for me or I, say, "get creative" and figure it out for myself.
3 - I study that connection between the old concepts and the new concepts until I have a clear understanding and roadmap of it.
So what I'm getting from your comment is that AI really can't do step 2, but I, a human, can. But I'd propose that the only way to do step 2 is by using the current roadmap I have to propose new solutions, then testing them to see whether they line up with the correct answer (maybe oversimplifying here).
So my question for you is: when it comes to determining the truth of a proposal in step 2, is it the proposing of new solutions or the testing of them that limits AI?
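If it helps, here's the loop I'm picturing for step 2 as a toy Python sketch (the proposer and the test are completely made up; the only point is that it's a propose-then-check cycle):

```python
# Toy version of step 2: propose candidate "connections" between old concepts,
# then test each one. The proposer and the test are made-up stand-ins.
def propose(old_concepts):
    # naive proposer: every ordered pairing of old concepts is a candidate
    return [(a, b) for a in old_concepts for b in old_concepts if a != b]

def test(candidate, new_concept):
    # stand-in for "does this candidate actually match the new concept?"
    return " ".join(candidate) == new_concept

old = ["string", "theory", "quantum", "field"]
new_concept = "string theory"

matches = [c for c in propose(old) if test(c, new_concept)]
print(matches)  # [('string', 'theory')]
```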
yeah_i_am_new_here OP t1_jebubwh wrote
Reply to comment by SlurpinAnalGravy in Thought experiment: we're only [x] # of hardware improvements away from "AGI" by yeah_i_am_new_here
Can't tell if you're trolling or not, but nobody's mad here! Just looking for a discussion to throw around some thought-provoking ideas. I have a good question for you: how would you know AGI if you saw it? What would be a defining factor that makes it obvious a system has reached that level?
yeah_i_am_new_here OP t1_jebnrn5 wrote
Reply to comment by elehman839 in Thought experiment: we're only [x] # of hardware improvements away from "AGI" by yeah_i_am_new_here
Well put! To piggyback off your point, I think the persistence issue in its current state is what will ultimately stop it from taking over too many knowledge worker jobs. The efficiency it currently creates for each knowledge worker will of course be a threat to employment if production doesn't increase as well, but if history is at all trustworthy, production will increase.
I think the biggest issue right now (outside of data storage) for creating AI that is persistent in its knowledge is the algorithm to receive and accurately weigh new data on the fly. You could say it's the algorithm for wisdom, even.
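For a toy picture of what I mean by "weigh new data on the fly" (this is just an exponential moving average, not a real continual-learning algorithm; the `trust` knob is my made-up stand-in for that weighing):

```python
# Toy sketch: a running estimate that nudges itself toward each new observation.
# The `trust` knob is the "wisdom" part: how much weight each new data point gets.
def online_update(estimate, observation, trust=0.1):
    return estimate + trust * (observation - estimate)

estimate = 0.0
for obs in [1.0, 1.2, 0.9, 5.0, 1.1]:  # 5.0 is an outlier that shouldn't be over-weighted
    estimate = online_update(estimate, obs)
    print(round(estimate, 3))
```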
yeah_i_am_new_here OP t1_jeb94fc wrote
Reply to comment by ninjadude93 in Thought experiment: we're only [x] # of hardware improvements away from "AGI" by yeah_i_am_new_here
I agree, I'm just not convinced there's any evidence that the reasoning that went into your response is integral to the validity of the response itself. So basically, my argument is that whether or not LLMs can reason isn't really that important, because the output is compelling either way. I'd like to believe that there's some magic in our capability to reason that makes the world run a little better, but I just don't know.
yeah_i_am_new_here OP t1_jeazz8i wrote
Reply to comment by NotACryptoBro in Thought experiment: we're only [x] # of hardware improvements away from "AGI" by yeah_i_am_new_here
I am familiar with how these transformers work, and I'm not suggesting that anything is conscious here. Truthfully, I don't think we can create consciousness, if that's what you took from my post. The fact of the matter is that our nature of communication can be defined by matrices of probabilities, and GPTs illustrate this pretty damn well. Therefore, it stands to reason that other perceptive abilities and routines we have as people can also be defined by matrices of probabilities, and enacted by something that isn't human. Since you seem to be an expert in AI/ML, do you think this is true?
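To show what I mean by "matrices of probabilities", here's a toy Python sketch: a word-level bigram table built from a made-up corpus, nothing like a real transformer, just the idea that "pick the next word" can be read off rows of a probability matrix.

```python
import random
from collections import defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

# count word -> next-word transitions
counts = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

# normalize each row of counts into a row of probabilities
probs = {
    prev: {nxt: c / sum(row.values()) for nxt, c in row.items()}
    for prev, row in counts.items()
}

# "generate" by sampling the next word from the current word's probability row
word = "the"
print(word, end=" ")
for _ in range(6):
    if word not in probs:  # dead end in this tiny corpus
        break
    choices = list(probs[word])
    weights = [probs[word][w] for w in choices]
    word = random.choices(choices, weights=weights)[0]
    print(word, end=" ")
print()
```

(A real transformer conditions on the whole context rather than just the previous word, but the output is still a probability distribution over the next token.)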
yeah_i_am_new_here OP t1_jeatis3 wrote
Reply to comment by samwell_4548 in Thought experiment: we're only [x] # of hardware improvements away from "AGI" by yeah_i_am_new_here
Interesting. So then we can suppose that if you had enough of these humanoids walking around, they could gather data and feed it back into a "hive mind" (as much as I hate that term), and you could retrain the software running the humanoids with that new data, basically giving them a chance to "learn".
I see many hardware limitations with this possibility, but it's an interesting thought.
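Purely to illustrate the loop I'm imagining, a toy Python sketch (everything here, including `fit_model`, is a made-up stand-in for the real retraining step):

```python
import random

class Robot:
    def observe(self):
        # stand-in for a new real-world measurement the humanoid collects
        return random.gauss(0, 1)

def fit_model(data):
    # stand-in for "retrain the software": here, just the mean of everything seen so far
    return sum(data) / len(data)

fleet = [Robot() for _ in range(10)]
pool = []                            # the shared "hive mind" data store

for day in range(3):                 # three rounds of collect-then-retrain
    pool.extend(bot.observe() for bot in fleet)
    shared_model = fit_model(pool)   # every robot would get this updated model
    print(day, len(pool), round(shared_model, 3))
```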
Perhaps another interesting thought, building off of yours: how much brand-new data do you suppose exists in our surroundings that isn't already on the internet (and so hasn't already been trained on)?
yeah_i_am_new_here OP t1_jee8f2h wrote
Reply to comment by NotACryptoBro in Thought experiment: we're only [x] # of hardware improvements away from "AGI" by yeah_i_am_new_here
I guess that's true if consciousness is a cognitive ability, but I don't really think we have any idea what consciousness is or where it comes from. It's "most likely" some kind of cognitive ability, so I hear you there, but I leave it out of my idea of AGI because it's all conjecture. For all I know, consciousness comes from your liver.