Submitted by kmtrp t3_xv8ldd in singularity
MurderByEgoDeath t1_ir89guc wrote
Reply to comment by Professional-Song216 in What happens in the first month of AGI/ASI? by kmtrp
I'll admit that we are infinitely ignorant, and endlessly fallible, and thus we can never be sure that we've reached the truth, regardless of what it is. But we do have our best explanations, and we must live and act as if those best explanations are true, because there is nothing else we can do. Epistemologies like Bayesianism are very popular today, but those never made much sense to me. We hold our best, most useful explanations until they are falsified, and even then they remain useful approximations, like Newton's gravity being replaced by Einstein's. The reason Newton's is still a good approximation is because it was our best explanation at one time, and good explanations are good for a reason. They are falsifiable, and therefore testable, and they are hard to vary, and therefore fully explain the phenomena they reference. One day, Einstein's theory will also be replaced, or absorbed into quantum theory, and one day even quantum theory will be replaced. We will never have the final ultimate explanation, but we will always be able to create closer and closer approximations to the truth. Even if we did discover the final ultimate theory of something, we would never know it to be so.
This theory of the mind and universal explanation may indeed be wrong, but I would strongly suggest it is our current best explanation, and should be acted on as such. It can easily be falsified by discovering a completely new mode of explanation that is out of our reach, or by building an ASI that has a qualitative gain on us. I hope I'm alive for that because it'll be a very exciting time! :)
LeCodex t1_irupspb wrote
I'm glad to see another fan of Popper and Deutsch in the midst of this sea of arrogantly confident errors about intelligence, AGI, knowledge, and the rest.
Seeing so many people here parrot the kind of misconceptions that are so prevalent in the field, I'm beginning to really understand Deutsch's arguments in his "Why has AGI not been created yet?" video at a deeper level.
It's as if the people supposedly interested in bringing about AGI had decided to choose one of the worst epistemological frameworks they could find to get there (certainly worse than Popper's epistemology), then proceeded to lock themselves out of any error-correction mechanism in that regard. Now they're all wondering why their AIs can't generalize well, can't learn in an open-ended fashion, struggle with curiosity, suck at abductive reasoning (and for that matter, even deduction, since finding good proofs requires a serious dose of abduction), are data-hungry, and so on.
Professional-Song216 t1_irc2vmd wrote
Absolutely, the conclusion is not clear as of yet. I am excited as well. The next chapter in human history will be grand nonetheless.
MurderByEgoDeath t1_irc4hfk wrote
I definitely agree there. Part of this whole philosophy is that all problems can be solved, because anything that is physically possible, can be achieved with the requisite knowledge. So all suffering in the world, is merely the result of a lack of knowledge, and since we are all knowledge creators, there is no reason to be pessimistic. Optimism is not an attitude or a state of mind, it's a claim about reality. We live in a universe where problems can be solved with the requisite knowledge, and we exist as entities who can create that knowledge! Thus our reality is intrinsically optimistic! :)