
havenyahon t1_iw1bwsb wrote

>That’s a bit backwards; what makes you think that “bunch of systems” will fall short in terms of breadth and complexity, and not the other way around?

You mean apart from the entire history of AI research to date? Do you understand how many people since the 50s and 60s have claimed to have "the basic system down, now we just need to feed it data and it will spring to life"? The reason they've failed is that we didn't understand the fundamentals. We still don't. That's the point. It's not backwards; that's where we should begin.

>A machine doesn’t have to mimic the human mind, it just has to be more capable. We are trying to create an artificial general intelligence, an entity that is able to instruct itself to achieve any goal within an environment.

Sure, there may be other ways to achieve intelligence. In fact we know there are, because there are other animals with different physiologies that can navigate their environments. The point, again, is that we don't have an understanding of the fundamentals. We're not even close to creating something like an insect's general intelligence.

>Fear, the sense of insecurity, the need for intimacy, etc… are all unnecessary components for AGI.

I don't mean to be rude when I say this, but this is precisely the kind of naivety that led those researchers to create systems that failed to achieve general intelligence. In fact, as it turns out, emotions appear to be essential to our reasoning processes; there's no reasoning without them! As I said in the other post, have a look at the work of the neuroscientist Antonio Damasio for a sense of how our understanding of the mind has changed thanks to recent empirical work. It turns out that a lot of those 'artifacts' you're saying we can safely ignore may be fundamental features of intelligence, not incidental to it.


MassiveIndependence8 t1_iw1egj2 wrote

>The reason they've failed is that we didn't understand the fundamentals. We still don't. That's the point. It's not backwards; that's where we should begin.

Nope, they failed because there wasn’t enough data and the strategies weren’t computationally viable. They did, however, have the “basic system down”; it just wasn’t very efficient from a practical standpoint. An infinitely wide neural net is mathematically proven to be able to approximate any continuous function; it just does it in a very lengthy way, and without providing much certainty about how accurate and close we are. But yes, they did have A basic system down, they just hadn’t found the right one yet. All we have to do now is find ways to cut corners, and once enough corners are cut, the machine will learn to cut them by itself. So no, we do not need to know the fundamentals of how a human mind works structurally; we do, however, need to know the fundamentals of how such a mind might be created.
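
To make the approximation point concrete, here’s a rough sketch of the idea (nothing from a real system; the width, activation, and target function are all arbitrary illustrative choices): a single hidden layer with random weights plus a least-squares readout can already fit a continuous function like sin, and the error shrinks as you widen the layer.

```python
# Minimal sketch of the universal-approximation idea: one wide hidden
# layer with random weights, plus a least-squares readout, fitting an
# arbitrary continuous function (sin here). Width, activation, and
# target are illustrative choices, not from any particular system.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-np.pi, np.pi, 200).reshape(-1, 1)
y = np.sin(x)

width = 1000  # "wide enough" stands in for the infinite-width limit
W = rng.normal(size=(1, width))
b = rng.normal(size=width)
H = np.tanh(x @ W + b)  # random hidden features

beta, *_ = np.linalg.lstsq(H, y, rcond=None)  # fit the readout layer
err = np.max(np.abs(H @ beta - y))
print(f"max approximation error: {err:.4f}")  # shrinks as width grows
```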

We are finding ways to make the “fetus”, NOT the “human”.

Also, “emotions”, depending on your definition, certainly do come into play in the creation of AI; that’s the whole point of reinforcement learning. But the problem lies in what the “emotions” are specifically catering to. In humans, emotions serve as a directive for survival. In machines, reward is a device to deter the machine from pathways that result in failure at a task and to nudge it towards pathways that are promising. I think we can both agree that we can create a machine that solves complicated abstract math problems without it needing to feel horny first.
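
That reward-as-directive framing is easy to see in a toy example. Here’s a rough sketch (the corridor, rewards, and hyperparameters are all made-up illustrative choices): tabular Q-learning on a five-state corridor where one end is failure (-1) and the other is the goal (+1). The negative reward “deters” and the positive one “nudges”, and the learned policy ends up steering away from failure.

```python
# Toy sketch of reward as a machine's "directive": tabular Q-learning
# on a 5-state corridor. State 0 is failure (-1), state 4 is the goal
# (+1). All states, rewards, and hyperparameters are illustrative.
import random

N_STATES = 5          # states 0..4; 0 = failure, 4 = goal
ACTIONS = (-1, +1)    # step left or right
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, eps = 0.5, 0.9, 0.1

def reward(s):
    return -1.0 if s == 0 else (1.0 if s == N_STATES - 1 else 0.0)

random.seed(0)
for _ in range(500):
    s = 2  # start in the middle
    while s not in (0, N_STATES - 1):
        # epsilon-greedy action choice
        if random.random() < eps:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda b: Q[(s, b)])
        s2 = s + a
        done = s2 in (0, N_STATES - 1)
        target = reward(s2) + (0.0 if done else gamma * max(Q[(s2, b)] for b in ACTIONS))
        Q[(s, a)] += alpha * (target - Q[(s, a)])
        s = s2

# After training, the greedy action from every interior state points
# toward the goal: expected output [1, 1, 1]
print([max(ACTIONS, key=lambda b: Q[(s, b)]) for s in (1, 2, 3)])
```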


havenyahon t1_iw1eui3 wrote

>All we have to do now is find ways to cut corners, and once enough corners are cut, the machine will learn to cut them by itself.

Yeah it all sounds pretty familiar! We've heard the same thing for decades. I guess we'll have to continue to wait and see!
