
MassiveIndependence8 t1_iyuew5r wrote

Not really, it reasons really well. I test drove it last night, writing some code with it, and it gave me a pretty accurate response based on the prompt I gave it. I told it to write me a Python script that generates a bunny jumping around in the terminal; mind you, I Googled it beforehand to make sure there’s no clear or easy answer to it. It gave me a piece of code that did exactly that, albeit I had to debug it (also with its aid). This process, especially the debugging, would’ve cost me hours of my time, but since it can actually somewhat understand what it reads, it can spit back the relevant information and even apply it to the specific context of my program. It codes better than most undergrads, so no, it’s definitely not just Google, because Google cannot give me an answer specific to my question, let alone show me how to apply that piece of knowledge to my particular circumstance.
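For what it’s worth, a bare-bones version of what such a script might look like is below. This is my own illustrative sketch, not the code the model actually produced; the ASCII art, layout, and timings are all made up.

```python
# Illustrative sketch only -- not the model's actual output. An ASCII-art bunny
# "hops" across the terminal by being reprinted at a new position each frame.
import os
import time

BUNNY = r"""
(\_/)
( . .)
c(")(")
"""

def draw(x_offset: int, height: int) -> None:
    """Clear the screen and print the bunny shifted right and up."""
    os.system("cls" if os.name == "nt" else "clear")
    print("\n" * (3 - height), end="")          # fewer leading newlines = higher hop
    for line in BUNNY.strip("\n").splitlines():
        print(" " * x_offset + line)

def main(hops: int = 5) -> None:
    x = 0
    for _ in range(hops):
        for height in (0, 1, 2, 1, 0):          # simple up-then-down arc
            draw(x, height)
            time.sleep(0.1)
            x += 1                              # drift to the right while hopping

if __name__ == "__main__":
    main()
```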

2

MassiveIndependence8 t1_iw1egj2 wrote

>The reason why they've failed is because we didn't understand the fundamentals. We still don't. That's the point. It's not backwards, that's where we should begin from.

Nope, they failed because there wasn’t enough data and the strategy wasn’t computationally viable. They did, however, have the “basic system” down, it’s just not very efficient from a practical standpoint. A sufficiently large neural net is mathematically proven to be able to approximate any continuous function; it just gets there in a very lengthy way and without providing much certainty about how close the approximation is. But yes, they did have A basic system down, they just hadn’t found the right system yet. All we have to do now is find ways to cut corners, and once enough corners have been cut, the machine will learn to cut them by itself. So no, we do not need to know the structural fundamentals of how a human mind works; we do, however, need to know the fundamentals of how such a mind might be created.
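For reference, the result being alluded to is the universal approximation theorem. In one common (Cybenko/Hornik-style) form, and glossing over the exact hypotheses, it says roughly:

```latex
% For any continuous f on a compact set K, any tolerance \varepsilon > 0, and a
% suitable non-polynomial activation \sigma, there exist a width N and
% parameters v_i, w_i, b_i such that
\left|\, f(x) - \sum_{i=1}^{N} v_i \,\sigma\!\left(w_i^{\top} x + b_i\right) \right| < \varepsilon
\qquad \text{for all } x \in K.
```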

We are finding ways to make the “fetus”, NOT the “human”.

Also, “emotions”, depending on your definition, certainly do come into play in the creation of AI; that’s the whole point of reinforcement learning. But the question is what the “emotions” are specifically catering to. In humans, emotions serve as a directive for survival. In machines, the reward signal is a device to deter the machine from pathways that result in failing a task and to nudge it towards pathways that are promising. I think we could both agree that we can create a machine that solves complicated abstract math problems without it needing to feel horny first.
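As a toy illustration of “reward as directive”: everything below (the one-dimensional environment, the thresholds, the numbers) is invented for the example, but it shows how the reward signal alone deters the agent from failure states and nudges it toward the goal.

```python
# Toy sketch of a reward signal acting as a "directive": the agent is nudged
# toward states that make progress and deterred from failure states.
# The environment and all numbers here are invented for illustration.
import random

GOAL = 10     # reaching position 10 counts as solving the task
FAIL = -3     # dropping below -3 counts as failure

def reward(position: int) -> float:
    if position >= GOAL:
        return 10.0    # the machine analogue of "joy": the task is solved
    if position <= FAIL:
        return -10.0   # the analogue of "fear": a pathway to steer away from
    return -0.1        # small per-step cost nudges the agent to keep making progress

def run_episode(policy) -> float:
    position, total = 0, 0.0
    for _ in range(100):
        position += policy(position)   # the policy picks a step of -1 or +1
        total += reward(position)
        if position >= GOAL or position <= FAIL:
            break
    return total

def random_policy(position: int) -> int:
    return random.choice([-1, +1])

def goal_seeking_policy(position: int) -> int:
    return +1   # always step toward the goal

print("random policy return:      ", run_episode(random_policy))
print("goal-seeking policy return:", run_episode(goal_seeking_policy))
```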

1

MassiveIndependence8 t1_iw19s7z wrote

That’s a bit backwards: what makes you think the “bunch of systems” will fall short in terms of breadth and complexity, and not the other way around? After all, without even knowing how to play Go, or how the human mind works when playing Go, researchers have created a machine that far exceeds what humans are capable of. A machine doesn’t have to mimic the human mind, it just has to be more capable. We are trying to create an artificial general intelligence, an entity that is able to instruct itself to achieve any goal within an environment. We only draw parallels to ourselves because we are the only general intelligence we know of, but we are not the only kind of AGI that is possible out there, not to mention that our brains are riddled with artifacts that are meaningless for true intelligence in the purest sense, since we were shaped for survival through evolution. Fear, the sense of insecurity, the need for intimacy, etc… are all unnecessary components for AGI. We shouldn’t expect the machines to be like us; they will be something much more foreign, like an alien. If it can somehow be smart enough, it would look at us the way we look at ants: two inherently different brain structures, yet one is capable of understanding the other better. It doesn’t need to see the world the way we do, it only needs to truly see how simple we all are and pretend to be us.

1

MassiveIndependence8 t1_ivprern wrote

We don’t need to recreate a complicated structure to prove that we can create AGI; we only need to find the jump-start seed from which the structure manifests itself, just like with a neural net. No one has tried to use a neural net to simulate a cockroach brain so far because no one really gives a shit about cockroaches. We do, however, care about art, so we created AI art generators whose inner structure we ourselves can’t exactly understand.

1

MassiveIndependence8 t1_iulhcka wrote

It only takes one to tip everything over. It’s like the atomic bomb: I imagine that had atomic energy been discovered at a less urgent time than WW2, the debates that arose would’ve been exactly the kind you’ve mentioned above. And just like the atom bomb, it only takes one country heeding the call of power for everyone else to follow, as it would be too much of a threat not to do something. If China amassed an army of cyborgs capable of processing and transferring information faster than any living thing, and stronger than any organism we’ve seen before, then it is simply a matter of survival for the US to enter the biotech race. Politics would die and realpolitik would emerge.

5

MassiveIndependence8 t1_irscqub wrote

There’s nothing inherently “supernatural” about being biological; funnily enough, it’s the most “natural” thing out there. Pedantry aside, I understand where you’re coming from, so I’ll take a crack at your argument. You seem to have a problem with equating two sets of characteristics from two inherently different structures. After all, machines aren’t made of what we are made of, and aren’t structured the way we are, so how can we compare the traits of such seemingly different machines and assert that they are somehow equivalent? How can we be sure that their “consciousness”, if we can call it consciousness at all, is similar to our consciousness? If you define consciousness that way and confine it to biological structure, then sure, I agree that consciousness can never arise from anything that is not biological.

But that’s not a very helpful definition. Say a highly intelligent group of aliens were to come down to Earth and we discovered that they are a silicon-based life form as opposed to our carbon-based one. Even worse, we realized that their biology and brain structures are wired differently from ours. Would you then assert that these beings have no consciousness at all, simply because they are different from us? That a whole race with science, art, and culture, which “seems” to feel pain, joy, and every emotion out there, is nothing but a collection of automatons?

Before you brush this off as a stupid hypothetical, it does point to an interesting fact and the dilemma that comes with it.

Every function out there can be modelled and recreated with neural networks

That is a fact that has been mathematically proven (the universal approximation theorem); you can read up on it in your free time, but the main point I’m trying to make is that the human mind, just like anything else in the universe, is a function: a mapping from a set of inputs to a set of outputs. We temporally map our perceptions (sight, hearing, taste, …) into actions in the same way that a function maps an input to an output. Give a Turing machine enough computational power and it can simulate the way a human behaves. It’s only a matter of time and data until such a machine exists.
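As a miniature demonstration of that approximation claim, here is a hypothetical sketch (not from the original comment; the architecture and hyperparameters are arbitrary): a one-hidden-layer network fit to sin(x) by plain gradient descent.

```python
# Minimal sketch of the universal-approximation idea: a one-hidden-layer
# network trained to fit sin(x) with hand-written gradient descent.
import numpy as np

rng = np.random.default_rng(0)
X = np.linspace(-np.pi, np.pi, 200).reshape(-1, 1)
y = np.sin(X)

hidden = 32
W1 = rng.normal(0, 1.0, (1, hidden)); b1 = np.zeros(hidden)
W2 = rng.normal(0, 0.1, (hidden, 1)); b2 = np.zeros(1)
lr = 0.05

for step in range(5000):
    h = np.tanh(X @ W1 + b1)          # hidden activations
    pred = h @ W2 + b2                # network output
    err = pred - y
    loss = np.mean(err ** 2)

    # backpropagation by hand
    d_pred = 2 * err / len(X)
    dW2 = h.T @ d_pred;               db2 = d_pred.sum(axis=0)
    dh = d_pred @ W2.T
    dz = dh * (1 - h ** 2)            # derivative of tanh
    dW1 = X.T @ dz;                   db1 = dz.sum(axis=0)

    W2 -= lr * dW2; b2 -= lr * db2
    W1 -= lr * dW1; b1 -= lr * db1

print(f"final mean-squared error: {loss:.5f}")  # should be small, i.e. sin(x) is well approximated
```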

But are those machines actually “conscious“? Sure, they act like we do in every scenario out there because they are functionally similar. But they aren’t like us, because they aren’t using the same hardware to compute; or, even worse, they might not even perform the same computation as we do. They might arrive at the right answer, but they could get there differently than we do.

So there are two sides of the argument, depending on the definition you use. I’m on the side of “if it quacks like a duck, then it is a duck”. There’s no point in arguing about nomenclature that distinguishes things that are essentially indistinguishable to us from the outside.

10