Shiningc

Shiningc t1_je0eu7w wrote

"General intelligence" is an intelligence that is capable of any kind of intelligence. Sentience is a kind of an intelligence. We have yet to have a sentient AI. Not even close.

It makes no sense for a corporation to release a goose that lays golden eggs to the public. If they really had an AGI, they could just use it to produce as many innovations as possible. They could just fire every employee except for a few. People have way too much wishful thinking because they so badly want to believe that someone has created an AGI.

1

Shiningc t1_je0e97u wrote

And why would a corporation release an AGI to the public? It's a goose that lays golden eggs; they would not let their rivals have access to such a thing even if they had it. It makes no sense, and people are eating up corporate PR like the gullible fools that they are.

Corporations only release things that are "moderately useful", not revolutionary on the scale of AGI.

−1

Shiningc t1_je07kk4 wrote

An AGI isn't just a collection of separate single-purpose intelligences or narrow AIs. An AGI is a general intelligence, meaning an intelligence that is capable of any kind of intelligence. It takes more than just being a collection of many narrow AIs. An AGI is capable of, say, sentience, which is a type of intelligence.

2

Shiningc t1_jdz9ore wrote

The point is that it neither flies nor sails. It's basically "cargo cult science" where it only looks like a plane.

>LLMs are capable of completing functions that were previously only solvable by human intellects

That's only because those functions were already solved by the human intellect. It's only a mimicking machine.

0

Shiningc OP t1_jd2dkvu wrote

Turing came up with a model of a theoretical general-purpose computer called the Turing machine. A system that can simulate it is called Turing complete, a property that pretty much all modern general-purpose CPUs abide by.

>A Turing machine is a general example of a central processing unit (CPU) that controls all data manipulation done by a computer, with the canonical machine using sequential memory to store data.

https://en.wikipedia.org/wiki/Turing_machine
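
To make the idea concrete, here is a minimal sketch of a single-tape Turing machine simulator in Python. It's purely illustrative: the `run_turing_machine` function and the binary-increment machine below are made-up examples under my own naming, not anything from the linked article or from Turing's paper.

```python
# A minimal sketch of a single-tape Turing machine simulator (illustrative only).
# The example machine increments a binary number written on the tape and halts.

def run_turing_machine(transitions, tape, start_state, accept_state, blank="_"):
    """Run a single-tape Turing machine until it reaches accept_state.

    transitions maps (state, symbol) -> (new_state, new_symbol, move),
    where move is -1 (step left) or +1 (step right).
    """
    tape = dict(enumerate(tape))  # sparse tape: position -> symbol
    state, head = start_state, 0
    while state != accept_state:
        symbol = tape.get(head, blank)
        state, new_symbol, move = transitions[(state, symbol)]
        tape[head] = new_symbol
        head += move
    # Read the tape back in positional order, dropping surrounding blanks.
    return "".join(tape[i] for i in sorted(tape)).strip(blank)

# Hypothetical example machine: binary increment (head starts at the leftmost bit).
# q0: scan right to the end of the input; q1: add 1 with carry while moving left.
transitions = {
    ("q0", "0"): ("q0", "0", +1),
    ("q0", "1"): ("q0", "1", +1),
    ("q0", "_"): ("q1", "_", -1),
    ("q1", "1"): ("q1", "0", -1),    # 1 + carry -> 0, keep carrying
    ("q1", "0"): ("halt", "1", -1),  # 0 + carry -> 1, done
    ("q1", "_"): ("halt", "1", -1),  # ran off the left edge: prepend a 1
}

print(run_turing_machine(transitions, "1011", "q0", "halt"))  # -> "1100"
```

The point of the sketch is just that the machine is nothing but a finite transition table plus an unbounded tape; anything that can emulate that table-plus-tape loop is Turing complete.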

1

Shiningc OP t1_jd1d3ns wrote

Again, how would you come up with mathematical axioms with just probabilities?

That contradicts Gödel's incompleteness theorems, which have mathematically proven that you cannot come up with mathematical axioms from within a mathematical system.
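
For reference, the first incompleteness theorem is usually stated along these lines (a standard textbook paraphrase, so take the exact wording as an approximation rather than a quotation):

```latex
% First incompleteness theorem (informal standard statement):
% If T is a consistent, recursively axiomatizable theory that interprets
% enough arithmetic, then there is a sentence G_T such that neither G_T
% nor its negation is provable in T:
T \nvdash G_T \qquad\text{and}\qquad T \nvdash \lnot G_T .
```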

Even if you could replicate a biological neural network, which happens to be Turing complete, that still says nothing about programming human-level intelligence, which is a different matter altogether.

1