
Surur t1_itcor4a wrote

They fail because they are too expensive in terms of human labor. Take the humans out of the loop and they become much more plausible.

14

purple_hamster66 t1_itdg5rd wrote

Have you seen how much robots cost?

2

Surur t1_itdgyk1 wrote

Money is something humans use. When robots run the mines, the foundries, and the factories, it isn't really needed anymore.

13

KingRamesesII t1_itdr8t2 wrote

I agree. To look at it another way, money is time. Maybe time x energy.

Robots have infinite time and access to energy (ultimately from the sun), so money won’t be needed in a post-scarcity society where everything is abundant due to top-down robot vertical integration.

If the robots are aligned and are unconscious intelligences, then there are no ethical pitfalls, and we can have our Star Trek moneyless utopia.

But we’ll probably have WW3 first.

10

purple_hamster66 t1_itdsqkt wrote

AGI doesn’t give us cheap robots, does it? Imagine a robot building a mining robot that isn’t fully trained and ends up collapsing the mine, burying all the other robots down there. Are you just going to build a new set of digger robots to rescue the buried ones? Where does this end?

−1

KingRamesesII t1_itdtjzz wrote

I should have clarified. I agree with Sam Harris when he explains that AGI is effectively ASI. AIs are already superhuman in every narrow case, and with perfect memory. So when you create the first AGI, it will actually be smarter than any human that has ever lived.

So in your case, you wouldn’t have to worry about mining, because the AGIs assigned to mining would be the best miners in history, better than any human could ever be.

6

milkomeda22 t1_itflyko wrote

We also need to account for the enormous energy and resource consumption needed to run an ASI. In the best case, we would need a data center with at least 15,000,000 servers (Google has only about 1,000,000). With that much equipment and today's architectures, hardware will fail very often and has to be serviced promptly. A decentralized system is one solution, but it has problems of its own; it would look more like swarm intelligence. Alternatively, we could train biological nerve tissue, which learns faster and more efficiently. But how do we create such a smart AI? We are limited, and we can't do it on our own. We could instead try to create an environment for the evolution of millions of scanned connectomes, using a system that simulates biological processes. What we need is a self-organizing, asynchronous system, which is exactly what the brain is. Then all that's left is to bring that system to operability over a few hundred years and wait for the singularity.
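A rough back-of-the-envelope illustration of the failure-rate point (the 15,000,000-server figure is the one above; the 5% annual failure rate is an assumed, hypothetical number):

```python
# Back-of-the-envelope: how often hardware fails in a very large fleet.
# Assumptions (hypothetical): 15,000,000 servers, 5% annual failure rate per server.
servers = 15_000_000
annual_failure_rate = 0.05   # assumed; real rates vary by hardware, load, and age

failures_per_year = servers * annual_failure_rate
failures_per_day = failures_per_year / 365

print(f"~{failures_per_year:,.0f} failures/year, ~{failures_per_day:,.0f} per day")
# -> roughly 750,000 failures a year, about 2,000 servers to repair or replace every day
```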

1

Wassux t1_itfmibw wrote

What are you talking about? AGI will probably use only slightly more energy than a human, and it doesn't need data centers at all because we would use edge AI.

1

milkomeda22 t1_itfncec wrote

>What are you talking about? AGI will probably use only slightly more energy than a human, and it doesn't need data centers at all because we would use edge AI.

This works with targeted tasks like mining, but we need centralized processing to make long-term plans.

1

Wassux t1_itfxh7n wrote

I know, but why do you think that would need that much storage and processing power? Humans are already smart enough to do that, and we use about 25 watts of power. The future for AI processing centers is analog, and it won't use much more power than that.
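A rough comparison with assumed numbers (the 25-watt figure is the one above; the 700-watt figure is a hypothetical assumption for one high-end AI accelerator):

```python
# Rough energy comparison: human brain vs. one AI accelerator card.
brain_watts = 25           # figure used above for the human brain
accelerator_watts = 700    # assumed draw for one high-end training accelerator

print(f"One accelerator draws about {accelerator_watts / brain_watts:.0f}x a human brain.")
# -> about 28x; the argument above is that analog/edge hardware could close this gap
```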

1

insectpeople t1_itek3gs wrote

Correct.

Any AGI will be communist.

It’s an absurdity to assume that an advanced intelligence would keep using our primitive, barbaric capitalist system, with so much garbage still hanging on from medieval feudalism, when we already have theorists who have been able to model what will come afterwards.

It’s possible an AGI will even be able to internally model what will come after a communist system, too, although it seems like it would need to transition us via a communist system first to get there.

5

SoylentRox t1_itdl8hg wrote

Robots cost so much money mostly because

(1) high-end robots are made in small numbers and are mostly built by hand, by humans

(2) IP licensing for high-end components (lidars, high-power motors, advanced gearing systems, etc.)

So in theory an AGI would need some starter money, and it would pay humans to make better robots in small numbers. Those robots would be specialized for making other robots, targeting whatever the most expensive part of the process is. The next generation of robots is then cheaper, and those robots are sent to automate the second most expensive part of the process, and so on.
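A toy sketch of that bootstrapping loop, with entirely made-up numbers (the step names and costs below are hypothetical, not from any real robot bill of materials):

```python
# Toy model of the loop described above: each robot generation automates whatever
# is currently the most expensive step of building a robot.
# All numbers are hypothetical and for illustration only.
step_costs = {                  # cost (USD) of each step of building one robot
    "precision assembly": 40_000,
    "actuators & gearing": 30_000,
    "sensors / lidar": 20_000,
    "electronics & wiring": 10_000,
}
AUTOMATED_FRACTION = 0.2        # assumed: automating a step cuts its cost to 20%

for generation in range(1, len(step_costs) + 1):
    target = max(step_costs, key=step_costs.get)   # most expensive remaining step
    step_costs[target] *= AUTOMATED_FRACTION       # next generation automates it
    total = sum(step_costs.values())
    print(f"gen {generation}: automated {target!r}, a robot now costs ~${total:,.0f}")
# -> the per-robot cost falls from $100,000 to $20,000 over four generations
```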

Assuming the AGI has enough starter money, it can automate the entire process of making robots. It can also earn money back to keep itself funded by having the robots make things for humans and sell them to humans.

The IP is solved in a similar way: the AGI would need to research and develop its own designs, free of having to pay license fees for each component.

2

purple_hamster66 t1_itdtf1z wrote

I agree that robots building robots is the ultimate solution, but the question was about how to get to that point: the implementation is where we fail.

1

SoylentRox t1_itf7zug wrote

I go over how to do that in my post. The rest is a lot of reinforcement learning.

2

purple_hamster66 t1_ith1bl8 wrote

Yes, but it’s the “some starter money” that’s the Achilles’ heel. It sounds to me like Underpants Gnomes-style financing.

1

SoylentRox t1_ithqsxp wrote

? We don't have working AGI yet. But its would-be funders have $250 billion+ in revenue.

There are no gnomes. It's:

(1) a megacorp like Google/Amazon/Facebook develops AGI

(2) the megacorp funds the massive amount of inference-accelerator hardware needed to run many instances of the AGI software (the robots are the cheap part; the expensive part is the chips the AGI uses to think). The software is not a singleton; there are many variants and versions.

(3) the megacorp makes a separate business division and spins it off as an external company for an IPO, such that the megacorp retains ownership but gets hundreds of billions of dollars from outside investors.

(4) outside investors aren't stupid. They can and will see immediately that the AGI will quickly ramp up to near-infinite money, and they will price the security accordingly.

(5) with hundreds of billions in starter money, the AGI starts selling services to earn even more money and building lots of robots, which will ultimately be used to make more robots and inference-accelerator cards. Ergo exponential growth, ergo the singularity.

Frankly, do you know anything about finance? This isn't complicated. For a real-world example of this right now, see Waymo and Cruise. Both are preparing exactly this kind of IPO for a lesser use of AI than AGI: autonomous cars.
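A toy compounding sketch for step (5), with hypothetical numbers (the initial fleet size and build rate below are assumptions, not projections):

```python
# Toy compounding model: robots building more robots with the starter money.
# All numbers are hypothetical.
robots = 1_000      # assumed initial fleet bought with starter money
build_rate = 0.5    # assumed: each robot adds half a new robot per year

for year in range(1, 11):
    robots = int(robots * (1 + build_rate))
    print(f"year {year}: {robots:,} robots")
# -> roughly 58,000 robots after 10 years; the claim above is that this kind of
#    compounding (robots plus accelerator cards) is what produces the singularity
```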

1

purple_hamster66 t1_iti875p wrote

Are you really suggesting funding mechanisms before we even have an inkling of the tech? Extending your outrageous thinking, maybe AIs will get their own funding by manipulating markets, and won’t need humans for funding? :)

The tech:

  • I have not yet seen a Level 5 self-driving car (in the wild, not in a constrained parking lot).
  • I used DALL-E (v1) and got 96% junk images. My 4-year-old niece draws better.
  • Almost no one bid on OpenAI, and the one bid they got was only $1B, which is not a lot of money for a tech you think is going to go exponential. Even at OpenAI, only 50% of workers think AGI is going to happen in the next 15 years, which is several lifetimes in terms of tech.
  • Amazon runs robots in its warehouses, but they caused 14,000 serious injuries in 2019. Five workers died in a single accident in 2022!

I feel you are putting the cart before the horse. Convince me otherwise, please.

1

SoylentRox t1_iti8nsc wrote

I am saying that if we have AGI as we have defined it, funding it is simple.

Also, we know exactly how AGI will work, as we nearly have it; pay attention to the papers.

The people building it have outright explained how it will work; just go read the GATO paper or LeCun's.

These systems cannot manipulate markets.

1

purple_hamster66 t1_itic5f3 wrote

AI and ML have been in use on Wall Street at least since my colleague implemented it for a cluster there in 2015, for something called program trading, which chooses and trades stocks all on its own. It’s only gotten more predictive since, and they have billions to spend on it. They also use it in FinTech to predict actions, trained on huge data lakes, because it makes them money, and yes, it can drive funding decisions. It won’t be long until it decides to siphon money off to its collaborating AI accounts in other companies. Imagine finding out that a shell company is actually being run by an AI that makes better and faster decisions than any human could.
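For readers unfamiliar with the term, program trading at its simplest is rule-based automated trading. A minimal, hypothetical sketch (not the colleague's system, and far simpler than the ML-driven tools described above):

```python
# Minimal, hypothetical sketch of rule-based program trading: a moving-average
# crossover that decides on its own whether to buy, sell, or hold.
def signal(prices: list[float], short: int = 5, long: int = 20) -> str:
    """Return 'buy', 'sell', or 'hold' given a price history."""
    if len(prices) < long:
        return "hold"
    short_avg = sum(prices[-short:]) / short
    long_avg = sum(prices[-long:]) / long
    if short_avg > long_avg:
        return "buy"
    if short_avg < long_avg:
        return "sell"
    return "hold"

print(signal([100 + 0.5 * i for i in range(30)]))   # steadily rising prices -> 'buy'
```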

I’ll go read those papers now. Thanks for the hints.

1

SoylentRox t1_iticdbm wrote

The GATO paper is one, yeah.

HFT isn't the same kind of AI, and there is a problem with training these systems to manipulate markets: the behavior is too complex to simulate.

1

purple_hamster66 t1_itiimg4 wrote

They don’t simulate the entire market, just individual stocks and their derivatives. But this was 7 years ago and that was just a starting point that they upgrade every 6 months, sooo…. 14 generations ago.

1

SoylentRox t1_itls3hm wrote

There are, again, problems with this that limit how far you can get. The market is zero-sum. Ultimately, creating your own company, or buying one, and producing real value may pay more than manipulating the market.

1