Wroisu t1_je7gbqk wrote

The Alcubierre metric, a unification of GR and QM, plus making the Casimir effect work at classical scales would probably get you there, maybe. Having more intelligence might help you suss out solutions to problems we can't solve yet, like the unification of GR and QM... if things like FTL are possible, it'll pop out of whatever unifies those two frameworks.

6

Wroisu t1_j9042i4 wrote

Cognitive ability doesn't translate into immediate R&D. You could think up a trillion ways to do something, each better than the last, but for every iteration of your idea you still have to build the equipment that actually does the thing you want to research.

That doesn't mean it won't be quick, just that these things aren't magic, as you seem to be suggesting immense intellect would be.

Eventually you get to the point where Arthur C. Clarke's "any sufficiently advanced technology is indistinguishable from magic" holds true, but that doesn't happen overnight.

1

Wroisu t1_j9037wv wrote

For the Earth, the gravitational binding energy is about 2x10^32 joules, or roughly a week of the Sun's total energy output, Mr. Big Thinker.
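That figure can be sanity-checked with the uniform-density approximation U = 3GM²/(5R); the constants below are standard round values, and the real binding energy is a bit higher (~2.5x10^32 J) because Earth's core is denser than its mantle:

```python
# Back-of-the-envelope check of Earth's gravitational binding
# energy, using the uniform-sphere approximation U = 3*G*M^2/(5*R).
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_earth = 5.972e24   # Earth mass, kg
R_earth = 6.371e6    # Earth radius, m
L_sun = 3.828e26     # solar luminosity, W

U = 3 * G * M_earth**2 / (5 * R_earth)   # binding energy, J
days = U / L_sun / 86400                 # equivalent days of solar output

print(f"U = {U:.2e} J, or about {days:.1f} days of the Sun's output")
```

Either way, the answer lands on the order of 10^32 joules, roughly a week of the Sun's entire output.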

There's no way an AI would randomly be able to control that amount of energy without us knowing about the mechanisms used to control it, let alone seeing the structures built to move that energy around in a useful way.

Not understanding how physics works and thinking that AI will suddenly rewrite it one day is what you get when you browse an echo chamber for your information on such things.

2

Wroisu t1_j901zs1 wrote

The point of the novel(s) is to explore those complex topics. I'm not saying that's what it'll be like, but it gives a perspective on what it could be like.

Similar to Star Trek and its commentary on capitalism, or The Three-Body Problem and its explanation for the Fermi paradox, and so on.

As for technology beyond our comprehension: that technology, as high and mighty as it may be, will still be based on physical principles we know of.

And even the technology that’s born out of principles we’ve yet to discover will come out of the unification of things we already know, like general relativity and quantum mechanics.

You could create extremely hard materials by manipulating the strong nuclear force over large distances; that would be extremely exotic by our standards, but not outside the realm of possibility. Stuff like that is what the singularity would allow. Is it impossible to comprehend? Not really.

3

Wroisu t1_j900ex1 wrote

It's not claiming to know. It's doing what any good science fiction does: extrapolating what we know to logical conclusions to create interesting narratives, and commenting on the current social, technological, and political climate.

The Culture novels are known for that; don't knock them until you've read them.

5

Wroisu t1_j8zzz3h wrote

I'm not confusing them; I know my definitions. I specified post-singularity because the books I recommended are premised on humanoids being in a symbiotic relationship with hyper-intelligent artificial intelligences called Minds.

Post-singularity implies post-scarcity, which we have already reached in some respects (like food); we just don't distribute it properly.

2

Wroisu t1_j8zx6o6 wrote

Why is it scary? If things go right, you'll have the free time to learn and do as you please, with the resources to pursue it to the fullest extent.

In the meantime, learn all you can about these topics and how they relate to things as a whole, like medicine, climate change, civil rights, politics etc.

If you want books to read about these things, I recommend Look to Windward or The Player of Games. They delve into what a post-singularity society might look like (under ideal circumstances).

2

Wroisu t1_j8zvra4 wrote

Bacteria did not create humans, though. Maybe they did in some abstract sense, but bacteria did not actively work to create humans.

A superintelligence would most definitely retain interest in its progenitor species, precisely because it was created by them.

The relationship would be more like that between grandchildren and their grandparents.

1

Wroisu t1_j8yredb wrote

"There was also the Argument of Increasing Decency, which basically stated that cruelty was linked to stupidity and that the link between intelligence, imagination, empathy and good-behaviour-as-it-was-generally-understood, i.e. not being cruel to others, was as profound as these matters ever got."

This is the Argument of Increasing Decency. It basically says that cruelty and petty violence are a result of stupidity, and that any genuine superintelligence would be benevolent by virtue of being superintelligent.

2

Wroisu t1_j6nvoya wrote

Read the Culture series by Iain M. Banks, specifically Look to Windward or Surface Detail... Now, back in reality, how will society benefit from AGI? Eventually, the hope is that it will be able to do any human labor, freeing humans up to do whatever we like.

Besides automating jobs, AGI would have the ability to reason through mountains of data that would take humans centuries or millennia to get through. That alone would help us advance much more quickly in all fields: nuclear fusion, protein folding, agriculture, and so on.

AI is only doom and gloom under capitalism, really.

This video covers most talking points:

https://youtu.be/8nt3edWLgIg

11

Wroisu t1_j64p21w wrote

Hopefully something akin to Iain M. Banks's "Argument of Increasing Decency" turns out to be true, which holds:

“There was also the Argument of Increasing Decency, which basically held that cruelty was linked to stupidity and that the link between intelligence, imagination, empathy and good-behaviour-as-it-was-generally-understood – i.e. not being cruel to others – was as profound as these matters ever got.”

11

Wroisu t1_j2pureb wrote

The argument I'd give in return is that it only appears locally flat (local here meaning the entire observable universe) because the whole thing is much larger than 93 billion light-years across. It's as if your entire observable universe were Kansas, but you didn't know Kansas was part of a globe.

The margin of error still allows positive curvature of up to about 0.4%, so it's within the limits of what's known and possible.
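You can turn that bound into a minimum size for the hypersphere. Assuming |Ω_k| ≤ 0.004 and a round Hubble constant of 70 km/s/Mpc (both assumed values, not a precise fit), the radius of curvature satisfies R ≥ (c/H0)/√|Ω_k|:

```python
import math

# Minimum radius of curvature for a closed universe consistent
# with a |Omega_k| <= 0.004 curvature bound (assumed figure).
c = 2.998e5            # speed of light, km/s
H0 = 70.0              # Hubble constant, km/s/Mpc (assumed round value)
mpc_to_gly = 3.2616e-3 # 1 Mpc = 3.2616 million light-years

hubble_radius_gly = (c / H0) * mpc_to_gly        # ~14 Gly
R_min_gly = hubble_radius_gly / math.sqrt(0.004) # ~220 Gly

print(f"Minimum radius of curvature: {R_min_gly:.0f} Gly")
print(f"Minimum circumference: {2 * math.pi * R_min_gly:.0f} Gly")
```

That's a circumference of well over a thousand billion light-years, versus an observable universe only 93 billion light-years across: Kansas on a globe.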

−2

Wroisu t1_j2p7dn0 wrote

Yes, but in the case that the universe is just the 3D surface of a hypersphere, it would also be expanding, and expanding faster than you could move to come all the way back around again.

This is what Carl Sagan meant by "finite but unbounded".

1

Wroisu t1_j2p747l wrote

The 3D universe can be thought of as the surface of an expanding hypersphere. If the universe weren’t expanding, you could go all the way around and come back to where you started.

But since it’s expanding, you’ll never be able to move fast enough to come all the way back around again.

A "finite but unbounded" universe.
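The "never catch up" part can be made concrete with a toy model. In a universe expanding exponentially, a(t) = e^(Ht) (a simplified de Sitter picture, not the real ΛCDM history), the total comoving distance a photon ever covers converges to c/H, taken here as a round 14 Gly:

```python
import math

# Toy de Sitter model: scale factor a(t) = exp(H*t). The comoving
# distance light covers from t=0 to T is the integral of c/a(t) dt,
#   d(T) = (c/H) * (1 - exp(-H*T)),
# which approaches but never exceeds c/H. If the hypersphere's
# comoving circumference is larger than c/H, light never wraps around.
c_over_H = 14.0  # Hubble radius in Gly (assumed round value)

def comoving_distance(T):
    """Comoving Gly covered after T Hubble times of travel."""
    return c_over_H * (1.0 - math.exp(-T))

for T in (1, 10, 1000):
    print(f"after {T:>4} Hubble times: {comoving_distance(T):.3f} Gly")
```

However long the photon travels, the distance covered saturates at 14 Gly, so on a sphere whose circumference is far larger (see the ~1,400 Gly lower bound above), it can never come back around.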

−1