ttkciar

ttkciar t1_jd101o4 wrote

I will add to this that the lifespan of a civilization after its industrial revolution might be quite short.

Our own industrial revolution has given rise to two existential crises so far -- the threat of global thermonuclear war, and the threat of climate change. The first seems to be behind us, mostly, maybe, but there were some really close calls during the Cold War. We came this || close to going out forever. The second has yet to fully play out.

Those are just the existential crises which have emerged in the 260 years since industrialization began -- less than the blink of an eye on a cosmological timeframe. If we survive this one, there will doubtless be more.

For all we know, all civilizations in the galaxy follow the pattern of a long pre-industrial existence (3.4 million years, in our case), followed by a very short industrial period, ending in annihilation.

If that's typical, then at any given moment technologically advanced civilizations would account for only about 0.008% of all alien civilizations -- 260 years out of roughly 3.4 million.
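Back-of-the-envelope, using those two numbers (a quick sketch, not a serious Drake-equation term):

```python
# Fraction of a civilization's lifetime spent industrialized, assuming
# the pattern above: ~3.4 million pre-industrial years, ~260 industrial ones.
pre_industrial_years = 3_400_000
industrial_years = 260

fraction = industrial_years / (pre_industrial_years + industrial_years)
print(f"{fraction:.3%}")  # ~0.008%
```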

1

ttkciar t1_jag8318 wrote

Sheer volume of intellectual work.

It takes tens of thousands of high-end compute-intensive servers to match the state change rate of an adult human brain.

If that's proportional to capacity for intellectual work, it would take low hundreds of trillions of servers to match the work capacity of the human race.

There aren't trillions of servers. There aren't even billions.
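For anyone who wants the arithmetic behind "low hundreds of trillions" (the per-brain figure is an assumed midpoint of "tens of thousands"):

```python
# Rough scaling: servers per adult brain times the human population.
servers_per_brain = 30_000          # assumption: "tens of thousands"
human_population = 8_000_000_000    # ~8 billion people

total = servers_per_brain * human_population
print(f"{total:.1e} servers")  # ~2.4e+14, i.e. low hundreds of trillions
```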

1

ttkciar t1_jaae8nc wrote

Artificial General Intelligence, or AI which thinks about the world in a general way and solves problems with a cognitive process analogous to our own.

https://wikipedia.org/wiki/Artificial_general_intelligence

Large language models like ChatGPT are "narrow AI". They are statistics engines operating upon word sequences; they are not capable of understanding the world, nor do they have anything remotely resembling cognitive processes.

https://wikipedia.org/wiki/Language_model
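To give a feel for what "statistics engine operating upon word sequences" means, here is a toy bigram model -- a deliberately crude sketch, nothing like the scale or architecture of an actual LLM:

```python
from collections import Counter, defaultdict

# Toy bigram "language model": count which word follows which, then
# predict the most common successor. No understanding involved, just
# statistics over word sequences.
corpus = "the cat sat on the mat the cat ate the fish".split()

successors = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    successors[word][nxt] += 1

def predict_next(word):
    """Return the statistically most likely next word, if any."""
    counts = successors.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # "cat" -- the most frequent successor of "the"
```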

Cognitive scientists currently lack a sufficiently complete theory of intelligence for us to design AGI. Work is ongoing, but there's no way to predict when or if the relevant gaps in cognitive theory will be filled.

4

ttkciar t1_j9s68m4 wrote

"The Matrix", like Catspaw129 said.

Early in the movie, when the gang is getting ready to yank Neo out of the matrix, Switch calls him "coppertop". What that means isn't explained until later, when Morpheus holds up a Duracell battery, but he doesn't make the connection explicit. The script depended on the audience knowing that Duracells were colloquially known as "coppertops".

Edited: I thought it was Cypher who called him "coppertop", but Catspaw129 is right, it was Switch.

1

ttkciar t1_j9s3n4o wrote

People are naturally ignorant and do not know how to think well.

Filling their heads with well-integrated knowledge and inculcating habits of effective thinking requires education, a process which effectively engraves new neural pathways and remakes the child into something else.

Right now the most effective methods we have for that involve repeated mental exercises. For very young children this can be easy, because their minds are still extremely plastic, but as children grow older the process becomes increasingly painful. Reforging one's brain into something it's not is sharply at odds with our instinct for self-preservation. Once we have become a person, we want to continue being that person. But education remakes us into someone else -- someone better than we were.

Nobody is born with the self-discipline needed to make that happen. Part of the educational process is inculcating that self-discipline. While it is still being learned, a teacher must hold students to task to make up for the lack.

If AI is to solve the problem of people being stupid bags of mostly water, it needs to identify the students' points of cognitive weakness, provide them with instruction and exercises which strengthen those points, and hold them to task practicing those exercises over and over and over until weakness becomes strength.

To prevent the student from simply walking away from the AI tutor, the AI tutor would need to hold some kind of leverage over the student, so that walking away is more painful than performing the educational exercises.

This is treading dreadfully close to dystopian AI-apocalypse territory, but that's just an illustration of how nightmarish the educational process can be. If irons in the forge had mouths, they would certainly scream as they were beaten into steel with hammer and fire. So too are schools a crucible for transforming students.

Make no mistake, when we talk about AI solving this problem, we are talking about giving AI our children and a forge.

7

ttkciar t1_j9rjahf wrote

Having access to knowledge is not the same thing as being educated.

Education not only causes knowledge to be retained; it causes that knowledge to be integrated, so that we can view the world in a cross-disciplinary way, and it informs our behavior in ways both profound and mundane.

For example, when you see someone tailgating in traffic, you know they slept through their math and physics classes. No amount of information on tap will change their stupid behavior the way education could.

Even with all the information made available to the public about the coronavirus, people still act in stupendous ignorance of the consequences of their bad behavior, and spread the disease in entirely avoidable ways.

My hope is that LLMs like ChatGPT can be integrated into virtual tutors, so that students can get better individualized attention and receive the education we all need them to have.

6

ttkciar t1_j5vy5oz wrote

> Does anyone know how large these would be compared to a civilian energy reactor?

Tiny. A civilian energy reactor has to implement two heat-exchange loops -- one for transferring heat out of the core, and one for boiling water into steam to turn the turbines and then condensing it again.

For NTP there are no closed heat-exchange loops and no turbines. It's just a hot core in your reaction chamber, which heats the hydrogen you squirt onto it, and the hot hydrogen gas escapes out the rocket nozzle.

The smallest critical mass of plutonium forms a sphere about four inches across. In theory that's all you need in the reaction chamber, but in practice you will also want cladding so that your hydrogen reaction mass erodes the cladding and not the plutonium (else you'll be squirting plutonium out the rocket nozzle along with your hydrogen), and a bisecting neutron reflector shutter or something so you can turn the core on and off.
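Quick sanity check on "about four inches", using rough textbook figures for a bare Pu-239 sphere (~10 kg critical mass, ~19.8 g/cm^3):

```python
import math

# Bare-sphere critical mass of Pu-239, rough textbook figures.
mass_g = 10_000   # ~10 kg critical mass, no reflector
density = 19.8    # g/cm^3

volume = mass_g / density                        # cm^3
radius = (3 * volume / (4 * math.pi)) ** (1/3)   # cm
print(f"diameter ~{2 * radius / 2.54:.1f} inches")  # ~3.9 inches
```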

So, maybe something about twelve inches across? Still much smaller than a civilian power reactor.

1

ttkciar t1_j5pibl4 wrote

Yes and no.

The Archive is very responsive to content owners requesting that their property be removed from the site, either by sending a request to The Archive or by modifying their own robots.txt (the Wayback Machine checks the live robots.txt before serving up old content and uses it as a filter).
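The general mechanism looks roughly like this -- a sketch using Python's standard robotparser and the Archive's "ia_archiver" user-agent, not their actual code:

```python
from urllib.robotparser import RobotFileParser

# Sketch of a robots.txt-based filter: fetch the site's *live* robots.txt
# and only serve an archived URL if the live rules still allow it.
def may_serve_archived(url, robots_url):
    rp = RobotFileParser()
    rp.set_url(robots_url)
    rp.read()  # fetches and parses the live robots.txt
    return rp.can_fetch("ia_archiver", url)

print(may_serve_archived("https://example.com/old-page",
                         "https://example.com/robots.txt"))
```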

On the other hand, there's no law forbidding companies or individuals from keeping a local-only copy of others' IP for non-public use, so IA doesn't actually delete content. They "darken" it instead, which means it's still in the data cluster back-end but inaccessible from the outside.

32

ttkciar t1_j3b8bph wrote

In transcranial stimulation (for example rTMS) neurons are artificially stimulated into firing frequently, which reduces their firing threshold over time.

This renders those neurons more likely to fire more frequently for some time (several months to a year, in the case of rTMS).

This is used to treat problems like executive dysfunction, by increasing activity in the dorsolateral prefrontal cortex. Stimulating other regions helps with other problems.

4

ttkciar t1_j35gt9s wrote

If the gaskets are made of natural latex rubber (without intensive cross-linking as seen in automobile tires), then they will grow brittle and crack if they dry out too much.

Gaskets are traditionally made from natural latex rubber, but nowadays PU and PVC are frequently used; those are not susceptible to drying out, but they still grow brittle eventually (over several years) anyway.

15

ttkciar t1_j10znhn wrote

Relatedly, this prompted me to look up ground-to-ground anti-radiation missile systems, and afaict the only such systems that exist are China's B-611MR and Israel's Keres (a ground launching system for AGM-78).

That seems like a pretty grievous oversight. Without a ground-to-ground anti-radiation capability, what are a military's options for attacking an S-400 battery other than overrunning it with tanks or getting close enough with a sufficiently well-stealthed aircraft to launch air-to-ground anti-radiation missiles?

2