kalavala93 OP t1_j68nskz wrote

Excerpt from article:

"A common trope of science fiction is the depiction of nanobots, small robots moving in the body fixing wounds or healing diseases. Unfortunately, we will never be able to create these types of machines. The mechanisms inside a robot a few nanometers large will instantly melt together, while the small metallic arms and claws seen in science fiction would bend and stick to the surface of the particle."

^ can anyone with an interest in nanorobotics qualify this statement?

I kinda want little microbots fixing my wounds and keeping me young. Lol.

kalavala93 OP t1_j68ccbu wrote

I did. Some is always a compliment. One thing I read about is that the burden of knowledge is higher... it takes more time for people to learn something. But my problem with that take is that I feel good science is the ability to consolidate information. For example, for us to have a nuclear-powered engine in an aircraft carrier, we first had to have a diesel engine, which was born from the steam engine. I don't know anyone who makes steam engines anymore, nor does someone need to learn how to build a steam engine in order to build a nuclear reactor. Isn't science about consolidating old science?

kalavala93 t1_j687cdg wrote

To me, AI alignment means that, at a minimum, it has to not kill us. The problem with getting it to agree with us is that we can't even agree with each other. We don't even have a unified view of what AI alignment looks like... AI alignment in China could look like "help China, fight the USA". That makes things very complicated.

kalavala93 t1_j66yhtf wrote

I'm being downvoted because people don't like to hear negative things. I mean... this is the singularity subreddit. It's a subreddit whose whole purpose rests on an AGI bringing us there.

Suggesting the likely reality that AI is going to kill us ruins the singularity for everyone.

It's like telling Christians that their salvation is contingent on Christ coming back to redeem mankind, but that when he returns he'll commit mass genocide. That doesn't sit too well with them.

That said, I don't want AGI to do this, and I hope it doesn't. But AGI research is exploding while alignment research has gone NOWHERE meaningful at all. So yes, it is likely AGI will kill us. But there is a chance it won't.