Italiancrazybread1 t1_j9txjdp wrote

You can technically pick out very weak signals from the background noise if the signal is repeated continuously, or at least for long enough that the receiver can accumulate all the information in the signal.

This is how we are able to receive signals from the Voyager probes from so far away. The probes repeat their signal many times because here on Earth we likely won't get the full message on the first pass; with every repetition, we recover more and more of the message.
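The underlying trick is signal averaging: if the same message arrives many times, the zero-mean noise cancels out as you average while the signal adds up, so the signal-to-noise ratio grows roughly with the square root of the number of repeats. Here's a minimal Python/NumPy sketch of that idea (the signal shape, noise level, repeat counts, and the `snr_db` helper are all made up for illustration; this is not how the Deep Space Network actually decodes Voyager):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: a weak repeating "message" buried in noise.
# The signal amplitude is well below the noise floor, as with a
# distant probe's transmission.
n_samples = 1000
signal = 0.1 * np.sin(2 * np.pi * 5 * np.arange(n_samples) / n_samples)
noise_sigma = 1.0  # noise is 10x stronger than the signal

def snr_db(estimate, truth):
    """SNR of an estimate relative to the true signal, in dB."""
    noise_power = np.mean((estimate - truth) ** 2)
    return 10 * np.log10(np.mean(truth ** 2) / noise_power)

for n_repeats in (1, 10, 100, 10000):
    # Receive the same message n_repeats times, each with fresh noise.
    received = signal + noise_sigma * rng.normal(size=(n_repeats, n_samples))
    # Coherent averaging: independent zero-mean noise shrinks as
    # 1/sqrt(N) while the repeated signal stays put, so SNR gains
    # roughly 10*log10(N) dB.
    averaged = received.mean(axis=0)
    print(f"{n_repeats:6d} repeats -> SNR {snr_db(averaged, signal):6.1f} dB")
```

With these made-up numbers, one copy of the message sits about 23 dB below the noise floor; by 10,000 repeats the average climbs well above it, which is the "long enough time" part.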

2

Italiancrazybread1 t1_j7u09i4 wrote

AI will eventually be treated the same way nuclear threats were.

Numerous countries will do intense research; there will be an incident or two where people die; and ultimately one country might use it on another to inflict thousands or even hundreds of thousands of casualties. Then most countries will quickly move to ban and heavily regulate it: all network traffic will be monitored for signs of AI, countries will develop defensive tools to combat it (AI designed to hunt down and destroy other AI), development will be heavily disincentivized, and developing or deploying it may even invite military action from other countries.

I do believe countries will still allow certain types of specialized AI for tasks that are better handled by computers.

2

Italiancrazybread1 t1_j4bkyj0 wrote

One thing's for sure, they're going to microtransaction the hell out of programmable matter when it comes to market.

I think programmable matter will come inside a box in liquid form. You'll type what you want it to form on the outside of the box (or choose from a list), and it will form that shape exactly once. If you want 10 chairs, you have to buy 10 boxes. If you want it to be reprogrammable and turn into additional items, you have to pay for the subscription. And you gotta pay extra for all the bells and whistles.

*Heated seats and remote start come extra

1

Italiancrazybread1 t1_it8r2qn wrote

>Anything that changed the international order enough to support international institutions with real authority with respect to existential risk would likely have to be a devastating catastrophe in its own right. It seems unlikely we’ll make it to the path of “existential security” without taking some serious risks — which hopefully we survive to learn from.

This is exactly what's going to happen.

2