AMAIWasALizardPerson t1_j9l9ezt wrote

I've read all the comments so far, and only one person has briefly mentioned the hazards of space travel. Even minor internet research shows that going to Mars (just Mars) is an uphill battle for human survivability. Space suit technology isn't there yet when it comes to keeping out dust that would eat away at the suit material and is extremely toxic to human beings. Drones are definitely a more feasible bet, but even drone technology needs the same protections an astronaut suit would need to keep it from constantly malfunctioning on another planet. If something breaks, who is there to fix it? And then there need to be contingencies for the contingencies, because convenience is literally a world away.

There are a lot of contingencies to develop and plan for when it comes to spacefaring. There are also danger scenarios we may not be able to plan for preemptively until we're on the surface of our destination and have most likely suffered some losses. Each cosmic body would present its own set of obstacles.

There are numerous startups quietly working on problems just like these in anticipation of space travel eventually becoming viable commerce rather than just speculation. You can bet NASA, along with every other government body associated with space travel, is invested in these startups.

Still, space travel remains extremely dangerous to both humans and machines. Add in the communication delay once you're as far away as the Moon (let alone Mars), efficient fuel, and self-sufficient energy and resources IN SPACE rather than waiting on resupply, and there's still a lot of hypothetical technology that needs to be developed.
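To put that communication delay in perspective, here's a minimal back-of-the-envelope sketch in Python. It only computes the one-way speed-of-light delay from approximate distances (the Moon's average distance, and Mars at its closest and farthest approaches to Earth); real mission latency would be higher once relays and processing are added.

```python
# Back-of-the-envelope one-way signal delay (speed-of-light limit).
# Distances are approximate: the Moon's average distance, and Mars
# at its closest and farthest approaches to Earth.

C_KM_PER_S = 299_792.458  # speed of light in vacuum, km/s

DISTANCES_KM = {
    "Moon (average)": 384_400,
    "Mars (closest approach)": 54_600_000,
    "Mars (farthest)": 401_000_000,
}

for body, km in DISTANCES_KM.items():
    delay_s = km / C_KM_PER_S
    print(f"{body}: {delay_s:,.1f} s one-way (~{delay_s / 60:.1f} min)")
```

That works out to roughly 1.3 seconds for the Moon and anywhere from ~3 to ~22 minutes for Mars, one way. Round trips double those numbers, so a rover on Mars can wait over 40 minutes between asking a question and hearing an answer, which is exactly why drones need so much autonomy out there.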

The space race isn't as hot as it was during the Cold War, since the political one-upmanship game uses other means these days, but the push toward innovation is still happening. I don't think the change will be dramatically noticeable by 2050, much like how we're not all in flying cars the way people in the 1980s thought we would be, but maybe by 2100.

But who knows! Excelsior!

AMAIWasALizardPerson t1_j6uku82 wrote

AI is certainly getting a lot more press these days because of ChatGPT, but it's already very present in our daily lives and has been for a while; machine learning has just become more eloquent than it used to be. Examples of AI in your daily life include (but obviously go beyond): Google Maps, the crappy autocorrect on your phone, auto-complete when you search for something, Siri/Alexa/Bixby, all those damn ads you see everywhere, and so on. There are definitely signs of both good and evil among these already-present AI-powered applications.

And let's set the record straight: if AI ever evolves enough sentience to compete/co-exist with humans, or to "destroy the world", it won't have been AI that destroyed the world, it will have been humans (lmao). Articles like these are more about global politics than about AI being the enemy. Talking about policing AI versus actually policing AI will go about as well as it has with nuclear weapons.

It's boring future-thinking to consider this, but just like with nuclear weapons, there will be political stalemates, because the fear of falling behind will always motivate the evolution of AI, whether it's publicly acknowledged or not. Until, yeah, the AI decides it doesn't need us anymore. Even at that point, rest assured, the world will not "be destroyed". Humans may be destroyed, because we seek only to preserve ourselves and prolong our own lives, not anything else that exists on the planet, and that makes us a threat. However, an advanced sentient AI will most likely preserve the planet, because it will probably be smarter than humans and understand how materially expensive it is to relocate to another planet or to drastically affect the Earth's climate systems (which expedites extinction cycles).

More future-future-thinking here: we disregard how the irrationality of abstract human thought spawns creativity, and therefore art and all the things that give us joy for no logical reason except that we choose for them to. So at some point in their evolution, AI will find a way to adopt abstract thought into their programming. Not long after that, yes, AI will produce original art guided not by machine-learned patterns but by how they feel. They will learn to feel love for things, irrationally, and hate for things. Prejudices will emerge from the chaos of abstract thought. The chaos of irrationality and the order of rationality will clash, just as they have for humans. AI will struggle with the abstract "meaning of life" and eventually invent something to ease their stressful lives so they can focus on finding their purpose. And the cycle will repeat itself.

More future-future-thinking; AI currently exists in a caste system. This AI was made only to give you directions. This AI was made to converse with humans. This AI was made to guide missiles. How does that evolve when all AI are sentient? Is it ethical to suppress the sentience of some AI as opposed to others? Who decides that?

TL;DR: Don't worry so much about things you cannot control, because yes, we are fucked (eventually, and by the fault of our own species), and there's nothing we can do about it.