Stippes t1_j4fmhfg wrote
Modernism in the philosophical sense has proven inaccurate.
We, as humans, are the owners of neither our own developments nor our own decisions.
Transhumanism, in my eyes, is a movement to leave these beliefs behind.
Stippes t1_j46lmrr wrote
Reply to Should AI receive a salary by flaming_dortos
AI shouldn't receive a salary; it should pay an automation tax.
Stippes t1_j1paxbg wrote
Behavioral "control"
Definitely not the next step, but down the line AI will be integrated into nudges and other behavioral-economics interventions. This will pose concrete problems for society at a larger scale than the Cambridge Analytica scandal of 2016.
Stippes t1_ittn4nt wrote
Reply to With all the AI breakthroughs and IT advancements the past year, how do people react these days when you try to discuss the nearing automation and AGI revolution? by AdditionalPizza
People have busy lives. So they engage in strategic ignorance, putting off anything that requires a change in behavior until they have no choice.
The same holds for other important issues, such as climate change or potential economic crashes.
It will remain our job to occasionally and kindly remind them that technology will claim its space, whether they are ready or not.
Stippes t1_it32tc1 wrote
Reply to Just for fun: which fictional world would you spend most of your Full Dive VR time in? by exioce
Yeah, same here. I'd like to define some basic premises and then see how they'd play out. It would be so utterly fascinating.
Stippes t1_ira5ehc wrote
Reply to Artificial General Intelligence is not a good thing (For us), change my mind by OneRedditAccount2000
I don't think it has to end in open conflict. There might be a Nash equilibrium outside of it, maybe something akin to MAD (mutually assured destruction). If an AI is about to go rogue in order to protect itself, it has to weigh the possibility that it will be destroyed in the process, so preventing conflict might maximize its survival chances. Also, what if a solar storm hits Earth during a vulnerable period? It might be safer to rely on cooperation with organic life forms. Since an AI doesn't have agency in the sense that humans do, it might see benefits in a resilient system that combines organic and synthetic intelligence.
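Just to make the idea concrete (the strategies and payoff numbers below are invented for illustration, not taken from anywhere), here's a minimal sketch that enumerates the pure-strategy Nash equilibria of a MAD-style 2x2 game. A peaceful (Cooperate, Deter) equilibrium shows up alongside the conflict one:

```python
# Toy MAD-style game: an AI chooses Rogue or Cooperate, humanity chooses
# whether to maintain a credible deterrent (Deter) or not (NoDeter).
# All payoff numbers are made up for the example.

AI_STRATEGIES = ["Rogue", "Cooperate"]
HUMAN_STRATEGIES = ["Deter", "NoDeter"]

# payoffs[(ai, human)] = (ai_payoff, human_payoff)
payoffs = {
    ("Rogue", "Deter"):       (-10, -10),  # mutual destruction
    ("Rogue", "NoDeter"):     (5, -5),     # AI takes over unopposed
    ("Cooperate", "Deter"):   (3, 3),      # peaceful coexistence, deterrent in place
    ("Cooperate", "NoDeter"): (3, 3),      # peaceful coexistence, no deterrent
}

def is_nash(ai, human):
    """A cell is a (weak) Nash equilibrium if neither player can gain
    by unilaterally switching strategies."""
    ai_pay, human_pay = payoffs[(ai, human)]
    ai_ok = all(payoffs[(a, human)][0] <= ai_pay for a in AI_STRATEGIES)
    human_ok = all(payoffs[(ai, h)][1] <= human_pay for h in HUMAN_STRATEGIES)
    return ai_ok and human_ok

for ai in AI_STRATEGIES:
    for human in HUMAN_STRATEGIES:
        if is_nash(ai, human):
            print(f"Nash equilibrium: ({ai}, {human}) -> {payoffs[(ai, human)]}")
# Prints (Rogue, NoDeter) and (Cooperate, Deter): the conflict outcome
# and the peaceful one both exist as equilibria.
```

The peaceful equilibrium only survives here because deterrence is credible and costless in this toy model; remove the Deter option and going Rogue becomes the AI's best response.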
I think an implicit assumption of yours is that humans and AI must be in competition. While that might hold for the immediate future, the long-term development will more likely be one of assimilation.
Stippes t1_j90my8h wrote
Reply to I am a young teenager, and I have just learned about the concept of reaching singularity. What is the point of living anymore when this happens. by FriendlyDetective319
That's pretty much the same discussion philosophy had in the early 20th century.
Since then, we've moved from nihilism (your point of view) to existentialism (create your own meaning for staying alive) to absurdism (there isn't any point, but we can enjoy life despite that). (All of this is very simplified.)
Seek solace in the answers that were given before.