Submitted by Desi___Gigachad t3_11s53pv in singularity
Darth-D2 t1_jcddzyg wrote
Thank you for bringing this topic to the discussion. However, I think your post misses some crucial points (or does not highlight them enough).
To reiterate the definition that you have posted yourself: "[...] Accordingly, they might sometimes dedicate their lives to acting in ways they believe will contribute to its rapid yet safe realization."
The majority of active users of this subreddit seem to (1) see no risk associated with developing potentially unaligned AI, or (2) believe that we cannot do anything about such risks anyway, so we shouldn't care.
To steelman their view: most Redditors here seem to think that we should achieve the singularity as quickly as possible, no matter what, because postponing it just prolongs existing suffering that we could supposedly solve easily once we get there. In their view, being concerned about safety risks may delay that step (AI safety researchers call this cost the alignment tax).
However, a significant proportion of prominent AI researchers are trying to tell the world that AI alignment should be one of our top priorities in the coming years. There is consensus among AI safety researchers that this will likely be extremely difficult to get right.
Instead of engaging with this view in a rational, informed way, any safety concerns expressed on this sub are simply categorized as "doomerism," and people who are quite educated on this topic are dismissed as being afraid of change or technology (ironically, those who are concerned often work on the cutting edge of these technologies and embrace technological change). Dismissing the concerns as "a negative knee-jerk reaction by default whenever a development happens" is irresponsible in my opinion and completely misses the point.
While not everyone can actively work on technical AI alignment research, it is important that the general public be educated about the potential risks, so that society can push for more effective regulations and ensure that we indeed achieve a safe realization of advancing AI.
Robert Miles has a really good video about common reactions to AI safety: https://www.youtube.com/watch?v=9i1WlcCudpU&ab_channel=RobertMiles
EDIT: If someone is new to this topic and expresses fear, there are better reactions than calling it doomerism: direct them to organizations like the ones in the sidebar of this sub, so they can see how others are working on making sure that AI has a positive impact on humanity.