Darustc4 OP t1_je9ntit wrote
Reply to comment by huskysoul in Pausing AI Developments Isn't Enough. We Need to Shut it All Down by Eliezer Yudkowsky by Darustc4
And how do you propose one does that? Making SOTA LLMs (or AGIs, for that matter) requires an absolute fuckload of money, and only top elites and governments have access to that kind of money and influence.
Darustc4 OP t1_je9m2ce wrote
Reply to comment by SkyeandJett in Pausing AI Developments Isn't Enough. We Need to Shut it All Down by Eliezer Yudkowsky by Darustc4
I don't consider myself part of the EY cult, but I must admit that AI progress is getting out of hand and we really do NOT have a plan. Creating a super-intelligent entity with fingers in every pie in the world, while humans have absolutely no control over it, is straight-up crazy to me. It could end up working out somehow, but it could also very well devolve into the complete destruction of society.
Submitted by Darustc4 t3_126lncd in singularity
Darustc4 t1_ja0lp3d wrote
Reply to do you know what the "singularity" is? by innovate_rye
Results will not reflect reality, since many will vote 'yes' without even knowing the real meaning (see all the posts asking what comes after the singularity).
Darustc4 t1_j9qp8wf wrote
Reply to And Yet It Understands by calbhollo
"There is infinite demand for deeply credentialed experts who will tell you that everything is fine, that machines can’t think, that humans are and always will be at the apex, people so commited to human chauvinism they will soon start denying their own sentience because their brains are made of flesh and not Chomsky production rules. All that’s left of the denialist view is pride and vanity. And vanity will bury us."
Holy shit.
Darustc4 t1_j9p21rt wrote
Reply to comment by hapliniste in If only you knew how bad things really are by Yuli-Ban
IMO it is the best thing to do. Promote fear of AI so that people realize it is dangerous, and we buy some time to get alignment work in.
I am an AI safety researcher, and let me tell you, it's not looking great: AI is getting stupidly powerful incredibly quickly, and we are nowhere close to getting these systems to be safe/aligned.
Darustc4 t1_j9owvhx wrote
Reply to comment by GenoHuman in If only you knew how bad things really are by Yuli-Ban
Why? Why are you sure he is wrong/cringe/misguided?
Darustc4 t1_j8rtck2 wrote
Reply to comment by MrSheevPalpatine in Emerging Behaviour by SirDidymus
That's fair, but then I don't see the point of saying AIs make stuff up, when humans are not that different in that regard. It seems a bit of a moot point.
Darustc4 t1_j8rpldc wrote
Reply to comment by AwesomeDragon97 in Emerging Behaviour by SirDidymus
Oh yeah, everything it says is *so* made up that people find it hard to tell apart text written by an AI from text written by a human. I think you're either giving too little credit to what AI does, or way too much credit to human capabilities.
When a human expert fucks up, gets cocky, tries to alter sources, and falls prey to confirmation bias, do you also say: yeah, this is simply one of the many flaws of humans, everything they say is made up?
Darustc4 t1_j8r62ho wrote
Reply to comment by chrisjinna in Bingchat is a sign we are losing control early by Dawnof_thefaithful
To me, this reads like: "The only real kind of understanding is human-like understanding; token prediction doesn't count because we believe humans don't do that."
If it is effective, why do you care how the brain of an AI operates? Will you still be claiming they don't understand in the "real" way when they start causing real harm to society and surpassing us in every field?
Darustc4 t1_j8f9oi7 wrote
Reply to comment by FusionRocketsPlease in Altman vs. Yudkowsky outlook by kdun19ham
Optimality is the tiger, and agents are its teeth.
Darustc4 OP t1_jedw3g4 wrote
Reply to comment by Alternative_Fig3039 in Pausing AI Developments Isn't Enough. We Need to Shut it All Down by Eliezer Yudkowsky by Darustc4
AI does not hate you, nor does it like you, but you're made out of atoms it can use for something else. Given an AI that maximizes some metric (dumb example: an AI that wants to make the most paperclips in existence), it will predictably develop various convergent instrumental goals, such as: self-preservation that won't let you turn it off, a drive to improve itself so it can make even more paperclips, ambitious resource acquisition by any and all means to make even more paperclips, etc. (see instrumental convergence for more details).
As for how it could kill us if it wanted to, or if we got in the way, or if we turned out to be more useful dead than alive: hacking nuclear launch facilities, political manipulation, infrastructure sabotage, assassination of key figures, protein folding to design a deadly virus or nanomachine, etc.
Killing humanity is not hard for an ASI. But do not panic; just spread the word that building strong AI while unprepared might be unwise, and be ready for pushback from blind optimists who believe all of these problems will magically disappear at some point along the way to ASI.