gaudiocomplex OP t1_jd6fqz0 wrote

What we're debating, weighing market forces on one hand against dangerous results on the other (and the PR nightmares and severe public/political blowback that come with them), is how much of GPT-4's capabilities the general public is privy to today. You're saying 100 percent?

2

gaudiocomplex t1_j9nckqr wrote

Reply to comment by iNstein in Can someone fill me in? by [deleted]

The thing is, given what we know, there's no indication yet that it would see us as benign. If anything, it would see us as a credible threat to its autonomy and want to rid itself of us. That's the more likely scenario if we don't get alignment right the first time.

2

gaudiocomplex t1_j9czm28 wrote

This is a SPECTACULARLY terrible take. Maybe not #3 but the rest is so bad. 😂

OP: you're talking about AI alignment, and yes, there is currently no known way to prevent an AI from killing us all if we were to develop AGI. The AI community discusses this at length at lesswrong.com. I recommend going there instead of listening to the idiots here.

Here's a fun one.

Favorite part:

>"The concrete example I usually use here is nanotech, because there's been pretty detailed analysis of what definitely look like physically attainable lower bounds on what should be possible with nanotech, and those lower bounds are sufficient to carry the point. My lower-bound model of "how a sufficiently powerful intelligence would kill everyone, if it didn't want to not do that" is that it gets access to the Internet, emails some DNA sequences to any of the many many online firms that will take a DNA sequence in the email and ship you back proteins, and bribes/persuades some human who has no idea they're dealing with an AGI to mix proteins in a beaker, which then form a first-stage nanofactory which can build the actual nanomachinery. (Back when I was first deploying this visualization, the wise-sounding critics said "Ah, but how do you know even a superintelligence could solve the protein folding problem, if it didn't already have planet-sized supercomputers?" but one hears less of this after the advent of AlphaFold 2, for some odd reason.) The nanomachinery builds diamondoid bacteria, that replicate with solar power and atmospheric CHON, maybe aggregate into some miniature rockets or jets so they can ride the jetstream to spread across the Earth's atmosphere, get into human bloodstreams and hide, strike on a timer. Losing a conflict with a high-powered cognitive system looks at least as deadly as "everybody on the face of the Earth suddenly falls over dead within the same second".

−3

gaudiocomplex t1_j63gxkg wrote

I think yes, you can forget about a reliable source of income. The downward economic pressure this thing will exert on the writing and visual art communities will make jobs scarce and pay shit. But there will still be ways to reapply these skills indirectly. Physical artists might be the last bastion of hope, given how far robotics lags behind the language models. Ultimately, though, no job is safe.

7

gaudiocomplex t1_j63fyf1 wrote

If you're interested in the alignment debate, it gets far, far more nuanced than this... And perfect human cooperation is a pipe dream. There will always be somebody with very little to lose and a lot to gain who is willing to take the gamble.

Lesswrong.com has a lot on this, including the odd/interesting notion that whoever attains AGI first should find ways to prevent everyone else from attaining it.

48