VirtualHat t1_j9rsysw wrote
I work in AI research, and I see many of the points EY makes here in section A as valid reasons for concern. They are not 'valid' in the sense that they must be true, but valid in that they are plausible.
For example, he says we can't just build a very weak system. Two papers led me to believe this could be the case: *All Else Being Equal Be Empowered*, which shows that any agent acting to achieve a goal under uncertainty will need (all else being equal) to maximize its control over the system, and the zero-shot learners paper, which shows that (very large) models trained on one task seem also to learn other tasks (or at least learn how to learn them). Both of these papers make me question the assumption that a model trained on one 'weak' task won't also learn more general capabilities.
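To make the empowerment point concrete, here's a toy sketch (my own illustration, not from the paper): for deterministic dynamics, n-step empowerment reduces to the log of the number of distinct states reachable in n steps, so an agent that maximizes it drifts toward states that keep the most options open.

```python
# Toy sketch (my own, not from the paper): in a deterministic 5x5 gridworld,
# n-step empowerment is log2 of the number of distinct states reachable in
# n moves. Central states score higher than corners, so an empowerment-
# maximizing agent gravitates toward positions with the most options.
from itertools import product
from math import log2

SIZE = 5
MOVES = {"up": (0, 1), "down": (0, -1), "left": (-1, 0), "right": (1, 0), "stay": (0, 0)}

def step(state, move):
    x, y = state
    dx, dy = MOVES[move]
    nx, ny = x + dx, y + dy
    # bump into the wall: stay put if the move would leave the grid
    return (nx, ny) if 0 <= nx < SIZE and 0 <= ny < SIZE else (x, y)

def empowerment(state, horizon=2):
    """log2 of the number of distinct states reachable within `horizon` moves."""
    reachable = set()
    for seq in product(MOVES, repeat=horizon):
        s = state
        for m in seq:
            s = step(s, m)
        reachable.add(s)
    return log2(len(reachable))

print(empowerment((0, 0)))  # corner: ~2.6 bits
print(empowerment((2, 2)))  # centre: ~3.7 bits
```

The worry, as I read the paper, is that 'keep your options open' falls out of goal-directed behaviour under uncertainty even when control was never an explicit objective.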
Where I think I disagree is on the likely scale of the consequences. "We're all going to die" is an unlikely outcome. Most likely the upheaval caused by AGI will be similar in scale to previous upheavals, and I've yet to see a strong argument that the bad outcomes will be unrecoverable.
Jinoc t1_j9ub6f1 wrote
What makes an extinction-level event unlikely in your view if you do believe advanced models will act so as to maximise control? Is it that you don’t believe in the capabilities of such a model?
VirtualHat t1_j9vkpgd wrote
That's a good question. To be clear, I believe there is a risk of an extinction-level event, just that it's unlikely. My thinking goes like this.
- Extinction-level events must be rare, as one has not occurred in a very long time.
- Therefore the 'base' risk is very low, and I need evidence to convince me otherwise.
- I've yet to see strong evidence that AI will lead to an extinction-level event.
I think the most likely outcome is that there will be serious negative implications of AI (along with some great ones) but that they will be recoverable.
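To put a very rough number on the base-rate point (purely illustrative, and it obviously assumes the future looks like the past): by Laplace's rule of succession, an event that hasn't occurred in N observed periods gets an estimated probability of about 1/(N+2) for the next period.

```python
# Rough illustration of the base-rate intuition (numbers purely illustrative).
# Laplace's rule of succession: after `trials` observations with `successes`
# occurrences, estimate the next-trial probability as (successes + 1) / (trials + 2).
def rule_of_succession(successes: int, trials: int) -> float:
    return (successes + 1) / (trials + 2)

# Treating each of the last 10,000 years as a trial with no extinction event:
print(rule_of_succession(0, 10_000))  # ~1e-4 per year: a low prior, but not zero
```

Of course this kind of outside-view estimate breaks down if AGI is genuinely unprecedented, which is where the real disagreement lies.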
I also think some people overestimate how 'super' a superintelligence can be and how unstoppable an advanced AI would be. In a game like chess or Go, a superior player can win essentially 100% of the time. But in a game with chance and imperfect information, a relatively weak player can occasionally beat a much stronger player. The world we live in is one of chance and imperfect information, which limits any agent's control over outcomes. This makes EY's 'AI didn't stop at human-level for Go' analogy less relevant.
Scyther99 t1_j9zomj7 wrote
The first point is like saying phishing was nonexistent before we invented computers and the internet, so we don't have to worry about it once we invent them. There has been no AGI. There have been no comparable events. Basing the estimate on the fact that an asteroid killing all life on Earth is unlikely does not make sense.
Smallpaul t1_ja6orxv wrote
> occasionally beat a much stronger player
We might occasionally win a battle against SkyNet? I actually don't understand how this is comforting at all.
> The world we live in is one of chance and imperfect information, which limits any agent's control over the outcomes.
I might win a single game against a Poker World Champion, but if we play every day for a week, the chances of me coming out ahead are infinitesimal. I still don't see this as very comforting.
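A quick back-of-the-envelope check (my numbers, purely illustrative): give the weak player a 10% chance per game and the luck argument mostly evaporates over even a short series.

```python
# Back-of-the-envelope (assumed numbers, purely illustrative): a weak player
# with a 10% single-game win rate will "occasionally beat" the champion,
# but almost never comes out ahead over a 7-game week.
from math import comb

p, n = 0.10, 7  # assumed single-game win probability, games played in a week

win_at_least_one = 1 - (1 - p) ** n
win_majority = sum(comb(n, k) * p**k * (1 - p) ** (n - k) for k in range(4, n + 1))

print(f"win at least one game: {win_at_least_one:.1%}")  # ~52%
print(f"win the majority (4+): {win_majority:.3%}")      # ~0.27%
```

Chance and imperfect information add variance to any single game, but repetition washes the variance out, which is exactly why it isn't comforting.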
[deleted] t1_j9s5hf7 wrote
[deleted]
ErinBLAMovich t1_j9snb17 wrote
Maybe when an actual expert tells you you're overreacting, you should listen.
Are you seriously arguing that the modern world is somehow corrupted by some magical unified "postmodern philosophy"? We live in the most peaceful time in recorded history. Read "Factfulness" for exact figures. And while you're at it, actually read "Black Swan" instead of throwing that term around, because you clearly need a lesson on measuring probability.
If you think AI will be destructive, outline some plausible and SPECIFIC scenarios for how this could happen, instead of vague allusions to philosophy with no proof of causality. We could then debate the likelihood of each scenario.
[deleted] t1_j9tl7kx wrote
[deleted]
perspectiveiskey t1_j9s8578 wrote
> It's amazing to me how easily the scale of the threat is dismissed by you after you acknowledge the concerns.
I second this.
Also, the effects of misaligned AI can be entirely mediated by so-called meat-space: an AI can sow astonishing havoc simply by damaging our ability to know what is true.
In fact, I find this to be the biggest danger of all. We already have a scientific publishing "problem" in that we have arrived at an era of diminishing returns and extreme specialization. I simply cannot imagine the real-world damage that would be inflicted when (not if) someone starts pumping out "very legitimate sounding but factually false papers on vaccine side-effects".
I just watched this today, where he talks about using automated code generation for code verification and tests. The man is brilliant and the field is brilliant, but one thing is certain: the complexity involved far exceeds any individual human's ability to fully comprehend it.
Now combine that with this and you have a true recipe for disaster.
VioletCrow t1_j9smth5 wrote
> I simply cannot imagine the real world damage that would be inflicted when (not if) someone starts pumping out "very legitimate sounding but factually false papers on vaccine side-effects".
I mean, just look at the current anti-vaccine movement. You just described the original Andrew Wakefield paper about vaccines causing autism. We don't need AI for this to happen, just a very credulous and gullible press.
governingsalmon t1_j9svhv8 wrote
I agree that we don't necessarily need AI for nefarious actors to spread scientific misinformation, but I do think AI introduces another tool or weapon that could be used by the Andrew Wakefields of the future in a way that might pose unique dangers to public health and public trust in scientific institutions.
I’m not sure whether it was malevolence or incompetence that has mostly contributed to vaccine misinformation, but if one intentionally sought to produce fake but convincing scientific-seeming work, wouldn’t something like a generative language model allow them to do so at a massively higher scale with little knowledge of a specific field?
I’ve been wondering what would happen if someone flooded a set of journals with hundreds of AI-written manuscripts without any real underlying data. One could even have all the results support a given narrative. Journals might develop intelligent ways of counteracting this but it might pose a unique problem in the future.
perspectiveiskey t1_j9u1r9n wrote
AI reduces the "proof of work" cost of an Andrew Wakefield paper. This is significant.
There's a reason people don't dedicate long hours to writing completely bogus scientific papers which will result in literally no personal gain: it's because they want to live their lives and do things like have a BBQ on a nice summer day.
The work involved in sounding credible and legitimate is one of the few barriers holding the edifice of what we call Science standing. The other barrier is peer review...
Both of these barriers are under serious threat from the ease of generation. AI is our infinite-monkeys-on-infinite-typewriters moment.
This is to say nothing of much more insidious and clever intrusions into our thought institutions.
terath t1_j9sd368 wrote
This is already happening, but the problem is humans, not AI. Even without AI we are descending into an era of misinformation.
gt33m t1_j9ui6id wrote
This is eerily similar to the “guns don’t kill people” argument.
It should be undeniable that AI provides a next-generation tool that lowers the cost of disruption for nefarious actors. That disruption can come in various forms: disinformation, cybercrime, fraud, etc.
terath t1_j9x6v7k wrote
My point is that you don't need AI to hire a hundred people to manually spread propaganda; that's been going on for a few years now. AI makes it cheaper, yes, but banning AI or restricting it in no way fixes the problem.
People are very enamoured with AI but seem to ignore the many existing technological tools already being used to disrupt things today.
gt33m t1_j9xapzz wrote
Like I said, this is similar to the guns argument. Banning guns does not stop people from killing each other, but easy access to guns amplifies the problem.
AI as a tool of automation is a force multiplier that is going to be indistinguishable from human action.
terath t1_j9xdc0i wrote
AI has a great many positive uses. Guns not so much. It’s not a good comparison. Nuclear technology might be better, and I’m not for banning nuclear either.
gt33m t1_j9xfxid wrote
I'm not certain where banning AI came into the discussion; it's just not going to happen, and I don't see anyone proposing it. However, it shouldn't be the other extreme either, where everyone is running a nuclear plant in their backyard.
To draw parallels from your example, AI needs a lot of regulation, industry standards and careful handling. The current technology is still immature but if the right structures are not put in place now, it will be too late to put the genie back in the bottle later.
perspectiveiskey t1_j9u2auz wrote
I don't want to wax philosophical, but dying is the realm of humans. Death is the ultimate "danger of AI", and it will always require humans.
AI can't be dangerous on Venus.
terath t1_j9u4o7b wrote
If we're getting philosophical: in a weird way, if we ever do manage to build human-like AI (and I personally don't believe we're at all close yet), that AI may well be our legacy. Long after we've all died, that AI could potentially still survive in space or in environments we can't.
Even if we somehow survive for millennia, it will always be near infeasible for us to travel the stars. But it would be pretty easy for an AI that can just put itself in sleep mode for the time it takes to move between systems.
If such a thing happens, I just hope we don't truly build them in our image. The universe doesn't need such an aggressive and illogical species spreading. It deserves something far better.
perspectiveiskey t1_j9u6u27 wrote
Let me flip that on its head for you: what makes you think that the Human-like AI is something you will want to be your representative?
What if it's a perfect match for Jared Kushner? Do you want Jared Kushner representing us on Alpha Centauri?
Generally, the whole AI-is-fine/is-not-fine debate always comes down to these weird false dichotomies or dilemmas. And IMO, they are always rooted in the false premise that what makes humans noble - what gives them their humanity - is their intelligence.
Two points: a) AI need not be human-like to have devastating lethality, and b) an AGI is almost certainly not going to be "like you" in the way that most humans aren't like you.
AI's lethality comes from its cheapness and speed of deployment. Whereas a Jared Kushner (or insert your favorite person to dislike) takes 20 years to create from scratch, an AI takes a few hours.
WarAndGeese t1_j9sj481 wrote
I agree about the callousness, and that's without artificial intelligence too. Global power balances have shifted at times of rapid technological development, and that development created control vacuums and conflicts that were resolved by war. If we learn from history we can plan for it and prevent it, but the same kinds of fundamental underlying shifts are happening now. We can say that international financial incentives act to prevent worldwide conflict, but that only goes so far. Everything I'm saying is on the trajectory without neural networks as well; they are just one of many rapid shifts in political economy and productive efficiency.
In the same way that people geared up at the start of the Russian invasion of Ukraine to try to prevent nuclear war, we should all be vigilant and try to globally demilitarize and democratize to prevent any war. The global nuclear threat isn't even over, and it's regressing.
HINDBRAIN t1_j9sthbq wrote
"Your discarded toenail could turn into Keratinator, Devourer of Worlds, and end all life in the galaxy. We need agencies and funding to regulate toenails."
"That's stupid, and very unlikely."
"You are dismissing the scale of the threat!"
soricellia t1_j9tn2xi wrote
I don't even think this is a strawman, mate; you've mischaracterized me so badly it's basically ad hominem.
HINDBRAIN t1_j9tnkfa wrote
You're basically a doomsday cultist, just hiding it behind sci-fi language. "The scale of the threat" is irrelevant if the probability of it happening is infinitesimal.
soricellia t1_j9tomaw wrote
Well, I think that entirely depends on what the threat is, mate. The probability of AGI rising up terminator-style I agree seems pretty small. The probability of disaster from the human inability to distinguish true from false and fact from fiction being exacerbated by AI? That seems much higher. Also, neither of us has a formula for this risk, so I think saying the probability of an event is infinitesimal is intellectual fraud.