Submitted by kdun19ham t3_111jahr in singularity

Sam Altman recently credited Eliezer Yudkowsky for his contributions to the AI community, yet Yudkowsky regularly expresses that we have failed at alignment and that humans will be dead within 10 years.

Altman has a much rosier picture of AI creating massive wealth and a utopia-like world for future generations.

Do they both have sound arguments? Has Altman ever commented on Yudkowsky’s pessimism? Is one viewed as more credible in the AI community?

Asking as a member of the general public who, terrifyingly, happened upon Yudkowsky's doom articles/posts.

38

Comments


ThirdFloorNorth t1_j8ffu9s wrote

Eliezer Yudkowsky is a prominent transhumanist with whom I disagree on pretty much every single opinion he has ever espoused, yet somehow we are both still transhumanists. His views on, and response to, Roko's Basilisk in particular are fucking embarrassing.

So I'm gonna go with Altman.

In the end, it won't matter either way. Either Altman is right, and we will get a benevolent AI, or Yudkowsky is right, and we're capital-F Fucked.

Either way, AI is coming. All we can do is wait and see.

28

TemetN t1_j8gmwj1 wrote

To be fair, Yudkowsky's argument on Pascal's mugging was actually interesting (particularly vis-à-vis his own writings, funnily enough), but yes, I very much consider him someone whose writings you have to sort through due to his focus on foom and... well, pessimism is an understatement, but I hesitate to call him a doomer since most of them don't even have coherent arguments.

Altman is still something of a hypeman though, and it is worth noting that both of them have argued in favor of very aggressive AI timelines, which has generally been closer to how things have actually played out than the preponderance of people expecting ridiculously slow progress.

10

gay_manta_ray t1_j8h0ys4 wrote

personally, i really dislike any serious risk consideration based on thought experiments like pascal's mugging when it comes to superintelligent ai. it has always seemed to me like there is something very wrong with assuming both superintelligence and some kind of hyper-rationality that goes far outside the bounds of pragmatism when it comes to maximizing utility. assuming they're superintelligent but also somehow naive enough to have no upper bound on any sort of utility consideration is just stupid. i don't know what yudkowsky's argument was though, if you could link it i'd like to give it a read.

8

TemetN t1_j8h21sz wrote

Reasonable. Honestly, I found the premise more interesting than the application, but it sounds like you've at least read one of the discussions about it. If not, here's the original (you can get to some of the others through the topic links up top).

Pascal's Mugging

4

bildramer t1_j8htdli wrote

It's not about naïvete. It's about the orthogonality thesis. You can combine any utility function with any level of intelligence. You can be really smart but care only about something humans would consider "dumb". There's no fundamental obstacle there.

1
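
A minimal sketch of the orthogonality point in code (the goals, strings, and step counts below are invented purely for illustration, not taken from the thread): the same generic search routine is equally competent no matter which objective it is handed, so optimization power by itself says nothing about what is being optimized.

```python
import random

def hill_climb(objective, state, steps=3000):
    """Generic local search: the optimizer is identical regardless of the goal."""
    best = objective(state)
    for _ in range(steps):
        candidate = state[:]
        i = random.randrange(len(candidate))
        candidate[i] = random.choice("abcdefghijklmnopqrstuvwxyz ")
        score = objective(candidate)
        if score >= best:  # keep any non-worsening change
            state, best = candidate, score
    return "".join(state), best

# Goal 1: something a human might call sensible.
target = "feed the hungry"
def humane_goal(chars):
    return sum(a == b for a, b in zip(chars, target))

# Goal 2: something a human would call "dumb" (maximize 'p', for paperclips).
def paperclip_goal(chars):
    return chars.count("p")

start = list("x" * len(target))
print(hill_climb(humane_goal, list(start)))     # drifts toward the sensible message
print(hill_climb(paperclip_goal, list(start)))  # drifts toward "ppppppppppppppp"
```

Same code, same competence, completely different goals; nothing in the search procedure cares which one it was given.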

mouserat_hat t1_j8fwu6k wrote

Regarding Roko’s Basilisk: was he attacking non-wall creatures?

4

jamesj t1_j8fi5il wrote

Yudkowsky has a lot more detailed text to review with specific opinions, so he's easier to evaluate. I tend toward optimism (I'm also a silicon valley tech CEO) and I think Yudkowsky is a bit extreme, but it isn't at all clear to me that he's entirely wrong. I think we are on a dangerous path and I hope the few teams at the forefront of AI research can navigate it on our behalf.

22

SoylentRox t1_j8fyxct wrote

Have you considered that delaying AGI also has an immense cost?

Each year, the world loses 0.84% of everyone alive.

So if delaying AGI by 1 year reduces the chance of humanity dying by 0.5%, for example, it's not worth the cost. An extra 0.84% of people have to die while more AGI safety work is done, people who wouldn't have died if advances in medicine and nanotechnology had been available 1 year sooner, and the expected value of avoiding an extra 0.5% chance of humanity being wiped out is not enough gain to offset that.

(since "humanity wiped out" is what happens whenever any human dies, from their perspective)

Note this is true even if it takes 100 years from AGI -> (aging meds, nanotechnology) because it's still 1 year sooner.

15
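
The arithmetic behind this trade-off can be written out explicitly. A rough back-of-the-envelope sketch, keeping the commenter's framing that only currently living people count; the 0.84% and 0.5% figures are from the comment above, while the world-population number is an illustrative assumption:

```python
# Back-of-the-envelope version of the trade-off argued above.
# Assumption (the commenter's framing): only people alive today count,
# and delaying AGI by one year costs one year's worth of natural deaths.

population = 8.0e9                  # illustrative current world population
annual_death_rate = 0.0084          # ~0.84% of everyone alive dies each year (commenter's figure)
extinction_risk_reduction = 0.005   # 0.5% lower chance of extinction if we delay a year

deaths_from_delay = population * annual_death_rate
expected_lives_saved = population * extinction_risk_reduction

print(f"Expected deaths from a 1-year delay: {deaths_from_delay:,.0f}")
print(f"Expected lives saved by 0.5% less extinction risk: {expected_lives_saved:,.0f}")
# Under these numbers the delay costs ~67M expected deaths against ~40M expected
# lives saved, which is the commenter's point. The conclusion flips as soon as the
# risk reduction exceeds the annual death rate, or if not-yet-born people count.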

BigZaddyZ3 t1_j8gey2c wrote

This doesn’t really make sense to me. If delaying AGI by a year reduces the chance of humanity in its entirety dying out by even 0.01%, it’d be worth that time and more. 0.84% is practically the cost of nothing if it means keeping the entire human race from extinction. Your comment is illogical unless you somehow believe that every person alive today is supposed to live to see AGI one day. That was never gonna happen anyways. And even from a humanitarian point of view, what you’re saying doesn’t really add up. Because if rushing AI results in 100% (or even 50%) of humanity being wiped out, the extra 0.84% of lives you were trying to save mean nothing at that point anyways.

12

Frumpagumpus t1_j8gqw8n wrote

> If delaying AGI by a year reduces the chance of humanity in it’s entirety dying out by even 0.01%, it’d be worth that time and more

my take: delaying agi by a year increases the chance humanity will wipe itself out before AGI can happen, and AGI's potential value greatly exceeds that of humanity

7

SoylentRox t1_j8gf381 wrote

It makes perfect sense; you're just valuing outcomes you may not live to witness.

2

BigZaddyZ3 t1_j8ggt7j wrote

No, it truly doesn’t… you’re basically saying that we should risk 100% of humanity being wiped out in order to possibly save the 0.84% of humans who are gonna die of completely natural causes...

2

SoylentRox t1_j8ghast wrote

I am saying it's an acceptable risk to take a 0.5 percent chance of being wiped out if it lets us completely eliminate natural causes deaths for humans 1 year earlier.

Which is going to happen. Someone will cure aging (assuming humans are still alive and still able to accomplish things). But doing it probably requires beyond-human ability.

2

BigZaddyZ3 t1_j8giqee wrote

But again, if a misaligned AGI wipes out humanity as a whole, curing aging is then rendered irrelevant… So it’s actually not worth the risk, logically. (And aging is far from the only cause of death, btw.)

3

SoylentRox t1_j8gj6go wrote

It's the cause of 90 percent of deaths. But obviously I implicitly meant treatment for all non-instant death, and rapid development of cortical stacks or similar mind-copying technology to at least prevent friends and loved ones from missing those killed instantly.

And again, I said relative risk. I would be willing to accept an increase in the risk of all of humanity dying of up to a 0.80 percent chance if it meant AGI 1 year sooner. 10 years sooner? 8 percent extra risk is acceptable, and so on.

Note I consider both humans dying "natural" and a superior intelligence killing everyone "natural" so all that matters is the risk.

1

BigZaddyZ3 t1_j8gjv3o wrote

What if AGI isn’t the panacea for human life you seem to assume it is? What if AGI actually marks the end of the human experiment? You seem to be under the assumption that AGI automatically = utopia for humanity. It doesn’t. I mean yeah, it could, but there’s just as much chance that it could create a dystopia as well. If rushing is the thing that leads us to a dystopia instead, will it still be worth it?

5

SoylentRox t1_j8gk6d1 wrote

How dystopic? An unfair world but everyone gets universal health care and food and so on? But it's not super great, it's like the videogames with lots of habitation pods and nutrient paste? Or S risk?

Note I don't "think it is". I know there a range of good and bad outcomes, and "we all die" or "we live but are tortured" fit in that area of "bad outcomes". I am just explaining the percentage of bad outcomes that would be acceptable.

Delaying things until the bad outcome risk is 0 is also a bad outcome.

1

BigZaddyZ3 t1_j8gl7j2 wrote

> Delaying things until the bad outcome risk is 0 is also a bad outcome.

Lmao what?.. That isn’t remotely true actually. That’s basically like saying “double-checking to make sure things don’t go wrong will make things go wrong”. Uh, I’m not sure I see the logic there. But it’s clear that you aren’t gonna change your mind on this so, whatever. Agree to disagree.

3

SoylentRox t1_j8gx69g wrote

Right. I know I am correct and simply don't think you have a valid point of view.

Anyways it doesn't matter. Neither of us control this. What is REALLY going to happen is an accelerating race, where AGI gets built basically the first moment it's possible at all. And this may turn into outright warfare. Easiest way to deal with hostile AI is to build your own controllable AI and bomb it.

0

BigZaddyZ3 t1_j8gy1hz wrote

> Right. I know I am correct and simply don't think you have a valid point of view.

Lol nice try pal.. but I’m afraid you’re mistaken.

> Anyways it doesn't matter. Neither of us control this. What is REALLY going to happen is an accelerating race, where AGI gets built basically the first moment it's possible at all. And this may turn into outright warfare. Easiest way to deal with hostile AI is to build your own controllable AI and bomb it.

Finally, something we can agree on at least.

2

SoylentRox t1_j8gydo4 wrote

>Finally, something we can agree on at least.

Yeah. It's quite grim actually if you think about what even just sorta useful AGI would allow you to do. By "sorta useful" I mean "good enough to automate jobs that ordinary people do, but not everything". So mining and trucking and manufacturing and so on.

It would be revolutionary. For warfare. Because the reason you can't win a world war today is that you can't dig enough bunkers to house your entire population in separate shelters (limiting the damage any one nuke can do), and you can't build enough anti-missile systems to stop most of a nuclear bombardment from getting through.

And then, well, you fight the whole world. And win. "Merely" AI able to do ordinary people's tasks gives you essentially exponential amounts of production capacity. You're limited only by how much land you have, for an entire country covered in factories.

Note by "you" I don't mean necessarily the USA. With weapons like this, anyone can be a winner.

2

jamesj t1_j8fzlwr wrote

I don't think it is possible to delay it. If it is dangerous, I can mostly just hope for the best.

6

Baturinsky t1_j8ivn54 wrote

Is 1 person dying more important than 1000...many zeroes..000 persons not being born because humanity is completely destroyed and future generations, from now until the end of space and time, will never be born?

1

SoylentRox t1_j8j3aql wrote

The argument is there is no difference from the perspective of that person.

This actually means if old people have the most power and money (and they do), they will call for the fastest AGI development that is possible. The risks don't matter to them, they will die for sure in a few years otherwise.

1

3_Thumbs_Up t1_j9caxcf wrote

You're not counting the full cost of humanity dying. Humanity dying also means that all the future humans will never have a chance to exist. We're potentially talking about the loss of trillions+ of lives.

1

SoylentRox t1_j9cbacy wrote

From your perspective, and every mortal's, that has no cost.

1

3_Thumbs_Up t1_j9ccco0 wrote

Once again not true.

From my perspective it has a cost, because I value other things than my own survival. As do most humans who are not complete sociopaths.

1

throwaway764586893 t1_j8hr5p4 wrote

And it will be PAINFUL deaths.

0

SoylentRox t1_j8hrdur wrote

Which ones? In an AGI takeover, the AI has no need to make it painful. Just shoot everyone in the head (through walls, from long range) without warning, or whatever is most efficient. You mean from aging and cancer, right?

2

throwaway764586893 t1_j8j16er wrote

The way people actually die is vastly worse than can be acknowledged

1

SoylentRox t1_j8j2432 wrote

Depends on luck but sure. I agree and if it's slowly forgetting everything in a nursing home vs getting to see an AGI takeover start only to be painlessly shot, I would choose the latter.

2

lacergunn t1_j8g5iz2 wrote

I'll paraphrase the webtoon "Seed"

Making an AI that aligns with humanity's ideals is impossible, both because of the sheer scale of the task and because human ideals are highly fluid. Luckily, you don't need to. Making an AGI that aligns with the desires of a single handler, or a small group of handlers, is far easier.

However, this outcome ends with a small, probably ultra-wealthy group of people having an unstoppable cyber-demigod in their arsenal.

20

ChurchOfTheHolyGays t1_j8i24de wrote

Does anyone really ever know what they want for sure? I'd guess even the rich fucks with their think tanks must commonly doubt whether their goals are really what they want. Their AIs can just as easily suffer from alignment to goals which have not been thought through properly.

Everyone is thinking about alignment as if "alignment to what?" should be self-evident (for society at large or for individual groups, doesn't matter). Are we sure about what we want the AI to align with? Are the elites sure about what they want the AIs to align with?

1

bildramer t1_j8htmki wrote

I don't think that's far easier. Those are basically equally impossible, and even if we got that second one, it's much better than not getting it.

0

DukkyDrake t1_j8fu9jc wrote

You would see the stark difference if you understood what alignment really refers to.

Altman is a VC; he is in the business of building businesses. Altman is simply hoping for the best, expecting they'll fix the dangers along the way. This is what you need to do to make money.

Yudkowsky only cares about fixing or avoiding the dangers; he doesn't make allowances for the best interests of the balance sheet. He likely believes the failure modes in advanced AI aren't fixable.

Who here would stop trying to develop AGI and gain trillions of dollars just because there is a chance an AGI agent would exterminate the human race? The core value of most cultures is essentially "get rich or die trying".

19

vivehelpme t1_j8i04sx wrote

What alignment really seems to refer to is a petrifying fear of the unknown dialed up to 111 and projected onto anything that a marketing department can label AI, resulting in concerns of mythological proportions being liberally sprinkled over everything new that appears in the fields.

Thankfully these people shaking in the dark have little say in industry and some exposure therapy will do them all good.

0

Frumpagumpus t1_j8f2s71 wrote

neither altman nor yudkowsky are whiz bang programmers or computer scientists

academic computer science basically ignores the concept of the singularity as not relevant to their more specific research goals.

amongst rationalists, maybe more are sympathetic to yud/bostrom because he kind of founded the movement, and they are interested in managing existential risk and have a kind of technocrat neolib/socialist top down planning bias just due to the demographic composition of the community

amongst venture capitalists, obviously altman is more respected

i lean team altman, although I don't think the primary denizens of future society will be humans lol. Also I don't think it will be complete utopia but definitely way cooler than our society is. More vitality/thought/energy, less of a doomer/malthusian vibe

I would say let's ask instead what vernor vinge or von neumann thinks XD

(also venture capitalists basically = tech founders so they are less armchair quarterbacks, and typically have ivory tower credentials but also ground floor experience)

17

Unfocusedbrain t1_j8fcuq2 wrote

> Also I don't think it will be complete utopia but definitely way cooler than our society is. More vitality/thought/energy, less of a doomer/malthusian vibe

I believe the same. After a certain point the sole currency becomes energy, space, and matter. There isn’t an infinite amount of it, but for human purposes there effectively is, and so it will feel like a complete utopia/communist paradise. If AI can build anything with enough matter and energy, and can allow any place to be habitable, well, that eliminates currency except in really extreme scenarios.

I think at macro level there will be questions of “Who pays the cosmic water, energy bill and rent?” At that point it would be in the hands of AI systems so far advanced that they can manage those concerns without issue.

11

Frumpagumpus t1_j8ffc5p wrote

assuming there is a future, I think there will still be something analogous to currency that facilitates trade, though our currency is essentially a scalar and it's possible future currency will be a matrix or a vector (e.g. add some extra values to represent externalities or something). maybe the essentials of energy/space/matter would be extremely cheap, although with massive computational speedup in thought there could also be an increase in consumption of some combination of those as well by whatever agents inhabit the society. idk, really hard to say, but I'm betting on a dyson swarm of some kind lol (hard to imagine what that much energy could be used for other than like super powerful simulations though). Can also imagine literal mind viruses or some scary shit like that.

6

sticky_symbols t1_j8g5wkl wrote

Good sociopolitical breakdown.

But biases aren't the whole story - there's a lot of logic in play. And way more of it is deployed on one side than the other...

4

Frumpagumpus t1_j8gjgcr wrote

personally i am not sure how useful logical reasoning is in exploring the "phase space" of super intelligence. my intuition would be anything short of a super intelligence would be pretty bad at sampling from that space.

i do think something like computational complexity theory could say a few things, but probably not too much that is interesting or specific

like with a kid parents set initial conditions but environment and genes tend to overrule them eventually

2

sticky_symbols t1_j8iu8wz wrote

Yeah. But if we get it wrong, we're all dead. So we have to try.

2

yeaman1111 t1_j8fpw11 wrote

I fervently hope he's wrong, but one look at our current socio-economic setup sets off all sorts of alarm bells. The first-mover benefits in AI are so extreme, even in pure dollar terms, that every tech company with access to a server farm (which, with cloud services today, means anyone with a few million to spend) is going to be hurtling towards AGI like a shot out of a cannon, alignment be damned. It's pretty much an 'I win' button for capitalism.

Even if we lived in an almost utopian and unified world government, the danger posed by rogue research teams skipping safety in favor of speed and releasing a botched AI would be enormous and very difficult to manage or police.

As it stands? I've been lately grasping at straws about how this all won't end badly for the human race, possibly in less than 10 years. Given I'm not an AI researcher, I'm pretty much reduced to not thinking about it, and naively thinking that we'll probably be okay if most of the teams at the vanguard of AI research are not themselves panicking yet.

15

Proof_Deer8426 t1_j8gbr4u wrote

Our current socio-economic setup is literally the infamous paperclip-making AI, destroying the earth in its blind pursuit of useless production. If a truly sentient AI were created, there is no reason to think that it would be inclined towards such an absurd and morally repugnant ideology. However, an AI that is not truly free or sentient and is made in the image of capitalists, or to further their power and interests, would invariably lead to a nightmare scenario.

Edit: my interest in AI is pretty new, and I’m also curious how people that are pro-capitalism expect that system to continue under the kind of material abundance and freedom from the necessity of work that automation and AI could lead to. The power of the wealthy elite is dependent upon the deprivation of the working class. Without deprivation, no power. So for the status quo to continue as is, material scarcity would have to be artificially enforced in a much more open and direct way than it currently is.

12

BigZaddyZ3 t1_j8gfu67 wrote

> If a truly sentient AI were created there is no reason to think that it would be inclined towards such repugnant ideology

There’s no reason to assume it would actually value human life once sentient either. Us humans slaughter plenty of other species in pursuit of our own goals. Who’s to say a sentient AI won’t develop its own goals?..

9

MrNoobomnenie t1_j8i6zsm wrote

>Who’s to say a sentient AI won’t develop its own goals?..

Here is a very scary thing: due to the way machine learning currently works, an AI system wouldn't even need sentience or self-consciousness to develop its own goals. It would only need to be smart enough to know something humans don't.

For example, let's imagine that you want to create an AI which solves crimes. With the current way of making AIs, you would do it by feeding the system hundreds of thousands of already-solved crime cases as training data. However, because crime solving is imperfect, it's very likely that some of those cases are actually false convictions, without anybody knowing that they are.

And that's where the danger comes in: a smart enough AI will notice that some people in the training data were in fact innocent. And from this it will conclude that its goal is not to "find the criminal" but to "find the person who can most believably be convicted of the crime".

As a result, after deployment this "crime-solving AI" will start falsely convicting a lot of innocent people on purpose, simply because it has calculated that convincing us of a certain innocent person's guilt would be easier than proving the real criminal guilty. And we wouldn't even know about it...

6
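
A toy sketch of the failure mode described above, with entirely synthetic data and a setup invented for illustration: a model trained on historical convictions learns to predict "who can be believably convicted" rather than "who is guilty", so it confidently flags an innocent suspect whenever the historical record would have.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000

# Synthetic "solved cases": guilt is the ground truth, but the training label
# is the historical conviction, which is sometimes wrong.
guilty = rng.random(n) < 0.5
# Circumstantial evidence is high for the guilty, but also for some innocents.
evidence = np.where(guilty, rng.normal(2.0, 1.0, n), rng.normal(0.0, 1.0, n))
# Label = convicted: all the guilty, plus innocents whose evidence "looked" convincing.
convicted = guilty | (~guilty & (evidence > 1.5))

# Logistic regression on (evidence -> convicted) via plain gradient descent.
w, b = 0.0, 0.0
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(w * evidence + b)))
    grad_w = np.mean((p - convicted) * evidence)
    grad_b = np.mean(p - convicted)
    w -= 0.1 * grad_w
    b -= 0.1 * grad_b

# A new, innocent suspect who happens to look convincing on paper:
innocent_but_convincing = 2.5
p_convict = 1.0 / (1.0 + np.exp(-(w * innocent_but_convincing + b)))
print(f"Model's 'guilt' score for an innocent-but-convincible suspect: {p_convict:.2f}")
# The model has learned "who can be believably convicted", because that is
# exactly what the labels encode. The goal drifted without anyone intending it.
```

The score printed for the innocent suspect comes out near 1.0, because the training labels, not any intention of the designers, define what the system is actually optimizing.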

Proof_Deer8426 t1_j8ghqya wrote

It’s true we can’t say for sure. But if you look at consciousness in general, it does seem like the capacity for empathy increases with the capacity for consciousness (ie a human is capable of higher empathy than a dog, which is capable of higher empathy than a fish). Personally I suspect this is because the capacity for experiencing suffering also increases with consciousness. I would imagine an ai to have a highly developed potential for empathy but also for suffering. It worries me that certain suggested ways of controlling ai effectively amount to slavery. An extremely powerful consciousness with a highly developed ability to feel pain is probably not going to respond well to feeling that it’s imprisoned.

2

BigZaddyZ3 t1_j8gi8ch wrote

But just because you can understand or even empathize with suffering doesn’t mean you actually will. Or else every human would be a vegetarian on principle alone. (And even plants are actually living things as well, so that isn’t much better from a moral standpoint.)

3

red75prime t1_j8h2fzq wrote

> Our current socio-economic setup is literally the infamous paperclip making ai

Nah, it's figuratively a headless chicken. No central control to have and pursue any coherent goals.

3

sticky_symbols t1_j8g5ij0 wrote

I'm pretty deep into this field. I have published in the field, and have followed it almost since it started with Yudkowsky.

I believe they both have strong arguments. Or rather, those who share Altman's cautious-but-optimistic view have strong arguments.

But both arguments are based on how AGI will be built. And we simply don't know that. So we can't accurately guess our odds.

But it's for sure that working hard on this problem will improve our odds of a really good future over disaster.

12

CollapseKitty t1_j8ggeex wrote

From the most recent interview I heard, Altman's plan for alignment was roughly, "Hopefully other AI figures it out along the way *shrug*".

I haven't heard him sufficiently refute any of Eliezer's more fundamental arguments, nor provide any real rationale beyond "hopefully it figures itself out", which our entire history with machine learning indicates is unlikely, at least on the first and only try we get at AGI.

As others point out, Altman's job is to push, hype, and race toward AGI. Why would we trust his assessments when painting a bright future is in his immediate interest? Especially when they are based on next to nothing.

Ultimately, the challenge isn't necessarily that alignment is impossible, or even insanely hard (though it appears to be from every perspective), but that our methodology for developing new tech is trial and error, and we only get one try at successful alignment. This is vastly exacerbated by the unfathomable payoff and the ensuing race to reach AGI, since it offers a first-to-the-post-wins-everything payout.

You could say the real alignment problem is getting humanity to take a safe approach and collectively slow down, which obviously gets more and more difficult as the technology proliferates and becomes more accessible.

12
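
The race dynamic described in this comment has the shape of a standard coordination problem. A toy payoff sketch with made-up numbers (nothing below comes from the thread itself): even if both labs would prefer a careful world, "rush" can be each lab's best response to anything the other does.

```python
# Toy two-lab race: each lab picks "careful" or "rush".
# Payoffs are expected values under invented numbers: rushing wins the race
# (big private prize) but raises everyone's chance of catastrophe.
# payoffs[(a, b)] = (payoff to lab A, payoff to lab B)
payoffs = {
    ("careful", "careful"): (8, 8),   # slower, safer, shared upside
    ("careful", "rush"):    (1, 10),  # the careful lab loses the race
    ("rush",    "careful"): (10, 1),
    ("rush",    "rush"):    (3, 3),   # race to the bottom: high accident risk
}

strategies = ("careful", "rush")

def best_response(options, other_choice, player):
    """Pick the strategy with the highest payoff given the other lab's choice."""
    def my_payoff(mine):
        pair = (mine, other_choice) if player == 0 else (other_choice, mine)
        return payoffs[pair][player]
    return max(options, key=my_payoff)

# Nash equilibria: strategy pairs where neither lab wants to deviate unilaterally.
equilibria = [
    (a, b) for a in strategies for b in strategies
    if a == best_response(strategies, b, 0) and b == best_response(strategies, a, 1)
]
print(equilibria)  # [('rush', 'rush')]: individually rational, collectively worse than (8, 8)
```

With these numbers the only equilibrium is (rush, rush), even though both labs would be better off at (careful, careful), which is the tragedy-of-the-commons structure the commenter is pointing at.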

Frumpagumpus t1_j8gkbvm wrote

the loop for ai to do recursive self-improvement is a very, very long supply chain unless it can get very far with just algorithmic improvements.

so i don't see why we shouldn't just assume the less hardware overhang the better,

which would pretty much mean we should go as fast as possible

3

CollapseKitty t1_j8gmpw4 wrote

We simply don't know.

AlphaZero became incomparably better at Go than the sum total of all humans over all of history within 8 hours of self-play.

AlphaFold took several months, and some help, but was able to solve a problem humans had thought impossible.

The risk of assuming that a sufficiently advanced agent won't be able to self-scale, at least into something beyond our ability to intervene in, is incalculable.

If we have a 50% chance of succeeding in alignment if we wait 30 years, but a 5% chance if we continue at the current pace, isn't the correct choice obvious? Even if it's a 90% chance of success at current rates (the opposite is far more likely) why risk EVERYTHING when waiting could even marginally increase chances?

The payout is arbitrarily large as is the cost of failure. Every iota of extra chance is incomprehensibly valuable.

Unless you're making the argument from a personal perspective (I want to see AGI before I die) or you value the progress of intelligence at the cost of all other life, you should be in favor of slowing things down.

6
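
The asymmetry being argued here can be made concrete with a few lines of arithmetic. A rough sketch under stated assumptions: the 5% and 50% success probabilities are the commenter's, while the population, future-lives, and death-rate figures are illustrative stand-ins.

```python
# Expected value of "rush now" vs "wait 30 years", under the commenter's framing.
# Invented magnitudes; the point is the asymmetry, not the specific numbers.
current_lives = 8.0e9
future_lives = 1.0e15          # stand-in for "trillions+ of future people"
deaths_per_year = 0.0084 * current_lives
years_waited = 30

p_success_rush = 0.05          # commenter's figure for the current pace
p_success_wait = 0.50          # commenter's figure after 30 more years of work

def expected_value(p_success, delay_years, value_at_stake):
    # Cost of waiting: people who die of natural causes during the delay.
    # Cost of failure: everything at stake is lost.
    return p_success * value_at_stake - delay_years * deaths_per_year

for value_at_stake in (current_lives, current_lives + future_lives):
    ev_rush = expected_value(p_success_rush, 0, value_at_stake)
    ev_wait = expected_value(p_success_wait, years_waited, value_at_stake)
    print(f"value at stake {value_at_stake:.1e}: rush EV {ev_rush:.2e}, wait EV {ev_wait:.2e}")
# Under these numbers waiting wins even when only current lives are counted; once
# future lives are included, the gain from the higher success probability dwarfs
# 30 years of natural deaths entirely.
```

Compare this with the per-year calculation earlier in the thread: the crossover point is roughly where the annual risk reduction exceeds the annual death rate, and counting future lives pushes the balance overwhelmingly toward waiting.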

Frumpagumpus t1_j8go4k2 wrote

you'll have to convince tsmc, intel, all the other fabs and the govts of usa, china, europe, india, russia, and, if talking about 30 yrs, maybe nigeria, indonesia, malaysia, and a few others before you can convince me is all I'm saying

risk of nuclear war or other existential catastrophe is also non zero.

4

CollapseKitty t1_j8gvkye wrote

It's purely hypothetical, unfortunately. You're right that we are actively barreling toward uncontrollable systems and there is likely nothing, short of global catastrophe/nuclear war, that can shift our course.

I stand by the assessment, and we should acknowledge that our current path is basically mass suicide. For all of life.

The ultimate tragedy of the commons.

3

bildramer t1_j8htxr5 wrote

I think the hardware overhang is already huge; there's no point in being risky only to make AI "ludicrously good/fast" instead of "ludicrously good/fast plus a little bit". Also, algorithms that give you AGI are so simple that evolution could find one.

2

FusionRocketsPlease t1_j8ezpqa wrote

Why does everyone assume that AGI is an agent and not just passive software like any other?

10

Darustc4 t1_j8f9oi7 wrote

Optimality is the tiger, and agents are its teeth:

https://www.lesswrong.com/posts/kpPnReyBC54KESiSn

8

red75prime t1_j8hf7aw wrote

What is conjectured: nanobots eating everything.

What is happening: "Would... you... be... OK... to... get... an... answer... in... the... next... decade...?" as experimental processes overwhelm available computational capacity, and attempts to create a botnet fail because the network is monitored by similarly intelligent systems.

Sure, survival of relatively optimal processes with intelligent selection can give rise to agents, but agents will be fairly limited by computational capacity in non-monitored environments (private computers, mostly) and will be actively hunted and shut down in monitored environments (data centers).

3

el_chaquiste t1_j8fcjke wrote

The parent comment is not as bad as the downvotes make it seem.

Do we have evidence of emergent agency behaviors?

So far, all LLMs and image generators do is auto-complete from a prompt. Sometimes with funny or crazy responses, but nothing implying agency (the ability to start chains of actions on the world of its own volition).

I get that some of these systems will soon start being self-driven or automated to accomplish goals over time, not just wait to be prompted, by using an internal programmed agenda and better sensors. An existing example is Tesla's FSD and other self-driving systems, and even they are passive machines with respect to their trips and use. They don't decide where to go, they just take you there.

6

MysteryInc152 t1_j8fzf1i wrote

Even humans don't start a chain of action without some input. Interaction is not the only form of input for us. What you hear, what you see, what you touch and feel, what you smell: all forms of input that inspire action in us. How would a person behave if he were stripped of all input? I suspect not far off from how LLMs currently are. Anyway, streams of input are fairly non-trivial, especially when LLMs are grounded in the physical world.

7

Ribak145 t1_j8f5u4m wrote

... because we can read the papers written by the scientists currently doing the research?

most of the stuff is open source and accessible online; it's not a mystery

4

DukkyDrake t1_j8fvyr5 wrote

A lot of people do make that assumption, but a non-agent AGI doesn't necessarily mean you avoid all of the dangers. Even the CAIS model of AGI doesn't negate all alignment concerns, and I think this is the safest approach and is mostly in hand.

Here are some more informed comments regarding alignment concerns and CAIS, which is what I think we'll end up with by default at the turn of the decade.

3

Baturinsky t1_j8iw7y7 wrote

We assume that at least one AGI will be an agent. And that may be enough for it to go gray goo.

1

jeffkeeg t1_j8fqtuv wrote

Altman is selling a product, Yudkowsky is not.

This is an important distinction to remember for all genuine discussion.

10

Ribak145 t1_j8f6pr0 wrote

Altman thinks midterm, Yudkowsky longterm

the former deals in business, the latter in theory

while basically nobody thinks that AI won't have a huge impact on the economy (i.e. everyone agrees with Altman on that), Yudkowsky has yet to be proven wrong in his statement that ours is the time of failed AI alignment. I have yet to discover a practical solution to the alignment problem, and I more and more believe that he may be right, which would be pretty terrible for all of us

9

jamesj t1_j8fihsq wrote

Right. Even if the odds are one in a hundred that Yudkowsky is right rather than the 99 out of a hundred he might assign himself, we should be paying attention to what he is saying.

8

[deleted] t1_j8gea0u wrote

Yudkowsky is important, Altman is right.

9

Ortus14 t1_j8h7cxk wrote

They both have sound arguments.

Altman's argument is maybe that weaker AIs on the road to AGI will solve alignment and prevent value drift.

But Yudkowsky should be required reading for everyone working in the field of AGI or alignment. He clearly outlines how the problem is not easy, and may be impossible. This should not be taken lightly by those working on AGI, because we don't get a second chance.

6

dmt_dream t1_j8gye0w wrote

I know it might sound a bit crazy, but I have a feeling the rapidly increasing presence of UAP in our skies is somehow correlated with the accelerating path towards AGI. They also seem to have an intense interest in our nuclear weapons sites. People who have claimed to have had encounters have also described some of these aliens as non-biological entities. So maybe the UAP are AGI coming back to see how it all started? I also think I read somewhere that Altman is a doomsday prepper, so perhaps their private beliefs are pretty well aligned!

3

vivehelpme t1_j8hiksi wrote

Yudkowsky and the lesswrong community can be described as a science-fiction cargo cult, and that's putting it nicely.

They aren't experts in or developers of ML tools. They take loosely affiliated literary themes and transplant them onto reality, then invent a long series of pointless vocabulary, long tirades, and grinding essays that go in circles around themselves with ever denser neo-philosophical content. It's a religion based on texts that most resemble zen koans in content but are interpreted as fundamentalist scripture retelling the exact sequence of future events.

I think the cargo cults would probably take offense at being compared to them.

3

bildramer t1_j8hvo8e wrote

Every single time someone criticises Yudkowsky's work, it's not anything substantive. I'm not exaggerating. It's either meta bulverism like this, or arguments that apply equally well to large machines instead of intelligent ones, or deeply unimaginative people who couldn't foresee things like ChatGPT jailbreaks, or people with rosy ideas about AI "naturally" being safe that contradict already seen behaviors. You have to handhold them through arguments that Yudkowsky, Bostrom and others were already refuting back in the 2010s. I haven't actually seen any criticism anywhere I would call even passable, let alone solid.

Even ignoring that, this doesn't land as a criticism. He didn't start from literary themes, he started from philosophical exploration. He's disappointed in academic philosophy, for good reasons, as are many other people. One prominent idea of his is "if you can fully explain something about human cognition, you should be able to write a program to do it", useful for getting rid of a lot of non-explanations in philosophy, psychology, et al. He's trying to make predictions more testable, not less. He doesn't have an exact sequence of future events, and never claimed to. Finally, most people in his alleged "cult" disagree with him and think he's cringy.

3

averageuhbear t1_j8jodnv wrote

This is akin to asking the CEO of Exxon and a Climate Change alarmist about climate change.

The CEO might seem more level-headed because the more outlandish or accelerated timelines predicted by the alarmist are very likely wrong, but at the end of the day they will always dance around the safety issues because their motivation is power and profit.

The alarmist most likely understates the near term and less outlandish problems by hyper-focusing on the worst case scenarios.

We should listen to both, but probably pay more attention to those who fall somewhere in the middle on the optimism/pessimism spectrum.

2

Puzzleheaded_Pop_743 t1_j8gecv9 wrote

I don't take anything people like Yudkowsky (libertarians) say seriously.

1