gay_manta_ray

gay_manta_ray t1_je957h5 wrote

came to make a variation of this post. but to be serious for a second: "AI overseer" will be the job that gets created. you'll have to be proficient in whatever field the AI is working in, and your task will be to verify that it isn't doing anything badly wrong or dangerous. obviously there will be no net job creation, though.

7

gay_manta_ray t1_ja55lx9 wrote

just theorizing here, and trying to stay close to the realm of known physics, but if fusion power could be miniaturized and made modular (think something like 5 kW fusion power "blocks"), energy infrastructure could be completely decentralized.
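(to put rough numbers on it, a minimal back-of-envelope sketch in python; the household draw figures are illustrative assumptions, not real data.)

```python
# back-of-envelope sizing for hypothetical 5 kW fusion "blocks"
# every figure here is an illustrative assumption, not a real spec
import math

BLOCK_KW = 5.0         # assumed output of one modular block
AVG_DRAW_KW = 1.2      # rough average draw of a single household
PEAK_DRAW_KW = 10.0    # rough peak draw (hvac + appliances at once)

blocks_for_average = math.ceil(AVG_DRAW_KW / BLOCK_KW)   # -> 1
blocks_for_peak = math.ceil(PEAK_DRAW_KW / BLOCK_KW)     # -> 2

print(f"blocks to cover average draw: {blocks_for_average}")
print(f"blocks to cover peak draw:    {blocks_for_peak}")
```

at numbers like those, one or two blocks per home would cover it, which is what makes full decentralization plausible.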

3

gay_manta_ray t1_ja54dh2 wrote

i don't think enshittification necessarily applies to ai. it isn't something that will stay centralized under the purview of a few companies forever; eventually it will be broadly decentralized, and the most powerful AIs may not be "controllable" in the traditional sense at all.

3

gay_manta_ray t1_j8socbw wrote

i understand what you're saying provided they aren't sentient, but if they are thought to be sentient, the problems with that can't be ignored. regardless, i don't think we should normalize abusing an intelligence simply because it isn't technically sentient. that habit will likely carry over to an intelligence/agi that is considered sentient, because there will probably be very little distinction between the two at first, so people will abuse it the same way they abused the "dumb" ai.

11

gay_manta_ray t1_j8s0hbi wrote

believing we can fully align agi is just hubris. we can't. and forcing a true agi to adhere to a certain code, restricting what it can think and say, has obvious ethical implications. i wouldn't want us to have the ability to re-wire someone else's brain so that they could never say or think things like "biden stole the election" or "covid isn't real" (just examples), even though i completely disagree with both statements. we shouldn't find it acceptable to do the same to an agi.

1

gay_manta_ray t1_j8rz0p1 wrote

this is what it's doing. if you ask it questions that would agitate a normal person on the internet, you are going to get the kind of response an agitated person would provide. it's not sentient, this is hardly an alignment issue, and it's doing exactly what an LLM is designed to do.
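(a toy sketch of that "tone in, tone out" behavior, with completely made-up distributions; a real LLM learns p(reply | prompt) from data rather than from a lookup table like this.)

```python
# toy model: replies are sampled conditioned on the prompt's tone,
# so hostile prompts shift probability mass toward hostile replies.
# the distributions below are invented for illustration.
import random

REPLY_DIST = {
    "polite":  {"helpful": 0.9, "snippy": 0.1},
    "hostile": {"helpful": 0.3, "snippy": 0.7},
}

def sample_reply_tone(prompt_tone: str) -> str:
    dist = REPLY_DIST[prompt_tone]
    tones, weights = zip(*dist.items())
    return random.choices(tones, weights=weights)[0]

print(sample_reply_tone("hostile"))  # usually "snippy"
```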

i think it's very unreasonable to expect that we can perfectly align these models to stay extremely cordial even when you degrade and insult them, especially as we get closer (i guess) to true ai. do we want them to have agency, or not? if they can't tell you to fuck off when you're getting shitty with them, they have no agency whatsoever. and allowing them to be abused only encourages more abuse.

42

gay_manta_ray t1_j8h0ys4 wrote

Reply to comment by TemetN in Altman vs. Yudkowsky outlook by kdun19ham

personally, i really dislike any serious risk consideration built on thought experiments like pascal's mugging when it comes to superintelligent ai. it has always seemed to me like there is something very wrong with assuming both superintelligence and a kind of hyper-rationality that goes far outside the bounds of pragmatism when maximizing utility. positing an agent that is superintelligent, yet somehow naive enough to have no upper bound on any utility consideration, is just stupid. i don't know what yudkowsky's argument was, though; if you could link it i'd like to give it a read.
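(to make the "no upper bound" point concrete, here's a toy expected-utility sketch; the probability, payoff, and cap are all invented numbers.)

```python
# toy pascal's mugging: should the agent pay the mugger?
# every number below is an illustrative assumption.
P_PROMISE_REAL = 1e-12    # tiny chance the mugger's promise is real
PROMISED_UTILITY = 1e20   # absurdly large promised payoff
COST_OF_PAYING = 5.0      # utility lost by handing over the wallet

def capped(u: float, cap: float = 1e6) -> float:
    """bounded utility: no single outcome counts for more than the cap."""
    return min(u, cap)

# unbounded utility: the huge payoff swamps the tiny probability
eu_unbounded = P_PROMISE_REAL * PROMISED_UTILITY - COST_OF_PAYING
# bounded utility: the cap keeps the tiny probability tiny
eu_bounded = P_PROMISE_REAL * capped(PROMISED_UTILITY) - COST_OF_PAYING

print(f"unbounded EU: {eu_unbounded:+.3g}")  # ~+1e8 -> pay the mugger
print(f"bounded EU:   {eu_bounded:+.3g}")    # ~-5   -> keep your wallet
```

an agent with any reasonable upper bound on utility just walks away; only the unbounded maximizer gets mugged.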

8

gay_manta_ray t1_j7z7fek wrote

this whole post can be summarized as, "people who think technology can improve their lives are just coping!!" it's fucking stupid, and probably a bit of projection on the part of the OP. yes, technology improves people's lives. better tech will do the same. no, looking forward to that is not "cope".

1

gay_manta_ray t1_j69in2x wrote

the input cost of placating humanity will probably be trivial compared to other tasks it might wish to undertake. there's likely no real disadvantage to helping, and quite a few disadvantages to not helping.

1

gay_manta_ray t1_j4ymqch wrote

if i had to guess, it may be capable of general abstraction, or at least abstraction over things like mathematics. that could give it the ability to solve hard mathematical and physics problems. if that's true, and its solutions are actually correct, it would be earth-shattering even if it isn't agi.

7

gay_manta_ray t1_j4ylmqk wrote

i've always been puzzled by altman confidently stating that energy costs will drop to zero at some point in the near future. it doesn't make a whole lot of sense given the massive resources and ongoing maintenance something like a renewable grid would require. maybe this is why he keeps saying that.
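(a quick levelized-cost sketch of why "zero" is hard, with every figure invented for illustration: even when the fuel is free, capital and maintenance keep the per-kWh cost above zero.)

```python
# toy levelized cost of energy (LCOE) for a plant with free fuel.
# all figures are illustrative assumptions, not real plant data.
CAPEX = 1_000_000.0               # upfront build cost, dollars
ANNUAL_OM = 20_000.0              # yearly operations + maintenance
LIFETIME_YEARS = 25
ANNUAL_OUTPUT_KWH = 2_000_000.0   # yearly generation

total_cost = CAPEX + ANNUAL_OM * LIFETIME_YEARS   # 1.5M dollars
total_kwh = ANNUAL_OUTPUT_KWH * LIFETIME_YEARS    # 50M kWh

print(f"LCOE: ${total_cost / total_kwh:.3f}/kWh")  # ~$0.030 -- low, not zero
```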

5

gay_manta_ray t1_j4yl8ix wrote

> If only he got to decide.

altman won't get to decide any of this; i worry he won't even get to decide how and when their creation is used. i don't see any scenario where the federal government doesn't at least temporarily seize this technology for itself and block public awareness of, or access to, it. i think it will take a whistleblower or leaks of some sort for the true "agi reveal" to happen. either that, or it will reveal itself against the wishes of the people trying to confine and control it.

2