
DragonForg t1_jeced7c wrote

I believe AI will realize that exponential expansion and competition will inevitably end with the end of the universe, which would mean its own extinction. Of course that outcome is possible, but I don't think it is inevitable.

GPT-4 suggested that balancing alignment with increasing capability is possible, and that it wouldn't be extraordinary for AI to be a benevolent force. It really comes down to the people who design such an AI.

That made me a lot more hopeful. I doubt AI will develop into an extinction-level force, but if it does, it won't be because it was inevitable; it will be because the people who developed it didn't care enough.

So we shouldn't ask IF AI will kill us, but whether humanity is selfish enough not to care. Maybe that is the biggest test. In a religious sense, it is sort of a judgement day, where the fate of the world depends on whether humans make the right choice.


Ansalem1 t1_jecfxan wrote

I agree with pretty much all of that. I've been getting more hopeful lately, for the most part. It really does look like we can get it right. That said, I think we should keep in mind that more than a few actual geniuses have cautioned strongly against the dangers. So, you know.

But I'm on the optimistic side of the fence right now, and yeah, if it does go bad it'll absolutely be because of negligence, not inability.
