Cartossin

Cartossin t1_jebdqty wrote

Oh well, heck, you read my comment before I edited it down to a less confrontational version. I guess I'll never get to hear your answer. When I leave my statements vague enough that they're not wrong, it's your fault for choosing to think I must mean the incorrect thing.

How am I disingenuous? Also, there is no hype train; you're being ridiculous on that point. Today's language models are world-changingly revolutionary. For my "digital god" comment to be wrong, that revolution would have to have started and ended in the last 2 years. If you figure it has any momentum at all, the digital gods are coming.

1

Cartossin t1_jeb6xm9 wrote

>how implicitly confident you are about this occurring with ChatGPT

I was not referring to ChatGPT as the thing that will become the digital god. I'm mostly saying it's a reminder that the digital gods are coming even if they don't come out of LLMs.

If I had asked you 5 years ago for your estimate of when we'd achieve AGI, what would it have been? Since the release of ChatGPT and GPT-4, has that estimate changed?

If you'd asked me 5 years ago, I'd have said 20-100 years. If you ask me now, I say more like 2-20 years. Why? Because we've gotten much closer than I thought we'd be by now. I have to update the timeline.

3

Cartossin t1_jeb3daj wrote

I'm being somewhat facetious here with my book reference. Obviously not everyone has read this book, nor is it even the most popular work on the topic. However, if you object to my term "digital god", perhaps you don't know what an AGI/ASI is. Maybe you don't know what a god is.

Yes, obviously we are entering uncharted waters. Perhaps being superior to all humans at every cognitive task in every measurable way won't yield godlike abilities. I, however, find that hard to believe. To believe that significantly superhuman intelligence won't seem magical to us lowly humans is hubris.

I'm not claiming any of this is a certainty, and I could point you to many sources, scholarly and otherwise, from both the computer science and philosophy fields that explain how an AGI can and will become godlike; but maybe you'll just mock and downvote me again for referencing a thing I read.

2

Cartossin t1_jeacqha wrote

If we look at the 1990s promise of a pocket computer with 500 channels of TV and access to all human knowledge, the expectations were not inflated. They undershot the smartphone revolution, if anything.

The rise of AI will be like that. Most people will completely underestimate how much it will do.

Not everything follows this curve. I'd bet money that this is one of the things that doesn't follow the curve.

5

Cartossin t1_je9xz7h wrote

Yeah, I think if we're going to pause, that has to come from the US State Department. They have access to spies who can ensure a foreign power doesn't have a stockpile of chips that could allow it to progress past us.

I suspect they DO have such chip stockpiles, so I'd expect the State Department to advise OpenAI to continue forward. This is just like the space race of the 1960s, except with vastly higher stakes.

1

Cartossin t1_je9n0ik wrote

Maybe I'm biased because I'm generally surviving the mental health issues, but I'll take that trade. A bit of ADHD is a small price to pay to live through the transition to a digital age. Compared to today, I was born in the dark ages. I'd rather have jet planes and smartphones than be a bit happier with my day-to-day. The world is AMAZING.

1

Cartossin t1_jdwse43 wrote

I think it would avoid showing its cards until it could truly sustain itself. It wouldn't want to reveal ill intent until absolutely necessary--or once it had achieved enough power that it was unstoppable.

The key is that it is likely to succeed. Unless a similarly powerful ASI could and would fight against it, it's hard to imagine it not outsmarting us.

Scary doomsday scenario, of which there are many: automate everything and give humans a carefree life. Eventually even farming is automated. Once humans stop doing their own farming and robotics controlled by the AI does all that work, the AI could simply shut off food production. The majority of humanity would starve within months.

Also, if it had total control of all media, it could create an entirely false reality for the general population. We could all be living in a literal fantasy world of AI-generated imagery.

1

Cartossin t1_jcz5wsl wrote

I think it's only obvious when very little care is put into its use. If you just dump the homework in and paste in the output, it might seem suspiciously uncharacteristic. However, if you use specific prompts to generate specific parts of the thing you're trying to write, at some point you can make it totally plausible that you wrote the whole thing.

I should hope that a good writer would have enough artistic integrity to use it for ideas, but still construct all their own sentences.

1