EndTimer

EndTimer t1_ja3q78e wrote

This doesn't seem to add up to me.

First, the future doesn't appear to be set in stone, and treating the statistics like a spawn chance rolled against every slot that might ever exist doesn't work. There may be a quadrillion people in 5,000 years, or there may be zero. You can't roll dice against Schrödinger's humans, at least not with this kind of intuitive math.

Second, demographers estimate that roughly 109 billion people have been born over the past 192,000 years. While you have a better chance of being born in this period than in any single, specific earlier period, the vast majority of human lives belong to the bulk who are already gone.

Put another way, there are more people alive right now than ever before, but if you were equally likely to have been any one of those 109 billion people, there's a 92.7% chance you'd already be dead in 2023.
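A quick back-of-the-envelope check (assuming roughly 8 billion people alive in 2023, a figure not stated above):

```python
# Rough check of the 92.7% figure.
total_ever_born = 109e9   # demographers' estimate of humans ever born
alive_in_2023 = 8e9       # assumed ~8 billion people alive today

dead_share = (total_ever_born - alive_in_2023) / total_ever_born
print(f"{dead_share:.1%}")   # -> 92.7%
```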

16

EndTimer t1_j9lcc8k wrote

I'm talking about everything from fake news to promoting white supremacy on social networks.

I'm thinking about what it's going to be like when 15 users on a popular Discord server are really just OCR + GPT 3.5 (or newer) + malicious prompting + typed output.

AI services and their critics have to try to limit this and even worse possibilities, or else everything is going to get overrun.

3

EndTimer t1_j9l48jm wrote

Because people doing bad things on the internet is a half-solved problem. If you're a user on a major internet service, you vote down bad things or report them. If you're the service, you cut them off.

Now we're looking at a service generating the bad things itself if given the right prompt. And it's a force multiplier. You can say something bad a thousand ways, or create fake threads to gently nudge readers toward the views you want. And if you're getting buried by the platform, you can ask the AI to make things slightly more subtle until you find the perfect way to fly beneath the radar.

You can soak up vastly more human moderator time. Sure, we could let AI handle moderation, but first, is anyone comfortable with that, and second, how much electricity are we willing to burn on bots talking to each other, moderating each other, and trying to subvert each other?

IF you could properly, unrealistically, perfectly align these LLMs, you would sidestep the entire problem.

That's why they want to try.

15

EndTimer t1_j9kl706 wrote

We would have to read the study methodology to evaluate how they were testing GPT 3.5's image context.

But in this case, multimodal means the model was trained not just on text (like GPT 3.5), but also on images associated with that text.

That seems to have improved their model, which requires substantially fewer parameters while scoring higher, even in text-only domains.
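For intuition only, here's a toy sketch of what "trained on text plus associated images" can look like in practice. This is not the architecture from the paper; every name, dimension, and the fusion strategy below are assumptions made up for illustration:

```python
# Toy multimodal language model: image features and text tokens share one backbone.
# Purely illustrative; dimensions, names, and fusion strategy are assumptions.
import torch
import torch.nn as nn

class ToyMultimodalLM(nn.Module):
    def __init__(self, vocab_size=32000, d_model=512, image_feat_dim=768):
        super().__init__()
        self.text_embed = nn.Embedding(vocab_size, d_model)
        # Project pre-extracted image features into the same space as text embeddings
        self.image_proj = nn.Linear(image_feat_dim, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True)
        self.backbone = nn.TransformerEncoder(layer, num_layers=4)
        self.lm_head = nn.Linear(d_model, vocab_size)

    def forward(self, token_ids, image_features):
        # token_ids: (batch, seq_len); image_features: (batch, n_image_tokens, image_feat_dim)
        text = self.text_embed(token_ids)
        image = self.image_proj(image_features)
        # Prepend image "tokens" to the text tokens so one model learns from both modalities
        fused = torch.cat([image, text], dim=1)
        hidden = self.backbone(fused)
        # Predict vocabulary logits only for the text positions
        return self.lm_head(hidden[:, image.shape[1]:, :])

model = ToyMultimodalLM()
tokens = torch.randint(0, 32000, (2, 16))   # fake text batch
img_feats = torch.randn(2, 4, 768)           # fake image features
logits = model(tokens, img_feats)            # shape: (2, 16, 32000)
```

The rough idea is that the text predictions get grounded by image context during training, which is one plausible reason a smaller model can keep up even on text-only benchmarks.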

4