Cryptizard

Cryptizard t1_jaa0vrt wrote

I think you are being pretty unfair to MJ. The faces are about a million times better than what Stable Diffusion can do, which matters way more than getting fingers or toes exactly right. It is also not true that it "obliterates any landscape made in MJ".

As far as FlexGen goes, even if it works it would take something like an hour to process one prompt on a regular GPU. No one would want that.
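
A rough back-of-envelope sketch of why offloading is that slow. The numbers are my own assumptions (GPT-3-scale model, fp16 weights, roughly PCIe 4.0 x16 bandwidth), not FlexGen's published figures, but they land in the same ballpark:

```python
# Assumed figures, not measured: a 175B-parameter model whose fp16 weights
# (~350 GB) don't fit in VRAM, so they must be streamed to the GPU from
# CPU RAM / disk for every generated token.
params = 175e9            # assumed GPT-3-scale parameter count
bytes_per_param = 2       # fp16
weight_bytes = params * bytes_per_param          # ~350 GB of weights

pcie_bps = 16e9           # assumed ~PCIe 4.0 x16 effective bandwidth, bytes/s
seconds_per_token = weight_bytes / pcie_bps      # ~22 s per token

response_tokens = 150     # a modest reply
minutes = seconds_per_token * response_tokens / 60
print(f"~{seconds_per_token:.0f} s/token, ~{minutes:.0f} min per response")
```

Streaming the full weight set per token dominates everything else, which is why a single prompt stretches toward an hour on consumer hardware.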

2

Cryptizard t1_ja97t9t wrote

>By far the best AIs are being controlled by everyone

Midjourney seems like the best text-to-image model by far, way better than Stable Diffusion. And GPT models are fundamentally too large for regular people to run on their own machines; they require expensive, enterprise hardware. There might be a lightweight version eventually, but it will always be inferior to the bigger models that run in the cloud.
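
The memory math alone makes the point. Using assumed but commonly cited figures (GPT-3-scale parameter count, fp16 storage, a high-end consumer GPU):

```python
# Weights-only memory estimate (assumptions: 175B params, fp16, no
# activations or KV cache, which only make it worse).
params = 175e9
vram_needed_gb = params * 2 / 1e9    # fp16 = 2 bytes per parameter
consumer_vram_gb = 24                # e.g. a top-end consumer card

print(f"needs ~{vram_needed_gb:.0f} GB of VRAM, "
      f"about {vram_needed_gb / consumer_vram_gb:.0f}x a "
      f"{consumer_vram_gb} GB consumer GPU")
```

Even before activations and the KV cache, the weights alone need on the order of ten-plus consumer GPUs' worth of memory.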

2

Cryptizard t1_ja69a0b wrote

Reply to comment by Mason-B in So what should we do? by googoobah

It seems to come down to the fact that you think AI researchers are clowns and won’t be able to fix any of these extremely obvious problems in the near future. For example, there are already methods to break the quadratic bottleneck of attention.
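
One family of such methods swaps the softmax for a positive kernel feature map, which lets the key-value summary be computed once and reused, dropping the cost from O(n²) to O(n) in sequence length. A minimal unbatched NumPy sketch in the style of linear attention (Katharopoulos et al.); this is my own simplification, bidirectional and single-head:

```python
import numpy as np

def linear_attention(Q, K, V):
    """O(n) attention sketch via a kernel feature map instead of softmax.
    Q, K: (n, d); V: (n, d_v). Cost is O(n * d * d_v), linear in n,
    versus O(n^2 * d) for standard softmax attention."""
    phi = lambda x: np.where(x > 0, x + 1.0, np.exp(x))  # elu(x) + 1, strictly positive
    Qf, Kf = phi(Q), phi(K)
    KV = Kf.T @ V                      # (d, d_v) summary, built once for all queries
    Z = Qf @ Kf.sum(axis=0)            # per-query normalizer, shape (n,)
    return (Qf @ KV) / Z[:, None]

rng = np.random.default_rng(0)
n, d = 512, 64
out = linear_attention(rng.normal(size=(n, d)),
                       rng.normal(size=(n, d)),
                       rng.normal(size=(n, d)))
print(out.shape)  # (512, 64)
```

The causal (autoregressive) variant replaces the one-shot `KV` summary with a running prefix sum, keeping the same linear cost.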

Just two weeks ago there was a paper that compresses GPT-3 to 1/4 the size. That's a 4x reduction from a single paper, let alone 10 years of them. Your pessimism just makes no sense in light of what we have seen.
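
Compression results like that typically come from low-bit quantization. A toy sketch (my own illustration, not that paper's actual method) of symmetric per-tensor int8 quantization, which stores one byte per fp32 weight plus a shared scale, a 4x shrink:

```python
import numpy as np

# Illustrative post-training quantization: replace fp32 weights with int8
# codes plus a single fp32 scale factor. Not the cited paper's technique,
# just the basic idea behind this class of compression.
def quantize_int8(w):
    scale = np.abs(w).max() / 127.0                    # symmetric per-tensor scale
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.normal(scale=0.02, size=(4096, 4096)).astype(np.float32)
q, s = quantize_int8(w)
err = np.abs(w - dequantize(q, s)).max()

print(f"{w.nbytes / q.nbytes:.0f}x smaller, max abs error {err:.1e}")
```

The reconstruction error stays bounded by half the scale step, which is why models often lose little accuracy from this kind of compression.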

1

Cryptizard t1_ja648ox wrote

Reply to comment by Mason-B in So what should we do? by googoobah

Yes, like I said, everything you wrote is wrong. Moore's law still has a lot of time left on it. There are a lot of new advances in ML/AI. You ignore the fact that we have seen a repeated pattern where a gigantic model comes out that can do thing X, and then within the next 6-12 months someone else releases a compact model 20-50x smaller that can do the same thing. It happened with DALLE/Stable Diffusion, it happened with GPT/Chinchilla, it happened with LLaMa. This is an additional scaling factor that provides another source of advancement.

You ignore the fact that there are plenty of models that are not LLMs making progress on different tasks. Some, like Gato, are generalist AIs that can do hundreds of different complex tasks.

I can’t find any reference saying we are 7 orders of magnitude away from the complexity of a brain. We have neural networks with more parameters than there are neurons in a brain. A lot more. A biological neuron encodes more than an artificial one, but not seven orders of magnitude more.
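
Plugging in the commonly cited ballpark figures (these are order-of-magnitude estimates, not precise measurements) shows the gap is nowhere near 7 orders:

```python
import math

# Commonly cited ballpark figures -- assumptions, not exact counts.
neurons = 86e9        # human brain neurons
synapses = 1e14       # human brain synapses, order of magnitude
model_params = 175e9  # a GPT-3-scale dense model

print(f"{model_params / neurons:.1f}x the neuron count")
print(f"{math.log10(synapses / model_params):.1f} orders of magnitude "
      f"below the synapse count")
```

Even granting a synapse, rather than a neuron, as the unit of comparison, the gap is roughly 3 orders of magnitude, not 7.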

The rate of published AI research is rising literally exponentially. Another factor that accelerates progress.

I don’t care what you have written about programming; the statistics say AI can already write more than 50% of the code people write TODAY. It will only get better.

1

Cryptizard t1_ja618dl wrote

Reply to comment by Mason-B in So what should we do? by googoobah

You said Moore's law has been slowing for decades and would be the main bottleneck going forward. I showed you actual evidence that it has only very slightly started to slow since 2010, and somehow now that was your argument the whole time, lol.

You say that current AI is the same as it was 15 years ago (I am using your exact language here); I point out that transformers are very new and different; you say, oh, but those are 5 years old.

This is the definition of moving the goalposts. Like I said, you are not interested in an actual discussion, you want to stroke your ego. Well, you aren’t as smart as you think friend. Bye bye.

1

Cryptizard t1_ja5yk73 wrote

Reply to comment by Mason-B in So what should we do? by googoobah

It’s astonishing how you make like a dozen points and almost every single one of them is flat wrong. I don’t want to argue with you since it seems like you are not open to new information, but I will say this: Moore’s law has not been slowing down for decades; transformer/attention models are explicitly a new approach that made the current wave of AI possible and are unlike anything done before; and I am a computer science professor who programs all the time and am well-versed in what AI can and can’t do at the moment.

1

Cryptizard t1_ja5jkba wrote

Reply to comment by [deleted] in So what should we do? by googoobah

>it’s at least 60 years into the future.

With no argument, cool cool.

>We’re not in a courtroom, I don’t need to cite evidence

And I don't need anything to call you a dumb piece of shit with his head stuck up his ass. Miss me with your bullshit please.

1

Cryptizard t1_ja5d7l8 wrote

Reply to comment by ianitic in So what should we do? by googoobah

You can say that, but it doesn't make it true. The algorithms are extremely different. The attention/transformer model is what made all of this recent progress possible.
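
For anyone doubting that the core mechanism is genuinely different from what came before, here is a minimal sketch of the scaled dot-product attention at the heart of transformers (Vaswani et al., "Attention Is All You Need"), unbatched and single-head for clarity:

```python
import numpy as np

def attention(Q, K, V):
    """Scaled dot-product attention: every position attends to every
    other, with weights computed from content, not fixed connectivity."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)                     # (n, n) pairwise similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)    # softmax over keys
    return weights @ V                                # weighted mix of values

rng = np.random.default_rng(0)
n, d = 8, 16
out = attention(rng.normal(size=(n, d)),
                rng.normal(size=(n, d)),
                rng.normal(size=(n, d)))
print(out.shape)  # (8, 16)
```

Those content-dependent, all-pairs interactions are exactly what older recurrent and convolutional architectures lacked.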

3

Cryptizard t1_ja51wb0 wrote

Reply to comment by boersc in So what should we do? by googoobah

No, lol, you are completely bullshitting here. It is extremely different, even compared to a few years ago. The advent of the transformer model literally changed everything. That's not to say it is the only advancement, or even that it will ultimately be the thing that leads to AGI, but to claim that it is "not much different" is either uninformed or trolling.

0