No_Ninja3309_NoNoYes

No_Ninja3309_NoNoYes t1_jdmzvjk wrote

Yes, it's really easy apparently. You can take a basic image and change the ethnicity of the model in a sort of Yahoo Pipes UI. I don't have strong emotions about this. But people will lose jobs, or at least not get hired. We should do something out of solidarity, even if it doesn't seem like a big thing. I mean, before long there will be one guy posing to replace thousands of models. No more actors, no more artists, no more writers. Only Altman and the Microsoft cloud...

1

No_Ninja3309_NoNoYes t1_jdh070y wrote

  1. Radiate out from my home to neighboring cities, staying in a different one each week thanks to cheap rooms to rent or whatever.

  2. Avoid cold weather by adopting a temporary new base once a year.

  3. Play the guitar.

  4. Make websites for fun with GPT 12.

  5. Make simple games for fun.

  6. Learn languages.

  7. Play badminton or squash.

  8. Write stories with GPT 12.

  9. Make images for the stories in 8 with Stable Diffusion 12.

  10. Post generated content on websites made in 4.

  11. Brag about my content from 10 on social media.

  12. Make up new activities with GPT 12.

1

No_Ninja3309_NoNoYes t1_jd904so wrote

It depends on many factors. My own observations point to rural exodus. Inertia being what it is, it seems that the trend will continue.

It could be that we have a third option: AI nomads. Assuming swarm intelligence and the IoT merge, there's no reason to stay rooted. Uber and Airbnb can facilitate a caravan existence. You only need minimal hardware to work after all.

And if UBI becomes ubiquitous, there's no reason to limit yourself to a single location as your tribe can be anywhere and nowhere at the same time...

1

No_Ninja3309_NoNoYes t1_jd6vc1d wrote

There are trillion-parameter models that have been trained on vast oceans of data. But it is expensive to expose the general public to them. Large companies are a different story, however. We'll transition to swarm intelligence over a decade. At a certain point the data centers will reach a natural ceiling, but by then 4-bit quantized, optimized instances will have spread around the world.
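
For reference, a minimal sketch of what group-wise 4-bit quantization does to model weights. This is a toy illustration under my own assumptions, not any particular library's implementation; real tools also pack two 4-bit values per byte and use cleverer rounding.

```python
import numpy as np

# Toy group-wise, symmetric 4-bit quantization: each group of weights shares one
# float scale, and the weights themselves become 4-bit signed integers.
def quantize_4bit(weights, group_size=64):
    w = weights.reshape(-1, group_size)
    scale = np.abs(w).max(axis=1, keepdims=True) / 7.0   # signed 4-bit range is -8..7
    q = np.clip(np.round(w / scale), -8, 7).astype(np.int8)
    return q, scale

def dequantize_4bit(q, scale):
    return (q.astype(np.float32) * scale).reshape(-1)

weights = np.random.randn(1024).astype(np.float32)
q, scale = quantize_4bit(weights)
print("max abs error:", float(np.abs(dequantize_4bit(q, scale) - weights).max()))
```

The point is only that the stored model shrinks roughly 4x compared to fp16 at the cost of a small rounding error per weight, which is why quantized instances are cheap to spread around.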

0

No_Ninja3309_NoNoYes t1_jd39yow wrote

Billionaires say that they are humble and caring from their private islands and jets, but democracy gets in the way. People want to hang onto their jobs and way of life. They will vote for the party that would protect them. They will go on strike. Sounds inconvenient for the billionaires.

1

No_Ninja3309_NoNoYes t1_jcjf3je wrote

Apparently OpenAI reduced the GPT-4 cap from 100 to 50 messages. It's crashing all the time. Compared to Claude, the older version can't handle the instructions I gave it. But that could be my lack of prompt engineering skills. Open Assistant came out with a demo version. I haven't been able to play with it or Gerganov's project. There's just so much out there. FOMO is rising to peak levels!

13

No_Ninja3309_NoNoYes t1_jb8lduv wrote

There are many different types of roadblocks that could occur, with varying degrees of likelihood:

  1. Lack of data. Data has to be good and clean, and cleaning and manipulation take time. Purportedly Google research claims that compute and data have a linear relationship, but I think that they are wrong. Obviously, this is more of a gut feeling, yet IMO their conclusions were premature, based on too few data points, and self-serving.

  2. Backprop might not scale. The thing is that you go down, or back, to propagate errors and try to account for them. That's like that game some of you might have played where you whisper a word to someone and they pass it on. IMO this will not work for large projects.

  3. Network latency. As you add more machines, latency and Amdahl's law will limit progress. And of course hardware failure, round-off errors, and overflow can occur.

  4. Amount of information you can hold. Networks can compress information, but if you compress it too much, you will end up with bad results. There are exabytes of data on the Web. Processing it takes time, and with eight bytes or less per parameter, you could in theory have an exa-parameter model. In real life, however, that isn't practical. Somewhere along the path, probably at ten trillion parameters, networks will stop growing.

  5. Compute. Nvidia GPUs can do about 9 teraflops, so a trillion-parameter model would allow about nine evaluations per second (roughly one flop per parameter per forward pass; see the back-of-the-envelope sketch after this list). Training is orders of magnitude more intense. As the need for AI grows, supply and demand of compute will be mismatched. I mean, I was using three multi-billion-parameter models at the same time yesterday, and I was hungry for more. One of them was slow, the second gave insufficient output, and the third was hit and miss. If you upscale 10x, I think I would still want more.

  6. Energy requirements. With billions of simultaneous requests a second, you require a huge solar farm: maybe as many as seven solar panels per GPU, depending on conditions.

  7. Cost. GPUs could cost 40K each, and training GPT costs millions. With companies doing independent work, billions could be spent annually. Shareholders might prefer to use the money elsewhere. And it's not motivating for employees if the machines become the central part of a company.
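
The back-of-the-envelope sketch behind points 5 and 6. Every input here is a rough assumption for the sake of the estimate, not a measured figure:

```python
# Rough arithmetic behind the "nine evaluations per second" and "seven panels per GPU" claims.
gpu_flops = 9e12            # assumed ~9 teraflops of usable throughput per GPU
params = 1e12               # a trillion-parameter model
flops_per_eval = params     # assume roughly one flop per parameter per forward pass

print("forward passes per GPU per second:", gpu_flops / flops_per_eval)   # ~9.0

gpu_watts = 400             # assumed average power draw of one data-center GPU
panel_watts = 60            # assumed average (not peak) output of one solar panel
print("solar panels per GPU:", round(gpu_watts / panel_watts, 1))         # ~6.7
```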

3

No_Ninja3309_NoNoYes t1_jaq7gl3 wrote

You need exaflops, the equivalent of a million Nvidia GPUs. And the brain has to use less than a thousand watts. Even if you go full analog and hardwire the current architectures, you will not succeed. Massive trimming, low precision, and probably forward-forward instead of backprop are required. But that will likely only produce dumb robots with static microbrains.
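
Rough numbers behind that, as a hedged sketch; the per-GPU throughput and power draw are assumptions chosen for the order of magnitude, not spec-sheet values:

```python
# Why "exaflops" roughly means "a million GPUs", and why the power budget is the killer.
target_flops = 1e18     # one exaflop/s
gpu_flops = 1e12        # assumed usable throughput per GPU (order of magnitude)
gpu_watts = 300         # assumed power draw per GPU

gpus_needed = target_flops / gpu_flops
print("GPUs needed:", int(gpus_needed))                     # 1,000,000
print("total draw in MW:", gpus_needed * gpu_watts / 1e6)   # ~300 MW, vs under 1 kW for a brain
```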

It's much easier to train chimpanzees. Or create chimp cyborgs. Realistic spiking neural networks with neuromorphic hardware could get us robots, but it will take decades.

0

No_Ninja3309_NoNoYes t1_jacwzfq wrote

Purportedly Twitter has 20M LoC of Scala. Scala is a JVM language that is somewhat more concise than Java. IDK how much of that is unit tests, documentation, and acceptance tests. Anyway, style, programming language, and culture matter. Some coders can be verbose; others just want to get the job done. You can write unreadable code in any language. This is fine for small projects, because you can figure out what is going on through trial and error. For Twitter it will not work. The bigger the team, the clearer and more defensively you have to code. Defensive code is verbose, since you are checking for preconditions that might rarely occur. And some languages are more verbose than others.
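
To make the verbosity point concrete, a toy Python sketch; the function and its preconditions are hypothetical, not anything from Twitter's codebase:

```python
# Defensive code is verbose: the checks for conditions that "should never happen"
# outnumber the two lines of real work.
def transfer(accounts: dict, src: str, dst: str, amount: float) -> None:
    if src not in accounts:
        raise KeyError(f"unknown source account: {src!r}")
    if dst not in accounts:
        raise KeyError(f"unknown destination account: {dst!r}")
    if amount <= 0:
        raise ValueError(f"amount must be positive, got {amount}")
    if accounts[src] < amount:
        raise RuntimeError(f"insufficient funds in {src!r}")
    # The actual work:
    accounts[src] -= amount
    accounts[dst] += amount
```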

But anyway, no one codes bottom-up. You usually start with a global design and iterate multiple times, using mock-ups if something is still vague. I don't think your question has an answer right now. Someone has to try it and see what the issues are.

2

No_Ninja3309_NoNoYes t1_ja94ydf wrote

So obviously in hunter-gatherer societies, some were better than others at their job. In agricultural settings skills were not evenly distributed either. And most people want to take care of their offspring. Furthermore, having a clear heir, such as the first-born son, was preferable to democracy or whatever in certain circumstances. Plus, tradition, inertia, and belief in a mandate from heaven added to that. But on the whole, aristocrats are just people with nothing special about them.

Obviously you can spend your days studying and exercising or socializing. You only need to read a few chapters of War and Peace to get some other general ideas. Drinking, gambling, partying, and flirting make for a great book, but leaving a lasting legacy somehow sounds better. It could be AGI, ASI, a Dyson swarm, or something else entirely, though even that would be like a drop in the ocean compared to what ASI can do.

4

No_Ninja3309_NoNoYes t1_ja7roq9 wrote

But the prompt engineering jobs will be all over the place. AI is a black box, so there is still a lot of work to do. Besides, ChatGPT can't find all the information on the web yet. It can't decode images or videos. And the text it produces needs to be edited and checked, so there are jobs for editors and fact checkers. In the best case, we'll have UBI or a four-day work week in a decade. In the worst, the elite will replace as many people as they can with cloned cyborgs.

1

No_Ninja3309_NoNoYes t1_ja7om6u wrote

Some people want to leave a legacy, be it through ideas, passing on their genes, or building literal or metaphorical empires. So if humanity's legacy is AGI or ASI, what will AGI or ASI's legacy be? I hope it's not just getting really good at chess or Go.

Something intangible like style or personal preference seems fleeting. Brad Pitt is replaceable. Anyway, these kinds of preferences are subjective. But there must be something fundamental to certain works of art and stories, the ones that were made thousands of years ago.

And you need to consider how tastes and ideologies have evolved over time. Much of what was acceptable in Shakespeare's time is now unacceptable. ASI could play a role here. It could simulate human society through alternative futures. A crystal ball of possible tomorrows...

3

No_Ninja3309_NoNoYes t1_ja7ky3p wrote

This is the end-of-history illusion. I had it when I was eight, eighteen, and so on. I mean, witnessing the fall of communism, the birth of the Internet, and now AI is just shocking. In the long run, none of it matters. In a billion years, Earth will be uninhabitable. Billions of years later the Sun will die and the Andromeda galaxy will crash violently into the Milky Way. We are just tiny specks on a little sphere in the galaxy. At most we can produce ASI, and AFAIK even ASI can't get that far...

1

No_Ninja3309_NoNoYes t1_ja7f0sc wrote

I did an OpenGL course once but had to bail because of a more important project. Bought a book and attended classes. A friend of mine made a rough animation of several seconds without sound. I guess it is fun to do stuff like that. But what will the professionals do now? Maybe they will teach amateurs for a while. If teachers get replaced...

1

No_Ninja3309_NoNoYes t1_ja4yt8h wrote

Science fiction magazines are getting overwhelmed with short stories made by AI. They are more of a nuisance than something actually worth reading. Maybe in a decade this will change, but for now I don't think you can take it too seriously. And software development is more than just writing short, simple functions. You need to write test code and documentation. Usually you have to go through several iterations with unclear user stories. AI is currently not flexible enough to handle that.

5

No_Ninja3309_NoNoYes t1_ja3r1zw wrote

I have no PhD in economics, but it seems to me that Altman will say anything to attract new investors. What he says doesn't make sense to me either, and he might not really believe it himself. Anyway, having lots of personal robots like in a science fiction story won't be feasible for decades. IMO you can have several self-driving cars and simple robots, but nothing capable of replacing skilled workers.

Currently, deep learning systems are static, meaning that they are trained once and their parameters don't change. IMO that is not good enough. More realistic spiking neural networks are small because no one is that interested in them yet. SpiNNaker in Manchester can simulate about 8 million synapses. SpiNNaker 2, which TU Dresden is building, is ten times larger, but as I said they have a small budget. If they receive billions, and with a bit of luck other things improve too, we could get 80 billion to a trillion simulated synapses or more. Not enough for a full simulation of a brain, but maybe good enough for some of Altman's proposals.
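
For anyone unfamiliar with the term, a minimal sketch of a leaky integrate-and-fire neuron, the basic unit of a spiking network. The constants are arbitrary, and this is only an illustration of the idea, not what SpiNNaker actually runs:

```python
import numpy as np

# Minimal leaky integrate-and-fire neuron: the state evolves continuously in time,
# unlike a static, trained-once network.
def simulate_lif(input_current, dt=1e-3, tau=0.02, v_thresh=1.0, v_reset=0.0):
    v, spike_times = 0.0, []
    for step, i in enumerate(input_current):
        v += dt / tau * (i - v)        # membrane potential leaks toward the input
        if v >= v_thresh:              # crossing the threshold emits a spike...
            spike_times.append(step * dt)
            v = v_reset                # ...and resets the potential
    return spike_times

# A constant drive produces a regular spike train.
print(simulate_lif(np.full(1000, 1.5)))
```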

2

No_Ninja3309_NoNoYes t1_ja2w92l wrote

I haven't read the paper, but my friend Fred says that they used a simple model to decide what goes into the training data. That would explain the 10x smaller size. Or one of us misunderstood. I mean, in theory you could download the data and grep for whatever you are interested in, let's say psychology. Then get the code and GPUs in the cloud. You could crowdfund this if there's enough interest. I guess the more niche topics would also be the cheapest to do.
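
As a crude sketch of that grep-style filtering idea; the regex, helper, and sample corpus are made up here, not whatever the paper actually used:

```python
import re

# Keep only the documents that mention a niche topic (here: psychology-related terms),
# a stand-in for a "simple model" deciding what goes into the training data.
TOPIC = re.compile(r"\b(psycholog\w*|cognitive|behaviou?r\w*)\b", re.IGNORECASE)

def filter_corpus(docs):
    for doc in docs:
        if TOPIC.search(doc):
            yield doc

corpus = ["A paper on cognitive biases.", "A soup recipe.", "Notes on behaviourism."]
print(list(filter_corpus(corpus)))   # keeps the first and last document
```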

2

No_Ninja3309_NoNoYes t1_ja20lr5 wrote

My friend Fred says that programming languages will be 10x faster than now because they will have better compilers. I think graphene will arrive in computer chips. Some things will improve a lot, others less so. I am hoping for neuromorphic hardware and spiking neural networks in a decade, but we'll have to wait and see.

4

No_Ninja3309_NoNoYes t1_j9yarqg wrote

I don't think AGI will arrive before 2040. It could in theory, but if you extrapolate all the known data points, it's not likely. First, in terms of parameters, which is not the best of metrics, we are nowhere near the complexity of the human brain. Second, AI models are currently too static to be accepted as candidates for AGI.

Your reasoning reads as: 'We created a monster. The monster is afraid of us, so it kills us.' You can also say the opposite: people were afraid of Frankenstein's monster, so they killed him.

Prometheus stole fire from the gods and was punished for it. OpenAI brought us ChatGPT, and one day they will burn for it too. AGI/ASI either is a threat and smarter than us, or it isn't. If it is both, it could decide to prevent being attacked. But as I said, it would take decades to reach that point. And we might figure out in the future how to convince AGI/ASI that we're mostly harmless.

1