Scarlet_pot2
Scarlet_pot2 OP t1_jedbizi wrote
Reply to comment by tiselo3655necktaicom in It's unfortunate that AI can only be developed by large, well funded groups by Scarlet_pot2
you can't make a half-decent argument so you resort to insults and running away lmao. Very annoying type of person.
Scarlet_pot2 OP t1_jedadfu wrote
Reply to comment by tiselo3655necktaicom in It's unfortunate that AI can only be developed by large, well funded groups by Scarlet_pot2
talking to you is like a brick wall. I'm done. Keep idolizing rich people with your false narratives.
Yeah, I'm sure the first person to learn how to raise crops was drowning in wealth. I'm sure the first person to make a bow was somehow wealthy, lmao. I'm sure the wealthy king walked into the blacksmith's place one day and just figured out how to build chainmail. The person who invented the wheel had so much wealth he didn't even need to get up if he didn't want to. All sarcasm. This belief you have is illogical.
In reality, most advancements were made by regular people, very poor by modern standards, just trying to improve their lives, or discovering things by accident, or in other ways.
Scarlet_pot2 OP t1_jed9mxw wrote
Reply to comment by NakedMuffin4403 in It's unfortunate that AI can only be developed by large, well funded groups by Scarlet_pot2
I see your point about tailoring foundational models. The problem is: do you think companies like OpenAI and Google are going to allow regular people to tailor-train their models however they want? It's debatable. Even in the best case, the corps will still put some restrictions on what and how the models are tailor-trained.
The best way to get around this is to have open-source foundational models. To do this you need available compute (people donating compute over the internet) and free training (free resources and groups to learn together). I'm sure tailoring corporate models will play a role, but if we want true decentralization we should approach it from all angles.
Scarlet_pot2 OP t1_jed8dir wrote
Reply to comment by tiselo3655necktaicom in It's unfortunate that AI can only be developed by large, well funded groups by Scarlet_pot2
These articles are talking about our modern society. Our technology is at the point where it takes a lot of effort to make modest improvements (in most areas). For most of history, innovations didn't cost much, like learning how to make a bow or how to smith metal. If you think all inventions were made by wealthy people, you are delusional. It wasn't the king who learned how to make chainmail armor, and it wasn't the noble who learned how to raise bigger crops.
P.S. Your insults don't help your point at all.
Scarlet_pot2 OP t1_jed7tts wrote
Reply to comment by smokingthatosamapack in It's unfortunate that AI can only be developed by large, well funded groups by Scarlet_pot2
Fine-tuning isn't the problem. If you look at the Alpaca paper, they fine-tuned the LLaMA 7B model on GPT-3 outputs and achieved comparable results for only a few hundred dollars. The real cost is the base training of the model, which can be very expensive. Having enough compute to run it afterward is an issue too.
Both problems could be helped if there were a free online system to donate compute that anyone was allowed to use.
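To put rough numbers on why base training is the expensive part: a common rule of thumb from the scaling-laws literature is that training takes about 6 × parameters × tokens FLOPs. A back-of-envelope sketch (the GPU throughput and hourly price here are illustrative assumptions, not real quotes):

```python
# Rough back-of-envelope estimate of base-training cost.
# Rule of thumb: training FLOPs ~= 6 * parameters * training tokens.

def training_cost_usd(params, tokens, flops_per_sec=1e14, usd_per_gpu_hour=2.0):
    """Estimate training cost. flops_per_sec is the assumed *effective*
    sustained throughput of one GPU (~100 TFLOP/s, an illustrative figure)."""
    total_flops = 6 * params * tokens
    gpu_seconds = total_flops / flops_per_sec
    gpu_hours = gpu_seconds / 3600
    return gpu_hours * usd_per_gpu_hour

# A 7B-parameter model trained on 1T tokens (roughly LLaMA-7B scale):
cost = training_cost_usd(7e9, 1e12)
print(f"~${cost:,.0f}")  # hundreds of thousands of dollars, not hundreds
```

Fine-tuning only touches a tiny fraction of that compute, which is why Alpaca-style runs land in the hundreds-of-dollars range while base training doesn't.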
Scarlet_pot2 OP t1_jed747y wrote
Reply to comment by tiselo3655necktaicom in It's unfortunate that AI can only be developed by large, well funded groups by Scarlet_pot2
Okay, now that's just incorrect. Most human innovations were made by small groups or even a single person, without much capital. Think of the wheel, agriculture, electricity, the light bulb, the first planes, Windows. The list goes on and on.
It's only recently that it takes super teams and large capital to make these innovations. I'm saying we should crowdsource funds, with free resources to learn from together, donated compute, etc. It's totally possible, but modern people aren't very good at forming groups. Maybe it's because people are too tired from work, or they have become much less social. Whatever the reason, we could still improve AI progress and decentralize AI if people learned to talk and collaborate again.
Scarlet_pot2 OP t1_jed67k5 wrote
Reply to comment by TheKnifeOfLight in It's unfortunate that AI can only be developed by large, well funded groups by Scarlet_pot2
True, Alpaca is competent, but we need more models, and better and larger ones. A distributed system where people donate compute could also be used to let people run larger models. Maybe not 175 billion parameters, but 50-100B, as long as everyone donating compute isn't using it at the same time.
That being said, more smaller models like Alpaca / LLaMA are needed too. If we made sufficient resources and training available to anyone, models like that could be created and released more often.
Scarlet_pot2 OP t1_jed2mpi wrote
Reply to comment by IronJackk in It's unfortunate that AI can only be developed by large, well funded groups by Scarlet_pot2
As long as I'd have unrestricted access to the latest advanced models, I wouldn't care. That's the real goal IMO: the most advanced access for everyone.
also nice LOTR reference lmao
Scarlet_pot2 OP t1_jed1kio wrote
Reply to comment by TemetN in It's unfortunate that AI can only be developed by large, well funded groups by Scarlet_pot2
That's definitely a positive move. The only issue is that people at LAION will probably decide who gets access and when. Still much better than corps or gov tho, but more projects would be good. Maybe a distributed training network where people could contribute compute over the internet? Along with a push to give anyone who wants it free training in ML / AI. Those two things would help decentralize AI.
Scarlet_pot2 OP t1_jed0f3l wrote
Reply to comment by TemetN in It's unfortunate that AI can only be developed by large, well funded groups by Scarlet_pot2
A public project would be great. I'm sure there are thousands of people willing to get involved. We probably have at least a few hundred on this sub. The main thing would be to get organized and spread the word, which is ironically difficult in this age of the internet
Scarlet_pot2 t1_jecynrd wrote
The individualist, capitalist way of life in the west is damaging mental health. Tiktok is just a symptom, not the cause.
Submitted by Scarlet_pot2 t3_1277dw4 in singularity
Scarlet_pot2 t1_je9aq1r wrote
Let's find out. Train a small model and fine-tune it on GPT-3 / 3.5 / 4 outputs.
Scarlet_pot2 t1_je937zq wrote
Going from scratch to having a model takes six steps. The first step is data gathering: there are huge open-source datasets available, such as The Pile by EleutherAI. The second step is data cleaning, which is basically preparing the data to be trained on. The third step is designing the architecture: the advanced AI models we know of are all based on the transformer architecture, a type of neural network. The paper "Attention Is All You Need" explains how to design a basic transformer. There have been improvements since, so more papers would need to be read if you want a very good model.
The fourth step is to train the model: the architecture designed in step three is trained on the data from steps one and two. You need GPUs to do this. It's automatic once you start it; just wait until it's done.
Now you have a baseline AI. The fifth step is fine-tuning the model. You can fine-tune your model on outputs from a more advanced model to improve it, as shown by the Alpaca paper a few weeks ago. After that, the sixth step is RLHF. This can be done by people without technical knowledge: the model is asked a question (by the user or auto-generated), it produces multiple answers, and the user ranks them from worst to best. This teaches the model which answers are good and which aren't. It's basically aligning the model.
After those six steps, you have a finished AI model.
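The core operation behind the transformer architecture in step three, from "Attention Is All You Need", is scaled dot-product attention. A minimal NumPy sketch with toy sizes and no learned weights, just to show the mechanism:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V  --  the core op
    of the transformer from "Attention Is All You Need"."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)               # similarity of each query to each key
    scores -= scores.max(axis=-1, keepdims=True)  # subtract row max for numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over keys
    return weights @ V                            # weighted mix of value vectors

# Toy example: 4 tokens, model dimension 8, self-attention (Q = K = V)
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))
out = scaled_dot_product_attention(x, x, x)
print(out.shape)  # (4, 8) -- one mixed vector per token
```

A real model wraps this in learned projection matrices, multiple heads, and stacked layers, but every transformer block is built around this one function.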
Scarlet_pot2 t1_je92iud wrote
Reply to comment by ActuatorMaterial2846 in When people refer to “training” an AI, what does that actually mean? by Not-Banksy
Most of this is precise and correct, but it sounds like you're saying the transformer architecture is the GPUs? The transformer architecture is the neural network and how it's structured. It's code. The paper "Attention Is All You Need" describes how the transformer architecture is built.
After you have the transformer written out, you train it on GPUs using the data you gathered. Free large datasets such as The Pile by EleutherAI can be used for training. This part is automatic.
The human-involved parts are the data gathering, data cleaning, and designing the architecture before training; afterward, humans do fine-tuning / RLHF (reinforcement learning from human feedback).
Those are the six steps. Making an AI model can seem hard, like magic, but it can be broken down into manageable steps. It's doable, especially if you have a group of people who specialize in the different steps: maybe someone who's good with the data aspects, someone good at writing the architecture, someone good with fine-tuning, and some people to do RLHF.
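The RLHF part, where a non-technical user ranks several answers from worst to best, is typically converted into pairwise preference data for training a reward model. A toy sketch of that conversion (pure Python; the example answers are just illustrative):

```python
from itertools import combinations

def ranking_to_preference_pairs(answers_best_to_worst):
    """Turn one human ranking (best answer first) into (preferred, rejected)
    pairs, the format commonly used to train a reward model."""
    pairs = []
    for better, worse in combinations(answers_best_to_worst, 2):
        pairs.append((better, worse))  # each answer beats everything ranked below it
    return pairs

ranked = ["clear, correct answer", "partially correct answer", "off-topic answer"]
for preferred, rejected in ranking_to_preference_pairs(ranked):
    print(f"prefer: {preferred!r}  over: {rejected!r}")
```

One ranking of n answers yields n·(n-1)/2 training pairs, which is why ranking is a cheap way for non-experts to generate a lot of alignment signal.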
Scarlet_pot2 t1_je72830 wrote
I'm interested in listening! I'm a software dev student with a basic/minimal understanding of ML and AI
Scarlet_pot2 t1_je4nr1h wrote
Reply to comment by signed7 in Open letter calling for Pause on Giant AI experiments such as GPT4 included lots of fake signatures by Neurogence
Not the DeepMind CEO... Google isn't that surprising; they're lagging compared to the big two.
Scarlet_pot2 t1_je4k2jx wrote
Reply to Open letter calling for Pause on Giant AI experiments such as GPT4 included lots of fake signatures by Neurogence
No one from OpenAI or DeepMind signed it, and neither did Microsoft's CEO. I'm interpreting this letter as the others telling the big players "slow down so we can catch up and get a piece of the pie."
Scarlet_pot2 t1_je4ia9h wrote
Reply to Open letter calling for Pause on Giant AI experiments such as GPT4 included lots of fake signatures by Neurogence
It's real for Gary Marcus; check his Twitter. A comment said they verified it's true for Musk and Emad too.
Scarlet_pot2 t1_jdgu2j6 wrote
Reply to How will you spend your time if/when AGI means you no longer have to work for a living (but you still have your basic needs met such as housing, food etc..)? by DreaminDemon177
Do I have access to an ASI? If I don't, then I would spend time trying to build one. Of course I'd spend most of my time on family, friends, and life, but as a job replacement I'd focus on AI R&D.
Scarlet_pot2 OP t1_jcer5wh wrote
Reply to comment by ecnecn in Can you use GPT-4 to make money automatically? by Scarlet_pot2
Fiverr was just an example; another would be TaskRabbit, or whatever else you can come up with. Use your imagination. GPT-4 has many capabilities that are profitable; it's just a matter of finding a way to implement them.
Submitted by Scarlet_pot2 t3_11so371 in singularity
Scarlet_pot2 t1_ja7bnjf wrote
Reply to comment by pnartG in Weird feeling about AI, need find ig somebody has same feeling by polda604
That's when UBI comes in.
Scarlet_pot2 t1_ja67g8l wrote
I'm 25 and a programmer also. You should be happy AI is becoming competent at programming. If you learn to use those AIs as tools, it will be easier to bring your ideas to fruition. They can increase your efficiency.
Scarlet_pot2 OP t1_jedrsmn wrote
Reply to comment by tiselo3655necktaicom in It's unfortunate that AI can only be developed by large, well funded groups by Scarlet_pot2
you are such a sad person. Your life is so sad that you have to insult strangers on the internet to make yourself feel better. And you're so low-IQ you can't even form a coherent argument. Shut up and go back to your 9-5 restaurant job, reddit loser.
Also: anyone can link a few irrelevant articles. You linked ones that have no relation to the topic at hand, but you're too brain-dead to actually comprehend that.
Take your sausage fingers off the keyboard and go learn common sense.
And lose some weight while you're at it.