Submitted by 10MinsForUsername t3_11b0na9 in technology
Comments
10MinsForUsername OP t1_j9v7zkl wrote
Meta actually has a decent portfolio of open source AI tools: https://github.com/orgs/facebookresearch/repositories?type=all
I personally like fairseq: https://github.com/facebookresearch/fairseq
Yes, Facebook is trash in terms of privacy, but that doesn't mean the company isn't doing actual scientific work.
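For context, a minimal sketch of what using one of those repos looks like, based on fairseq's torch.hub integration. The model name and options follow fairseq's published examples and may have changed between releases, so treat this as a sketch rather than a guaranteed recipe:

```python
import torch

# Load a pretrained fairseq translation model through torch.hub.
# Model name and options follow fairseq's published examples; they may
# differ across fairseq releases.
en2de = torch.hub.load(
    'pytorch/fairseq',
    'transformer.wmt19.en-de.single_model',
    tokenizer='moses',
    bpe='fastbpe',
)
en2de.eval()  # inference only

print(en2de.translate('Machine learning is changing everything.'))
```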
[deleted] t1_j9va2xt wrote
[deleted]
[deleted] t1_j9vbw94 wrote
[removed]
AutoModerator t1_j9vbwcx wrote
Unfortunately, this post has been removed. Facebook links are not allowed by /r/technology.
I am a bot, and this action was performed automatically. Please contact the moderators of this subreddit if you have any questions or concerns.
gullydowny t1_j9vc9y1 wrote
Here’s what it’s trained on - from their paper, which I can't link to because of arbitrary Reddit mod rules - but this looks like an extremely cool project.
CerebralBypass01 t1_j9ve3k4 wrote
Just a few TB of data? These are kindergarten numbers!
KarmaStrikesThrice t1_j9vgzt4 wrote
AI is not computationally demanding to run. Learning is the part that needs supercomputer-level resources for months and months, but once the neural network is trained, using it is quite simple. How else would ChatGPT be able to serve 100+ million users at once if each user required a whole GPU's worth of resources?
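Roughly, the asymmetry being described looks like this in PyTorch: training repeats forward and backward passes over huge datasets, while serving a trained model is a single gradient-free forward pass (a toy sketch, nothing like ChatGPT's actual serving stack):

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 10))
optimizer = torch.optim.Adam(model.parameters())
loss_fn = nn.CrossEntropyLoss()

# Training: many passes over the data, plus backprop and optimizer updates.
def train_step(x, y):
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()          # gradient computation dominates the cost
    optimizer.step()
    return loss.item()

# Inference: one forward pass, no gradients, and requests can be batched.
@torch.no_grad()
def predict(x):
    return model(x).argmax(dim=-1)

# Toy usage with random data.
x, y = torch.randn(32, 784), torch.randint(0, 10, (32,))
train_step(x, y)
print(predict(x[:4]))
```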
Adossi t1_j9viefs wrote
PyTorch? Hello?
[deleted] t1_j9w1tg4 wrote
[deleted]
[deleted] t1_j9wkh16 wrote
Most people here don’t know how valuable Meta has been to the tech industry. They are one of the reasons developers get paid amazingly well these days.
Additional-Escape498 t1_j9wsis0 wrote
The mods don’t let you link to arxiv on a technology subreddit?
gullydowny t1_j9wv4x5 wrote
It was a PDF, but the address was FB's CDN, so no go.
Divided_World t1_j9x467c wrote
Curious about developers being paid well because of them. Can you elaborate at all?
0382815 t1_j9x7rbi wrote
GPUs per user is lower than one, but ChatGPT definitely does not fit on just one GPU. I'm not sure I would call it simple.
SeaRollz t1_j9xdwwa wrote
Without them, I would not have gotten my first front end developer job!
HunterofSnowmen t1_j9xe9k9 wrote
Have you ever heard of a little something called PyTorch? Give them credit where credit is due.
malevolent_keyboard t1_j9xg8ia wrote
Probably missing some details, but back in the early-to-mid 2000s, most tech companies formed a not-so-secret-anymore pact to keep developer pay low. Facebook was the only company that said "not interested" and paid SWEs what all the companies knew they were worth. Then the other companies lost workers to Facebook for much higher pay and benefits, forcing them to follow suit. This was mostly Zuck's doing.
Cloudly-so t1_j9xsahx wrote
Will be very interesting to see whether development heads toward running the models locally (on mobile, PC, etc.) or toward the cloud.
It will vary by use case. Image generation, for example, fits into much smaller models than language does. The route it takes will affect the tech ecosystem in many ways, with someone like Apple benefitting much more from local models, and AWS, Azure, etc. benefitting from larger cloud-hosted models.
Vegan_Honk t1_j9xuuyo wrote
Going a little fast there guys. Almost like you're trying not to drown in this current market.
nicuramar t1_j9xvqbs wrote
> AI is not computationally demanding to run
ChatGPT kinda is, due to the size of the neural network. But it’s all relative, of course.
KarmaStrikesThrice t1_j9y13vs wrote
But is it the size that is limiting, or the performance? ChatGPT is definitely too huge for one GPU (even the A100 server GPUs with 80GB of memory), but once you connect enough GPUs to have the space available, I bet the performance is quite fast. It is similar to the human brain: it takes us days, weeks, or years to learn something, but we can then access it in a split second. The fastest supercomputers today have tens of thousands of GPUs, so if ChatGPT can have millions of users running it at the same time, one GPU can serve hundreds or thousands of users.
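To put rough numbers on the "too huge for one GPU" point, a back-of-the-envelope estimate (the 175B figure is GPT-3's published parameter count; ChatGPT's actual size and weight precision are assumptions here):

```python
# Back-of-the-envelope: how many 80 GB GPUs are needed just to hold the weights.
# 175B is GPT-3's published parameter count; ChatGPT's real size is not public.
params = 175e9
bytes_per_param = 2          # assuming fp16/bf16 weights
gpu_memory = 80e9            # one 80 GB A100, ignoring activations and KV cache

weight_bytes = params * bytes_per_param
print(f"Weights alone: {weight_bytes / 1e9:.0f} GB")              # ~350 GB
print(f"GPUs just for weights: {weight_bytes / gpu_memory:.1f}")  # ~4.4, so 5+ in practice
```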
freediverx01 t1_j9yb1ro wrote
Meta unveils _____ , which no one with any sense should consider using, given the company’s leadership, culture, and track record.
ActuatorMaterial2846 t1_j9ydjnf wrote
Is this to do with advancements in file compression? I heard Emad Mostaque talk about this regarding Stable Diffusion.
RuairiSpain t1_j9yf7g4 wrote
The model is huge though, and needs to be in GPU memory for the performance-critical calculations (sparse matrix dot products).
One thing teams are probably working on is reducing the dimensions of the sparse matrices so the model fits on fewer GPUs. They're also looking at reduced-precision floating point multiplication; 8-bit floats are probably enough for AI matrix math. Maybe also combining the matrix multiplication AND the activation function (typically ReLU or sigmoid) so the two math operations can be done in one pass through the GPU. That involves refactoring their math libraries.
Or they build custom TPUs with all of this baked into the hardware.
The future is bright 🌞 for AI. Until we hit the next brick wall
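A toy illustration of the two ideas above, reduced precision and fusing the matmul with the activation, in plain PyTorch (assumes PyTorch 2.x for torch.compile; production systems use hand-written fused kernels, this only sketches the intent):

```python
import torch
import torch.nn as nn

device = "cuda" if torch.cuda.is_available() else "cpu"
# Half precision generally wants a GPU; fall back to float32 on CPU.
dtype = torch.float16 if device == "cuda" else torch.float32

# One matmul followed by a ReLU activation, the pattern discussed above.
layer = nn.Sequential(nn.Linear(4096, 4096), nn.ReLU()).to(device=device, dtype=dtype)
x = torch.randn(8, 4096, device=device, dtype=dtype)

# Reduced precision cuts memory and memory traffic roughly in half (fp16 vs fp32);
# torch.compile can fuse the matmul and the pointwise activation into fewer kernel
# launches, which is the "one pass through the GPU" idea.
fused = torch.compile(layer)

with torch.no_grad():
    y = fused(x)
print(y.dtype, y.shape)
```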
[deleted] t1_j9ygd8v wrote
[removed]
Total_loss_2b_boss t1_j9yhs2w wrote
I know that Facebook's RoBERTa model (their take on BERT) is hugely useful in various AI tasks, but I didn't know that they were behind PyTorch.
Damn. PyTorch kind of matters a lot in AI. Like, a lot. All of the open source AI stuff I've been tinkering with uses PyTorch.
KarmaStrikesThrice t1_j9zvqll wrote
No, I meant it more generally. Neural networks don't contain any super complicated math or equations that are difficult to solve; it's a network of simple cells whose inputs are the outputs of the previous layer of cells, and whose outputs are fed to the next layer. A popular example of a cell is the perceptron, which computes a simple linear equation y = Ax + b. The main problem is the size of the network, which can be billions or even trillions of cells in the case of ChatGPT. But not all cells are always used; depending on the input, only some cells are active (the same way our brain does not activate the cells that learned math when we are asked what the capital of New York state is, for example).
So the most computationally difficult part is learning, and then having enough memory to store the whole network in fast memory; the AI doesn't know what you are about to ask it, so the whole network needs to be ready. But once we ask a specific question, like "are cats carnivores?", 99.99...% of cells remain inactive and only those storing information about biology, mammals, cats, food, meat, diets, carnivores, etc. are engaged and produce the answer. So extracting the output for a given input is much simpler and could be done by personal computers (if our computers had many terabytes or petabytes of RAM and storage, which they don't).
The advanced compression algorithms reduce the memory required to store the network, but they don't really improve performance aside from some minor cache optimizations.
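The perceptron-style cell described above, written out as a couple of stacked layers (a toy sketch; real transformer layers add attention and much more, but the per-cell math really is this simple):

```python
import numpy as np

def layer(x, A, b):
    """One layer of simple cells: the linear map y = Ax + b, then a ReLU nonlinearity."""
    return np.maximum(A @ x + b, 0.0)   # cells with non-positive input stay inactive

rng = np.random.default_rng(0)
x = rng.standard_normal(8)              # outputs of the previous layer of cells

# Two small layers chained together: each layer's outputs feed the next layer's inputs.
A1, b1 = rng.standard_normal((16, 8)), rng.standard_normal(16)
A2, b2 = rng.standard_normal((4, 16)), rng.standard_normal(4)

y = layer(layer(x, A1, b1), A2, b2)
print(y)
```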
capybooya t1_j9zzd1z wrote
That's less data than some people's cat pictures collections.
namastayhom33 t1_j9v7d46 wrote
Oh great, Meta and AI, what could go wrong?