Nmanga90 t1_jddrubj wrote
Reply to comment by jenglasser in Bravery medals for women who raced into 'rough, crazy' surf to save drowning girls by Sariel007
Definitely also starts in the kitchen. Your ability to get more fit is pretty hard-limited by the amount of fat on your body.
Nmanga90 t1_jadrtdy wrote
Reply to comment by just-a-dreamer- in When will AI develop faster than white collar workers can reskill through education? by just-a-dreamer-
Depends on whether people are investing money in AI for that field, but yeah, that's generally the case.
Nmanga90 t1_jadq4eh wrote
Reply to When will AI develop faster than white collar workers can reskill through education? by just-a-dreamer-
Right now is that time. Time and cost are on a sliding scale with ML: the more money you commit, the faster you can train. As it is, an AI can be fine-tuned on basically the entirety of the world's knowledge of a specific subject in a month, given (relatively) significant monetary investment.
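To make the sliding scale concrete, here's a toy sketch (all numbers below are hypothetical, just for illustration): with a fixed budget of GPU-hours, wall-clock time falls roughly in proportion to how many GPUs you pay for.

```python
# Toy illustration: with a fixed compute budget in GPU-hours,
# wall-clock training time scales inversely with GPU count
# (assuming perfect parallel scaling, which real jobs never quite hit).

def training_days(total_gpu_hours: float, num_gpus: int) -> float:
    """Wall-clock days to finish the job."""
    return total_gpu_hours / num_gpus / 24

TOTAL = 1_000_000  # hypothetical GPU-hours for a large fine-tune

print(training_days(TOTAL, 128))   # ~325 days on 128 GPUs
print(training_days(TOTAL, 4096))  # ~10 days on 4096 GPUs
```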
Nmanga90 t1_jabgvv5 wrote
Reply to comment by Donkeytonkers in Snapchat is releasing its own AI chatbot powered by ChatGPT by nick7566
Idk if you know this, but Microsoft basically owns OpenAI (ChatGPT / GPT-3) already, and has for a while.
Nmanga90 t1_ja9ywnj wrote
Reply to comment by gcaussade in Leaked: $466B conglomerate Tencent has a team building a ChatGPT rival platform by zalivom1s
Well not necessarily though. This could be accomplished in 50 years without killing anyone. Demographic transition models only have relevance with respect to labor, but if the majority of labor was automated, it wouldn’t matter if everyone only had 1 kid.
Nmanga90 t1_j9dpcsa wrote
Reply to Would you play a videogame with AI advanced enough that the NPCs truly felt fear and pain when shot at? Why or why not? by MultiverseOfSanity
What the fuck? No. I hope this technology never comes to fruition
Nmanga90 t1_j8enw5m wrote
Heeeey you guys should try bringing down the price of a mid tier mobo from $270
Nmanga90 t1_j8ar7ic wrote
Reply to comment by Art10001 in Recursive self-improvement (intelligence explosion) cannot be far away by Kaarssteun
Not right now it won't. We already know of ways to improve AI, but we don't have the data that would let an AI improve itself. The only route would be generative design, which is by nature very wasteful and slow. Once it gets to a certain point, yes, but as of right now we are (relatively) far from that.
Nmanga90 t1_j87t99b wrote
It is, I promise. We have barely scratched the surface of what AI can do with human designs. Right now the main limitations are compute power and data. Any exploration into alternative architectures and whatnot comes with a massive opportunity cost because of these.
OpenAI alone has probably spent a billion on compute up to now. Insanity
I don't think you guys understand, but every single week we're improving by leaps and bounds with minor tweaks and modifications to existing architectures. It would be extremely inefficient to let the AI try to improve itself when we have almost-guaranteed improvement from humans, limited only by how much GPU we can muster.
Nmanga90 t1_j78du1g wrote
Reply to comment by Akimbo333 in Infinite police by crap_punchline
Just out of curiosity, what is your education on the subject? I find it kind of strange, or I guess inconsistent, that you're talking about multimodal LLMs and their necessity but don't know about OPT, InstructGPT, or why an instruct model would be better than a purely predictive one.
Nmanga90 t1_j785o7a wrote
Reply to comment by Akimbo333 in Infinite police by crap_punchline
Haha, AWS actually just released one of these 2 days ago that’s waaaaay smaller but actually outperforms GPT-3 on reasoning tasks.
Here is the link: https://arxiv.org/abs/2302.00923
Nmanga90 t1_j784rkz wrote
Reply to comment by Akimbo333 in Infinite police by crap_punchline
What exactly don’t you understand?
Following instructions makes it better because these models are by nature predictive. They don't understand what you are saying; they are built to predict the text that comes after the input. In effect, the models have an implicit prompt that says "what follows this input:". That is much less useful than following instructions, because in the real world there is less money/productivity to be gained by predicting the next text sequence and more to be gained by completing the tasks you ask for.
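A toy sketch of what "predict the next text" means — a tiny bigram model, nothing like a real LLM, and all the names here are my own. It can only continue text the way its training data did; it has no notion of following an instruction.

```python
from collections import defaultdict, Counter

# Minimal bigram "language model": it only learns which word tends to
# follow which, so all it can do is continue text, not obey instructions.
corpus = "the cat sat on the mat the cat ate the fish".split()

following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word: str) -> str:
    """Return the most frequent continuation seen in training."""
    return following[word].most_common(1)[0][0]

print(predict_next("the"))  # → "cat" (seen twice, vs "mat"/"fish" once)
```

Instruction tuning is what turns "continue this text" into "do what this text asks", which is where the real value is.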
Nmanga90 t1_j77zkpf wrote
Reply to comment by Akimbo333 in Infinite police by crap_punchline
InstructGPT is GPT-3 fine-tuned to follow instructions, and it's now the flagship GPT-3: the newest davinci model is InstructGPT. ChatGPT is based on InstructGPT and further fine-tuned for dialog.
Nmanga90 t1_j77vdw4 wrote
Reply to comment by Akimbo333 in Infinite police by crap_punchline
Not exactly, but close. ChatGPT is InstructGPT fine-tuned for dialog. You could make your own version, but it would be pretty expensive.
Nmanga90 t1_j77nr39 wrote
Reply to comment by Akimbo333 in Infinite police by crap_punchline
Like 6 months ago or so. They have also announced plans to open source the "instruct" models for OPT-30B and OPT-175B, I think in the next 2 months.
Nmanga90 t1_j75hc66 wrote
Reply to comment by Akimbo333 in Infinite police by crap_punchline
Meta has open-sourced replicas of GPT-3 (the OPT models) at sizes up to 175B parameters.
Nmanga90 t1_j742853 wrote
No, it's too late lol. It's advanced to the point where very accurate 3D models can be made from a couple of pictures. Look up NVIDIA Instant NeRF.
Nmanga90 t1_j6b2rkf wrote
Reply to comment by questionasker577 in Why did 2003 to 2013 feel like more progress than 2013 to 2023? by questionasker577
Definitely still an S curve. Looking back, progress was very, very slow until now; basically the invention of the transformer changed everything.
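The S curve here is the classic logistic shape: nearly flat at first, explosive in the middle, then saturating. A minimal sketch (parameters are arbitrary):

```python
import math

def logistic(t: float, ceiling: float = 1.0, rate: float = 1.0,
             midpoint: float = 0.0) -> float:
    """Classic S curve: slow start, rapid middle, plateau near the ceiling."""
    return ceiling / (1 + math.exp(-rate * (t - midpoint)))

# Early on, progress looks almost flat; near the midpoint it explodes.
print(round(logistic(-6), 4))  # ≈ 0.0025 (very slow)
print(round(logistic(0), 4))   # 0.5     (steepest growth)
print(round(logistic(6), 4))   # ≈ 0.9975 (saturating)
```

The argument in the comment is that we were far left of the midpoint until the transformer showed up.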
Nmanga90 t1_j5uolx8 wrote
Reply to [D]Are there any known AI systems today that are significantly more advanced than chatGPT ? by Xeiristotle
Google definitely does. ChatGPT is based on GPT-3 (175B), and Google has put out several models that outperform it — like 4 or 5, I think — and each of those significantly outperforms its predecessor.
Nmanga90 t1_j3xxwj6 wrote
Reply to comment by learningmoreandmore in [D] I want to use GPT-J-6B for my story-writing project but I have a few questions about it. by learningmoreandmore
Locally will not cut it unless you have a high-performance computer with lab-grade GPUs for inference. The reason these AI models are so expensive to use is that they are genuinely expensive to run. They are probably running 2 parallel copies of the model on a single A100, and have likely duplicated that setup 10,000 times. And an A100 is 10 grand used, 20 grand new. You can also rent them for about $2 per hour.
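Rough back-of-envelope on those numbers (the replica count and prices are the comment's own guesses, not measured figures):

```python
# Back-of-envelope serving-cost math using the guesses above:
# 2 model replicas per A100, ~10,000 A100s, $10k used / $20k new each.

a100_count = 10_000
replicas_per_gpu = 2
price_used, price_new = 10_000, 20_000

print(a100_count * replicas_per_gpu)    # 20,000 concurrent model replicas
print(f"${a100_count * price_used:,}")  # $100,000,000 in used GPUs
print(f"${a100_count * price_new:,}")   # $200,000,000 in new GPUs
```

Even at the used price, that's nine figures of hardware before you pay for power or networking — which is why "just run it locally" doesn't scale.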
Nmanga90 t1_j3xxi44 wrote
Reply to comment by learningmoreandmore in [D] I want to use GPT-J-6B for my story-writing project but I have a few questions about it. by learningmoreandmore
OpenAI is not going to close shop any time soon. Not sure if you know this, but Microsoft has been making huge investments in them and has licensing rights to the GPT models. So Microsoft is pretty much the one serving the APIs, and they are currently looking at making another 10-billion-dollar investment in OpenAI.
Nmanga90 t1_j233u80 wrote
Reply to comment by Tattoosnscars in Which celebrities, if they weren't famous, would have next to no luck on Tinder? by [deleted]
Nah bro she's extremely hot
Nmanga90 t1_jdeg4wz wrote
Reply to comment by Dorocche in Bravery medals for women who raced into 'rough, crazy' surf to save drowning girls by Sariel007
I'm talking about the average American, who is pushing 30% body fat. Likewise, terribly unhealthy rail-thin people are also limited by their body fat and need to make dietary changes.