Hands0L0
Hands0L0 t1_jdsvv9h wrote
Reply to [D] Build a ChatGPT from zero by manuelfraile
You and everyone else here
Hands0L0 t1_jdrrtem wrote
Reply to comment by [deleted] in An 'extremely dangerous tornado' strikes Georgia as 20 million Southerners are at risk of treacherous weather Sunday by xdeltax97
Not funny
Hands0L0 t1_jcknz03 wrote
Reply to comment by Akimbo333 in Those who know... by Destiny_Knight
Study CS and come up with a solution and you can be very rich
Hands0L0 t1_jckbm7h wrote
Reply to comment by Akimbo333 in Those who know... by Destiny_Knight
No, because the model needs the entire conversation history as context to predict what to say next, and the only way to store that history is in RAM. If you run out of RAM, you run out of room for returns.
Hands0L0 t1_jck7ifi wrote
Reply to comment by Akimbo333 in Those who know... by Destiny_Knight
Not if there is a token limit.
I'm sorry, I don't think I was being clear. The token limit is tied to VRAM. You can load the 30B model on a 3090, but it swallows up 20 of the 24 GB of VRAM for the model and prompt alone. That leaves you 4 GB for returns.
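The budget above can be sketched with back-of-the-envelope arithmetic. The 4-bit weight size and the overhead figures here are my assumptions for a 3090-class card, not measured numbers:

```python
# Back-of-the-envelope VRAM budget for running a local LLM.
# All figures are illustrative assumptions, not measurements.

def vram_left_gb(params_billions: float, bits_per_weight: int, card_gb: float) -> float:
    """VRAM remaining after loading the weights alone."""
    # params (1e9) * bytes-per-weight gives roughly gigabytes.
    weights_gb = params_billions * bits_per_weight / 8
    return card_gb - weights_gb

# A 30B model quantized to 4 bits needs ~15 GB for weights alone; the
# prompt's KV cache and runtime overhead push real usage toward ~20 GB
# on a 24 GB 3090, leaving only a few GB of headroom for generation.
headroom = vram_left_gb(30, 4, 24)  # 9.0 GB before cache/overhead
```

At 16-bit precision the same model wouldn't fit on the card at all, which is why quantization matters for local use.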
Hands0L0 t1_jck5cyd wrote
Reply to comment by Akimbo333 in Those who know... by Destiny_Knight
Considering you can explore context with ChatGPT and Bing through multiple returns, not exactly. You need to hit it on your first attempt.
Hands0L0 t1_jck3lfv wrote
Reply to comment by Akimbo333 in Those who know... by Destiny_Knight
Depends on prompt size, which is going to dictate the quality of the return. 300 tokens?
Hands0L0 t1_jck1yvf wrote
Reply to comment by Akimbo333 in Those who know... by Destiny_Knight
I've got a 30B model running on a 3090 machine, but the token return is very limited.
Hands0L0 t1_jck1kg0 wrote
Reply to comment by liright in Those who know... by Destiny_Knight
Llama is an LLM that you can download and run on your own hardware.
Alpaca is, apparently, a modification of the 7B version of Llama that is as strong as GPT-3.
This bodes well for running your own unfiltered LLM locally. But there's still plenty of room for progress.
Hands0L0 t1_j9r49i6 wrote
Reply to comment by Nano-Brain in Is ASI An Inevitability Or A Potential Impossibility? by AnakinRagnarsson66
I think you may be overstating human creativity. There are plenty of visionaries among us who create new concepts, but the vast majority of us are -boring-. We share the same memes, and when we try to make our own they fall flat. How many people do you know who have tried to write a book, only for it to end up rife with established tropes? How many hit songs use the same four-chord progression? When was the last time you experienced something -truly- unique? It's been a long time for me, that's for sure.
So I don't think "making something totally unique" is the best metric for AGI. Being able to infer things? That's where I'm at. But I'm not an expert, so don't take what I'm claiming as gospel
Hands0L0 t1_j9qy2ad wrote
Reply to comment by Nano-Brain in Is ASI An Inevitability Or A Potential Impossibility? by AnakinRagnarsson66
I mean, not every human has the creativity to create new things. But that doesn't mean they aren't intelligent
Hands0L0 t1_j9pzdya wrote
Reply to comment by AnakinRagnarsson66 in Is ASI An Inevitability Or A Potential Impossibility? by AnakinRagnarsson66
I feel like the best feasible metric I can think of is this: show an AI a video without dialogue, where all of the concepts are delivered strictly by how the human actors interact. If the AI can tell you all about the video in precise detail, we're right there. I honestly think this isn't very far off (10-20 years). There are plenty of Python APIs that can detect what objects are in live video; the next step is understanding interactions. Once it can comprehend something that it itself can't ever reproduce, AGI is imminent.
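To illustrate the gap being described: per-frame object detection is the easy, largely solved part, while inferring interactions is the open problem. The `detect()` function below is a hypothetical stub standing in for a real model (e.g. a YOLO-style detector), and box overlap is a deliberately crude stand-in for "interaction":

```python
# Sketch of the video-understanding gap. detect() is a hypothetical
# stub, not a real API; a real detector would run a trained model on
# each frame and return (label, bounding_box) pairs.

def detect(frame):
    """Stub: pretend the detector found a person and a dog in this frame."""
    return [("person", (10, 10, 50, 120)), ("dog", (40, 40, 90, 80))]

def boxes_overlap(a, b):
    """Crude interaction cue: do two axis-aligned boxes intersect?"""
    ax1, ay1, ax2, ay2 = a
    bx1, by1, bx2, by2 = b
    return ax1 < bx2 and bx1 < ax2 and ay1 < by2 and by1 < ay2

def interactions(frame):
    """Pairs of labels whose boxes overlap -- a placeholder for the much
    harder task of describing what the actors are actually doing."""
    objs = detect(frame)
    return [(l1, l2)
            for i, (l1, b1) in enumerate(objs)
            for l2, b2 in objs[i + 1:]
            if boxes_overlap(b1, b2)]
```

Everything up to `detect()` exists today; turning "these boxes overlap" into "a person is walking a dog" is the comprehension step the comment is pointing at.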
Hands0L0 t1_j9jqo4a wrote
Reply to comment by fumblesmcdrum in A German AI startup just might have a GPT-4 competitor this year. It is 300 billion parameters model by Dr_Singularity
Fuck dude, that's clever
Hands0L0 t1_j9i277j wrote
Reply to comment by drekmonger in A German AI startup just might have a GPT-4 competitor this year. It is 300 billion parameters model by Dr_Singularity
I for one welcome competition in the race to AGI
Hands0L0 t1_j9b9dwi wrote
Reply to Just 50 days into 2023 and there's so much AI development. Compiled a list of the top headlines. by cbsudux
A lot of these are not AI development. Disappointing if this is all that is measured as progress.
Hands0L0 t1_j8ll9y4 wrote
Reply to comment by intersecting_lines in Romania detects suspicious weather balloon in its airspace, ministry says by itsfinenevermind
The Rivet Joint was probably observing the Ukraine War, no?
Hands0L0 t1_j7pdcrw wrote
Reply to comment by grossexistence in AI Progress of February Week 1 (1-7 Feb) by Pro_RazE
Not until we understand our own brains
Hands0L0 t1_j01xkig wrote
I think you're going to see a lot of companies releasing AI software that helps you do your job. It will provide suggestions, but the AI won't be able to do the job for you. Let's say someone sends you an e-mail asking about a project. The AI will be able to read the e-mail, look at your calendar and the details of the project, and suggest an e-mail response.
Hands0L0 t1_ixf8kbo wrote
Reply to comment by BinyaminDelta in Neuralink Co-Founder Unveils Rival Company That Won't Force Patients To Drill Holes in Their Skull by Economy_Variation365
Drill a hole in my head
Hands0L0 t1_iud89h0 wrote
Reply to comment by Down_The_Rabbithole in Experts: 90% of Online Content Will Be AI-Generated by 2026 by PrivateLudo
This
Hands0L0 t1_iuc3tg9 wrote
Reply to comment by PM_ME_UR_ETHDONATION in Experts: 90% of Online Content Will Be AI-Generated by 2026 by PrivateLudo
I mean, I would have to do some editing, sure. But the actual art? I can command a computer to produce whatever I want.
Hands0L0 t1_iuavjc5 wrote
I've been toying around with Stable Diffusion, and I feel like I could make my own manga with some of the models that are out there, completely auto-generated. Like, the quality AI art is pumping out with simple text prompts and weight editing is insane.
Hands0L0 t1_je6o5t5 wrote
Reply to comment by Im_Unlucky in [D] The best way to train an LLM on company data by jaxolingo
Off topic but I love how underpants gnomes memes are still relevant 25 years later