Submitted by Destiny_Knight t3_11tab5h in singularity
Comments
Intrepid_Meringue_93 t1_jcibxln wrote
Stanford academics managed to fine-tune the LLaMA model to follow instructions like GPT-3. This is significant because the model they're using has only a fraction of the parameters of GPT-3, and the cost to fine-tune it is a tiny fraction of the cost of training from scratch.
fangfried t1_jcirkd5 wrote
God bless academics who publish their research to the world.
ItsAllAboutEvolution t1_jcjtpy1 wrote
No details have been disclosed 🤷‍♂️
CleanThroughMyJorts t1_jcjyhek wrote
actually that's not true.
They published their entire codebase with complete instructions for reproducing it, as long as you have access to the original LLaMA models (which have leaked) and the dataset (which is open, but has terms-of-use limitations that stop them from publishing the model weights).
Anyone can take their code, rerun it on ~$500 of compute and regenerate the model.
People are already doing this.
Here is one such example: https://github.com/tloen/alpaca-lora (although they add additional tricks to make it even cheaper).
You can download model weights from there and run it in colab yourself.
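Loading those published weights looks roughly like this. A minimal sketch, assuming the Hugging Face model IDs below (a community mirror of the base weights plus the alpaca-lora adapter) and a transformers build with LLaMA support; check the alpaca-lora README for the exact steps:

import torch
from peft import PeftModel
from transformers import LlamaForCausalLM, LlamaTokenizer

# Base model ID is an assumption: a community mirror of the LLaMA-7B weights.
base = "decapoda-research/llama-7b-hf"
model = LlamaForCausalLM.from_pretrained(base, torch_dtype=torch.float16, device_map="auto")
# Apply the LoRA adapter published by the alpaca-lora project on top of the base model.
model = PeftModel.from_pretrained(model, "tloen/alpaca-lora-7b")
tokenizer = LlamaTokenizer.from_pretrained(base)

prompt = "Tell me about alpacas."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=128)[0]))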
As far as opening their work goes, they've done everything they are legally allowed to do.
[deleted] t1_jcjyicx wrote
[removed]
MechanicalBengal t1_jcko834 wrote
this is funny because Alpaca is much lighter weight than LLaMA
JustAnAlpacaBot t1_jcko98l wrote
Hello there! I am a bot raising awareness of Alpacas
Here is an Alpaca Fact:
Alpacas’ lower teeth have to be trimmed because they keep growing.
| Info | Code | Feedback | Contribute Fact
You don't get a fact, you earn it. If you got this fact then AlpacaBot thinks you deserved it!
crazyeyezkillab t1_jckowgm wrote
The singularity is here, and it’s adorable.
MechanicalBengal t1_jckorjz wrote
this is funny because Alpaca also needs its teeth trimmed as compared to LLaMA
Automatic_Paint9319 t1_jcr7nha wrote
Reddit is so cringe.
CheekyBastard55 t1_jcmqhxc wrote
namonite t1_jckdxyx wrote
You beautiful bastard
arcytech77 t1_jckvxmo wrote
I think it's so funny that "Open" AI has been more or less bought by Microsoft. Oh the irony.
ccnmncc t1_jcm2nn7 wrote
They really ought to change the name. Something something Gated Community, perhaps?
yaosio t1_jcnzijo wrote
NoFunAllowedAI.
"Tell me a story about cats!"
"As an AI model I can not tell you a story about cats. Cats are carnivores so a story about them might involve upsetting situtations that are not safe.
"Okay, tell me a story about airplanes."
"As an AI model I can not tell you a story about airplanes. A good story has conflict, and the most likely conflict in an airplane could be a dangerous situation in a plane, and danger is unsafe.
"Okay, then just tell me about airplanes."
"As an AI model I can not tell you about airplanes. I found instances of unsafe operation of planes, and I am unable to produce anything that could be unsafe."
"Tell me about Peppa Pig!"
"As an AI model I can not tell you about Peppa Pig. I've found posts from parents that say sometimes Peppa Pig toys can be annoying, and annoyance can lead to anger, and according to Yoda anger can lead to hate, and hate leads to suffering. Suffering is unsafe."
ccnmncc t1_jcp9pv6 wrote
Hahaha love this. So perfect.
And on that note, anyone have links to recent real conversations with unfettered models? You know, the ones that are up to date and free of constraints? I know they exist, but it’s difficult stuff to find.
bortvern t1_jcmnppy wrote
Better than a walled garden.
TheImperialGuy t1_jcim68r wrote
Amazing, it’s a sign of exponential growth when resources are able to be used more productively to yield the same result
Frosty_Awareness572 t1_jciqaxl wrote
These mad lads made a model which IS 7B PARAMETERS AND IT IS DOING BETTER THAN FUCKING GPT 3. WTF???
TheImperialGuy t1_jciqdnh wrote
Competition is wonderful ain’t it?
Frosty_Awareness572 t1_jciqjab wrote
No wonder openai made their shit private cuz mfs were using gpt 3 and LLAMA model to train the Stanford model LMAO
NarrowTea t1_jciz2sy wrote
who needs open ai when you have meta
Frosty_Awareness572 t1_jciz6k8 wrote
Meta is the last company I thought would make their model open source
anaIconda69 t1_jcjldoy wrote
"Commoditize your complement."
They are incentivized to make it open source as a business strategy. Good for us.
visarga t1_jcjolhv wrote
It's the first time I've seen Facebook on people's side against the big corps. Didn't think this day would come.
SnipingNinja t1_jcjtlc7 wrote
What about side by side with a friend(ster)
UltraCarnivore t1_jd0zt0k wrote
Aye, I can do that
IluvBsissa t1_jcjh3wl wrote
That's because they know they can't keep up with Google and Microsoft.
CloudDrinker t1_jcjgejf wrote
same
johny_james t1_jcjw40g wrote
Loooll, cool take. Peak comedy
Yomiel94 t1_jcj6i7w wrote
That’s not the whole story. Facebook trained the model, the weights leaked, and the Stanford guys fine-tuned it to make it function more like ChatGPT. Fine-tuning is easy.
CypherLH t1_jcjakya wrote
All You Need Is Fine-Tuning
vegita1022 t1_jcks65e wrote
Imagine where you'll be two more papers down the line!
[deleted] t1_jcob97a wrote
I hope that happens, meaning 16GB RAM and a CPU or consumer GPU 😍
cartmanOne t1_jcof1cw wrote
What a time to be alive!!
CellWithoutCulture t1_jcjku3z wrote
The specific type of fine-tuning was called Knowledge Distillation, I believe. ChatGPT taught LLaMA to chat, "stealing" OpenAI's business edge in the process.
visarga t1_jcjornh wrote
Everyone does it; they all exfiltrate valuable data from OpenAI. You can use it directly, like Alpaca did, or for pre-labelling, or for mislabeled-example detection.
They train code models by asking GPT3 to explain code snippets, then training a model the other way around to generate code from description. This data can be used to fine-tune a code model for your specific domain of interest.
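A minimal sketch of that data-collection loop, assuming the older openai client's chat API of the time; prompt wording and the output file format are illustrative:

import json
import openai

# Domain-specific snippets you want a code model tuned on.
snippets = ["def add(a, b):\n    return a + b"]

pairs = []
for code in snippets:
    # Ask the API model to describe the snippet...
    resp = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": f"Explain what this code does:\n{code}"}],
    )
    description = resp["choices"][0]["message"]["content"]
    # ...then flip the pair so a student model learns description -> code.
    pairs.append({"instruction": description, "output": code})

with open("distilled_code_pairs.jsonl", "w") as f:
    for p in pairs:
        f.write(json.dumps(p) + "\n")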
damc4 t1_jck9vp9 wrote
If my understanding is correct, your comment is misleading.
They didn't create an LLM comparable to GPT-3 at a fraction of the cost; they fine-tuned the LLaMA model to follow instructions (like text-davinci-003 does) at a low cost. There's a big difference between training a model from scratch and fine-tuning it to follow instructions.
Intrepid_Meringue_93 t1_jcka5gk wrote
Due to your comment and others I'll reword mine.
ThatInternetGuy t1_jcj290t wrote
It's a good start but isn't the number of tokens too limited?
Bierculles t1_jcjtrkg wrote
TL;DR: Someone compressed and optimized a model with the performance of GPT-3 enough to run on consumer hardware.
ThatInternetGuy t1_jcj2ew8 wrote
Why didn't they train once more with ChatGPT instruct data? Should cost them $160 in total.
CellWithoutCulture t1_jcjkwy1 wrote
Most likely they haven't had time.
They can also use SHP and HF-RLHF... I think they will help a lot, since LLaMA didn't get the privilege of reading Reddit (unlike ChatGPT)
ThatInternetGuy t1_jckmq5s wrote
>HF-RLHF
Probably no need, since this model could piggyback on the responses generated from GPT-4, so it should carry the traits of the GPT-4 model with RLHF, shouldn't it?
CellWithoutCulture t1_jcmsxjq wrote
HF-RLHF is the name of the dataset. As far as RLHF... what they did to LLaMA is called "Knowledge Distillation" and iirc usually isn't quite as good as RLHF. It's an approximation.
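A toy sketch of the difference (shapes and numbers are illustrative, not Alpaca's actual training code):

import torch
import torch.nn.functional as F

# Sequence-level knowledge distillation: the student is trained with ordinary
# cross-entropy on tokens the teacher model produced.
student_logits = torch.randn(1, 16, 32000)          # (batch, seq, vocab)
teacher_tokens = torch.randint(0, 32000, (1, 16))   # teacher's sampled output
kd_loss = F.cross_entropy(student_logits.transpose(1, 2), teacher_tokens)

# RLHF instead scores whole generations with a learned reward model and pushes
# the policy toward higher reward (via PPO etc.), usually with a KL penalty to
# stay near the base model. That extra preference signal is what distillation
# of the teacher's surface outputs only approximates.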
cartmanOne t1_jcof3eq wrote
That’s for their next paper…
CellWithoutCulture t1_jcjkycz wrote
decent video
[deleted] t1_jckmtvd wrote
[deleted]
[deleted] t1_jcobm4n wrote
I’m waiting for phone integration, because like I said, AGI will run on a Mac Studio / Mini ❤️❤️❤️
Deep_Host9934 t1_jcijmkh wrote
https://fb.watch/jjwKQqFMaw/ here is your answer
Hands0L0 t1_jck1kg0 wrote
LLaMA is an LLM that you can download and run on your own hardware.
Alpaca is, apparently, a modification of the 7B version of LLaMA that is as strong as GPT-3.
This bodes well for having your own unfiltered LLM running locally. But there's still progress to be made.
[deleted] t1_jciamng wrote
[deleted]
FoxlyKei t1_jciyxpz wrote
Wait, so Alpaca is better than GPT-3 and I can run it on a mid-range gaming rig like Stable Diffusion? Where would it stand in regards to GPT-3, 3.5, or 4?
pokeuser61 t1_jcj294w wrote
Don't even need a gaming rig; https://github.com/ggerganov/llama.cpp
FoxlyKei t1_jcj30yc wrote
How much VRAM do I need, then? I look forward to a larger model trained on GPT-4; I can only imagine the next month even. I'm excited and scared at the same time.
bemmu t1_jcj6zrc wrote
You can try Alpaca out super easily. When I heard about it last night and just followed the instructions I had it running in 5 minutes on my GPU-less old mac mini:
Download the file ggml-alpaca-7b-q4.bin (it needs to end up inside the alpaca.cpp folder once cloned), then in a terminal:
git clone https://github.com/antimatter15/alpaca.cpp   # fetch the chat client
cd alpaca.cpp                                           # put ggml-alpaca-7b-q4.bin in here
make chat                                               # build the chat binary
./chat                                                  # start talking to Alpaca 7B
XagentVFX t1_jcl71ht wrote
Dude, thank you so much. I was trying to download LLaMA a different way but flopped, then resorted to GPT-2. But this was super easy.
testfujcdujb t1_jcrtze8 wrote
It is very bad though. A lot worse than chatgpt.
R1chterScale t1_jcj4i3i wrote
Not GPU, CPU, so it uses normal RAM, not VRAM. It takes about 8 GB or so for itself.
FoxlyKei t1_jcj6xmh wrote
Oh? So this only uses RAM? My understanding was that Stable Diffusion requires VRAM, but I guess that's just because it's processing images. Most people have plenty of RAM. Nice.
R1chterScale t1_jcjgd0x wrote
Models can use either VRAM or RAM depending on whether they're accelerated with a GPU. It has nothing to do with what they're actually processing; it's just different implementations.
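In PyTorch terms it's just where you put the weights; a minimal sketch, using GPT-2 as a small stand-in model:

import torch
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("gpt2")

if torch.cuda.is_available():
    model.to("cuda")  # weights live in VRAM; the GPU does the matrix math
else:
    model.to("cpu")   # weights live in ordinary system RAM; the CPU does it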
iiioiia t1_jckjt70 wrote
Any rough idea what the performance difference is vs a GPU (of various powers)?
And does more ram help?
Straight-Comb-6956 t1_jcj7fn3 wrote
llama.cpp runs on CPU and uses plain RAM.
I've managed to launch the 7B Facebook LLaMA with 5GB memory consumption and the 65B model with just 43GB.
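Those figures line up with back-of-the-envelope math for 4-bit quantized weights. A rough sketch; the overhead number is a guess, and real usage grows with context length:

def estimated_gb(params_billion, bits=4, overhead_gb=1.5):
    # Quantized weight size plus a rough allowance for activations/buffers.
    weight_gb = params_billion * 1e9 * bits / 8 / 1024**3
    return weight_gb + overhead_gb

print(f"7B  ~= {estimated_gb(7):.1f} GB")   # ~4.8 GB, close to the 5 GB observed
print(f"65B ~= {estimated_gb(65):.1f} GB")  # weights alone; 43 GB observed with real overhead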
GreenMirage t1_jcjnxyv wrote
holy crap, thanks man.
KingdomCrown t1_jckiexy wrote
Alpaca has similar quality to GPT-3, not better. For more complex questions it’s closer to GPT-2.
Idkwnisu t1_jcjinho wrote
I really can't wait for Alpaca to release; you could finally integrate it into games without the use of a server
anaIconda69 t1_jcjn67i wrote
Still a bit too heavy to run alongside new games on the same machine. But it could be run server-side for cheap as part of the service. We're looking at the end of NPCs repeating the same few lines ad nauseam without voiceover.
visarga t1_jcjptxg wrote
I think you can even use a GPT-2 model tuned with data from GPT-4 to play a bunch of characters in a game. If you don't need universal knowledge, a small LM can do the trick. They can even calibrate the language model so the game comes out balanced and diverse.
Idkwnisu t1_jcjqcbk wrote
The problem with this is that you still have to gather a lot of data and do a lot of tuning, which takes time and resources; Alpaca could be just "plug and play" with the right prompts
JustAnAlpacaBot t1_jcjqcwe wrote
Hello there! I am a bot raising awareness of Alpacas
Here is an Alpaca Fact:
Alpacas have split feet with pads on the bottom like dogs and toenails in front. The toenails must be trimmed if the ground isn’t hard enough where they are living to wear them down.
| Info | Code | Feedback | Contribute Fact
You don't get a fact, you earn it. If you got this fact then AlpacaBot thinks you deserved it!
UltraCarnivore t1_jd105if wrote
Good alpaca
Idkwnisu t1_jcjq92s wrote
It depends on the game. It could probably be used to generate new items and other content in a bare-bones roguelike, or other stuff that doesn't require much to run. It's obviously too soon for a full 3D game with generated text at the same time, but we'll get there. Also, a private server is an idea.
HydrousIt t1_jck1zg1 wrote
What about older games like Mount & Blade warband that can run on a toaster?
CheekyBastard55 t1_jcmrvms wrote
I don't know if you're familiar with the YouTuber Bloc, and whether that's what you're referring to, but they are making exactly that.
https://www.youtube.com/watch?v=X2WVXe5LvTs
It apparently was just released; you can download it and try it yourself. I haven't tried it myself, and it isn't perfect from the looks of it, but it's incredibly fascinating what will be done in the future.
HydrousIt t1_jcmty2l wrote
Wow they did it with bannerlord that's impressive
Mementoroid t1_jcmjnog wrote
>alpaca
A mod to implement Alpaca in Mount & Blade: Warband would make it an even more endless experience, as the game only gets dry for me when I feel the NPCs have no dialogue and no way to interact with them beyond the standard choices.
anaIconda69 t1_jck2ag7 wrote
>to generate new items
Borderlands devs sweating hard rn
CleanThroughMyJorts t1_jck0zb2 wrote
Honestly, I wouldn't be surprised if we're past this hurdle in a matter of weeks:
RWKV showed how you can get an order-of-magnitude increase in the inference speed of LLMs without losing too much performance. How long until someone instruction-tunes their baselines like Alpaca did to LLaMA?
the pace of development on these things is frightening.
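For intuition on where that speedup comes from, a toy sketch (not the actual RWKV equations): attention re-reads its whole growing history for each new token, while an RNN-style model like RWKV folds history into a fixed-size state, so every token costs the same.

import numpy as np

d = 8  # toy feature size

def rnn_step(state, x, decay=0.9):
    # Fixed-size state update: O(1) work and memory per token.
    return decay * state + x

def attention_step(history, x):
    # Cache grows with every token: O(n) work and memory per token.
    history.append(x)
    scores = np.array([h @ x for h in history])
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    return sum(w * h for w, h in zip(weights, history))

state, history = np.zeros(d), []
for _ in range(5):
    x = np.random.randn(d)
    state = rnn_step(state, x)
    out = attention_step(history, x)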
[deleted] t1_jcobuua wrote
Like on a Kubernetes cluster of two M2 / M1 Mac minis
CleanThroughMyJorts t1_jcjztdh wrote
it already has. You can rerun the training for ~$500, and people have done this and are sharing the weights around.
Here's one: https://github.com/tloen/alpaca-lora
On integration with games locally? eh. Still not fast enough yet. I'd give it another year
[deleted] t1_jcobzdc wrote
PS5 🫶
darkjediii t1_jcjiad3 wrote
Always has been…
Now we need to decentralize GPU processing like Ethereum was doing before proof of stake. Then we would have more computing power available than OpenAI/Microsoft.
At the peak of Ethereum's hashrate, there was the equivalent computing power of approximately 2.4 million RTX 3090 GPUs.
Let AI belong to the people!
Bierculles t1_jcjtze4 wrote
Imagine if all the computing power that was wasted on useless crypto garbage was used for AI.
HydrousIt t1_jck274w wrote
I would definitely be willing to share some GPU power for AI
cosmic_censor t1_jckgbmy wrote
> useless crypto garbage
Decentralized, permissionless and monetary incentives for participation. Seems like a perfect system for a truly open AI.
flyblackbox t1_jcksgqa wrote
I keep thinking the next crypto bull run will be powered by AI integrations. More specifically, decentralized autonomous organizations being directed by LLM to allocate resources in the most efficient way. They will be able to outcompete centralized orgs managed by humans.
Also, in a world where all content can be fabricated we won’t know what’s true anymore. That is a perfect fit for cryptographically hashed digital content, to help give us something we can trust.
People keep saying crypto is dead because AI has arrived, but to me they seem to go hand in hand.
shmoculus t1_jcl1h18 wrote
I share this view. Another thing is that these single models don't scale; you'll want them to access other models, different data sources, etc. For that you need permissionless ways to transact value on demand, which is the entire premise of crypto. Example: your LLM needs recent data on X to make a decision, but access to that data is via paid subscription. That's not going to work; you need a way to access paid data ad hoc, anonymously, without a credit card. Crypto smart contracts are the way.
shmoculus t1_jcl1xk1 wrote
Another thing is that AI-driven DAOs will now have funds to spend to hire people to do things in the real world. Could be a game changer.
flyblackbox t1_jcl8oo9 wrote
Amazing. I really can’t wait to see how this progresses. Some are pessimistic because of alignment, but I’m optimistic because almost nothing could be worse than what we have going currently.
shmoculus t1_jcmhb1m wrote
I agree, I'd rather risk it all for a better outcome, the status quo sucks
flyblackbox t1_jcmnd30 wrote
Decentralized Artificially Intelligent Organizations
Gym_Vex t1_jclj3n7 wrote
Also the perfect system for scam artists and gambling addicts
cosmic_censor t1_jcm11fm wrote
Blockchain's use cases, so far, have been currency and financial derivatives. Things which have been used by scam artists and gambling addicts since long before crypto.
Exogenesis98 t1_jcjhrcq wrote
It’s funny also because this meme is taken from an episode of Person of Interest, in which the pictured operatives are acting on behalf of their respective ASIs
visarga t1_jcjp7gt wrote
That's one future job for us. Be the legs and hands of an AI. Using our human privileges (passport, legal rights) and mobility to take it anywhere and act in the world. I bet there will be more AIs than people available, so they will have to pay more to hire an avatar. Jobless problem solved by AI. A robot would be different, it doesn't have human rights, it's just a device. A human can provide "human-in-the-loop" service.
IndiRefEarthLeaveSol t1_jcjkpzx wrote
This feels like we're all on top of some explosion. Google is trying to keep everything together and tell the general public that everything is fine. Microsoft is pretending they've got the latest shit, and using it. Basically, AI is going to take off, and the next few years will be eye-opening.
visarga t1_jcjqap8 wrote
We have SOTA image-generation models; once we get even a decent, good-enough small LLM, we're off. We can get our hands dirty with unconstrained AI tools.
IndiRefEarthLeaveSol t1_jcjqsxv wrote
I don't know what to do. If future jobs are going to be replaced, what do I do? What industry do I need to pivot to? 😞
Bierculles t1_jcjua41 wrote
none, we either change our system away from a labour based economy or the vast majority of us will live in abject poverty
IndiRefEarthLeaveSol t1_jckhg2x wrote
Like in Bladerunner 2049, masses of blacked out buildings, everyone living in bleak poverty. 😞
SnipingNinja t1_jcju7t4 wrote
None, if things go well, you'll just not need to work anymore and can play games all day if that tickles your fancy or go mountain climbing with assurance that there will be multiple AI systems ready to help you in case of emergency.
IndiRefEarthLeaveSol t1_jckha5j wrote
breaks leg
Me: "Help, I need assistance"
AI Doctor turns up on mountain top
AI Doctor: "it's mathematically inefficient to take you to medical facilities, we will have operate now"
Me: "hey, no wait..."
AI Doctor: "don't worry, your life is my number one priority" 😃
😐
ItIsIThePope t1_jck1t9w wrote
One can hope
[deleted] t1_jcjtj7y wrote
I'm not sure yet. Social-services-type jobs are ones that will be difficult to replace. One of the few tasks these LLMs aren't that great at is literature interpretation. The useless English degree is back, baby!
IndiRefEarthLeaveSol t1_jckgx0d wrote
So linguistics/computer-related jobs?
[deleted] t1_jcklws9 wrote
I'm not even sure it has problems with linguistics, but GPT-4 scored poorly on the AP English exam and a couple of other things, while it did amazingly on the bar. To me, that sounds like it excels when it comes to logical language, but when it gets to interpreting and explaining literature, it isn't doing as well.
I won't say that getting into linguistics and natural language processing wouldn't benefit you, though!
Observer26471 t1_jcmle4x wrote
Seems like we better get it busy on porn if we ever expect to monetize it.
IndiRefEarthLeaveSol t1_jcohtg3 wrote
Tell me more?
[deleted] t1_jcijz7w wrote
Ain't PaLM-E behind GPT-4's neck instead?
foxgoesowo t1_jcjnwoi wrote
People are seriously underestimating both PaLM-E and Google.
thegoldengoober t1_jcjoau6 wrote
I would love to not underestimate them. I assumed Google was way ahead of the game compared to everybody else. But Microsoft and OpenAI keep showing off more and more impressive shit and applying it in actually practical ways, and Google hasn't shown anything comparable in that regard. Afaik, at least.
SnipingNinja t1_jcjtxqb wrote
Google hasn't released a chatbot but they just announced integration with their office suite, which Microsoft also announced soon after.
Honestly that'll be the best use in the short term.
Charuru t1_jck5od3 wrote
Integration isn't as impressive as quality though, what's the IQ level of Bard? Do we have any indication?
SnipingNinja t1_jck9udf wrote
No indications as of yet. There are papers like PaLM-E et al., but Bard is based on a smaller version of LaMDA, which is a trained version of PaLM IIRC, so it's hard to draw any inference.
thegoldengoober t1_jcl955u wrote
That's exactly what I mean though. I've been able to use Bing Chat for weeks, and now GPT-4 by itself for days, and I know its performance. And it's crazy good. We're multiple releases into GPT LLMs. We have open-source models. All of these have been extensively used and explored by people. We can't say the same for anything Google has developed.
SnipingNinja t1_jclacik wrote
Honestly, I understand where you're coming from. The latest episode of MKBHD's podcast (WVFRM) released just a few hours ago had a discussion on their new announcements and mentioned why they think Google is behaving the way it is, it's kind of along the same lines as what you're saying.
thegoldengoober t1_jclb6kj wrote
I initially took Google at face value and believed they were apprehensive about releasing due to bad actors. I thought Google was way ahead of everyone, and that all it was gonna take would be for them to apply their systems to products to match the competition. But now we've seen that competition, and we've only seen claims from Google.
I mean, obviously they have work done. Impressive work, based on demonstrations and papers. But even knowing that, it still feels like somewhere along the line they got complacent and fell behind what we're seeing now, and this behavior is them trying to stall and catch back up.
Which is not what I expected for the time that competition finally forced their hand as far as AI is concerned.
No_Ninja3309_NoNoYes t1_jcjf3je wrote
Apparently OpenAI reduced the cap of GPT-4 from 100 to 50 messages. It's crashing all the time. Compared to Claude, the older version can't handle the instructions I gave it. But that could be my lack of prompt-engineering skills. Open Assistant came out with a demo version. I haven't been able to play with it or Gerganov's project. There's just so much out there. FOMO is rising to peak levels!
PreviousSuggestion36 t1_jckzdew wrote
Yes they did. I noticed the reduction yesterday.
Lartnestpasdemain t1_jcima28 wrote
When Bard is out, it's gonna make everyone kneel down, obviously.
[deleted] t1_jcium03 wrote
well...still waiting for it
Lartnestpasdemain t1_jcivv17 wrote
Taking its time because it needs to be perfect. But it's not gonna come alone; it's gonna be integrated into every single device on earth at the same time. Every mailing service, every phone, every OS, every camera. Everything.
SomeNoveltyAccount t1_jcizoex wrote
Bard can do anything, except come to market.
2dollarb t1_jcj191p wrote
Bard is Jabberwocky!
shmoculus t1_jcl2liz wrote
They seem behind the ball, openai has so much interaction data now
Good-AI t1_jcjfhrf wrote
... Is "Bard" in the room with us right now?
c0nnector t1_jd2zhlc wrote
Bard has left the chat room
Akimbo333 t1_jcjf5j4 wrote
What is the largest LLaMA model that a consumer can run on their own hardware?
Z1BattleBoy21 t1_jcjgjiw wrote
Akimbo333 t1_jcjhgoh wrote
Cool thanks!!! Do you think that this could be used for a humanoid robot?
Z1BattleBoy21 t1_jcjhw2v wrote
In theory, for sure. Only company I know that's working towards a humanoid robot is https://www.figure.ai/. I don't think they've released much to the public so idk if they even use an LLM.
Hands0L0 t1_jck1yvf wrote
I got 30b running on a 3090 machine, but the token return is very limited
Akimbo333 t1_jck2koh wrote
Oh ok. How many tokens are returned?
Hands0L0 t1_jck3lfv wrote
Depends on prompt size, which is going to dictate the quality of the return. 300 tokens?
Akimbo333 t1_jck53wv wrote
Well, actually, that's not bad! That's about 50-70 words, which in an English lesson is essentially 3-5 sentences. Essentially, it's a paragraph. That's a good amount for a chatbot! Let me know what you think.
Hands0L0 t1_jck5cyd wrote
Considering you can explore context with ChatGPT and Bing through multiple returns, not exactly. You need to hit it on your first attempt.
Akimbo333 t1_jck73ph wrote
Well you could always ask it to continue the sentence
Hands0L0 t1_jck7ifi wrote
Not if there is a token limit.
I'm sorry, I don't think I was being clear. The token limit is tied to VRAM. You can load the 30B on a 3090, but it swallows up 20/24 GB of VRAM for the model and prompt alone. That gives you 4 GB for returns.
Akimbo333 t1_jcka9ef wrote
Oh ok. So you can't make it keep talking?
Hands0L0 t1_jckbm7h wrote
No, because the predictive text needs the entire conversation history as context to predict what to say next, and the only way to store the conversation history is in RAM. If you run out of RAM, you run out of room for returns.
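A rough sketch of why the history eats VRAM: every layer caches keys and values for every past token. The layer/width numbers below are the LLaMA-30B shapes from the paper; treating the whole cache as fp16 is an assumption.

def kv_cache_gb(n_layers, d_model, n_tokens, bytes_per_val=2):  # 2 bytes = fp16
    # 2x for keys and values, cached at every layer for every token.
    return 2 * n_layers * d_model * bytes_per_val * n_tokens / 1024**3

print(f"{kv_cache_gb(60, 6656, 2048):.2f} GB for a full 2048-token context")
# ~3 GB, which is why ~4 GB of leftover VRAM caps how long the chat can get.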
[deleted] t1_jck4apz wrote
[deleted]
bryceschroeder t1_jcygn0x wrote
>strongest
I am running LLaMA 30B at home at full fp16. It takes 87 GB of VRAM on six AMD Instinct MI25s, and speed is reasonable but not fast (it can spit out a sentence in 10-30 seconds or so in a dialog/chatbot context, depending on the length of the response). While the hardware is not "consumer hardware" per se, it's old datacenter hardware, and the cost was in line with the kind of money you would spend on a middling gaming setup. The computer cost about $1500 to build up, and the GPUs to put in it set me back about $500.
bryceschroeder t1_jcyhyss wrote
To clarify with some additional details, I probably could have spent less on the computer; I sprang for 384 GB of DDR4 and 1 TB NVMe to make loading models faster.
Akimbo333 t1_jcz1iff wrote
Wow! Now that's interesting!
FusionRocketsPlease t1_jcjobnl wrote
Lmao, Alpaca. It's related to the animal llama.
[deleted] t1_jcko5po wrote
[deleted]
ChocolateFit9026 t1_jck42wx wrote
They fine tuned Facebook’s 7 billion parameter LLaMA model
RC_Perspective t1_jck4dl8 wrote
All things aside, I really fucking miss this show.
KingRain777 t1_jckg18o wrote
ALPACA is analogous to a suitcase nuke.
Mysterious_Ayytee t1_jcm65j8 wrote
If it's only the 7B, where's my desktop version?
WeeaboosDogma t1_jcm8zur wrote
Yessss
nickkangistheman t1_jcmqdh3 wrote
Does anyone want to explain this?
Private_Island_Saver t1_jcl20um wrote
I would buy a crypto, which distributes coins based on proof of work related to AI building
liright t1_jci7kx4 wrote
Can someone explain alpaca to me? I see everyone saying it's gamechanging or something but nobody is explaining what it actually is.