Comments

Educational_Ice151 t1_jd2qb1v wrote

Alpaca is mind-blowing. Thanks for sharing.

Shared to r/aipromptprogramming

6

ObiWanCanShowMe t1_jd3gyo9 wrote

Can I get some help here on Windows? It just sits at "Loading Model". All requirements are installed, and I already have the models; I'm just not sure where to put them.

D:\alpaca\Alpaca-Turbo>python webui.py
can't load the settings file continuing with defaults
Loading Model ━━━━━━━━━━━━━━━━━━━━━━━━ 0% -:--:--

1

Puzzleheaded_Acadia1 t1_jd3rsnn wrote

I want to ask: what are your PC specs, what dataset did you train it on, and how long did it take to train?

1

viperx7 OP t1_jd45c0r wrote

I haven't trained it; I just created this UI, which modifies the prompt in a way that gets the bot to respond like that.

If you can run dalai on your system, you can run this too.
The only thing is that on Windows its performance is very, very bad; Linux and Mac work well with 8 GB of RAM and 4 threads.
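
(A minimal sketch of what this kind of prompt-side persona injection can look like. It is illustrative only; the template and helper names are assumptions, not the actual Alpaca-Turbo code.)

```python
# Sketch: persona + conversation "memory" injected purely through the prompt.
# Nothing is trained; only the text the model is asked to continue changes.

def build_prompt(persona: str, history: list[tuple[str, str]], user_message: str) -> str:
    """Wrap the persona, prior turns, and the new message into one prompt string."""
    lines = [persona.strip(), ""]
    for user_turn, bot_turn in history:          # replay earlier turns as context
        lines.append(f"User: {user_turn}")
        lines.append(f"Assistant: {bot_turn}")
    lines.append(f"User: {user_message}")
    lines.append("Assistant:")                   # the model continues from here
    return "\n".join(lines)

if __name__ == "__main__":
    persona = "You are a cheerful assistant named Turbo who answers concisely."
    history = [("Hi!", "Hello! How can I help?")]
    print(build_prompt(persona, history, "What can you do?"))
```

Because the underlying weights never change, anything that can already run the base model (for example via dalai) can run a wrapper like this as well.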

3

AllHip t1_jd46tyj wrote

I'm trying to test it out but I'm getting this error message (on Win 11):

can't load the settings file continuing with defaults
Loading Model ━━━━━╺━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 13% -:--:--

It's stuck at 13%. Any ideas for how to fix it?

2

FermatsLastAccount t1_jd4sxbe wrote

I'm having the same issue on Linux, on a 16-thread Ryzen 5850U with 64 GB of RAM.

➜  Alpaca-Turbo git:(main) python3 ./webui.py

can't load the settings file continuing with defaults
Loading Model ━━━━╸━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━  13% -:--:--

It just stays at 13%.

1

GreatBigJerk t1_jd5m3c1 wrote

Are there plans to add support for the 13b version of the model?

2

gunbladezero t1_jd6620b wrote

This interface looks great! I'm using Windows though, and I can hear my fan speed up to run the model, but nothing happens. I can run the model through cocktail peanut's Dalai Alpaca, so any clue what's going on? Thank you! Edit: It generated text eventually, but really, really slowly... (about four and a half minutes to return a response :-p)

1

GrapplingHobbit t1_jd98i9i wrote

Hoping you manage to figure out what is slowing things down on Windows! In the direct command-line interface with the 7B model, the responses are almost instant for me, but it takes around 2 minutes via Alpaca-Turbo, which is a shame, because the ability to edit the persona and have memory of the conversation would be great.

1