
Veei t1_jdtvpsg wrote

Can it fully run locally? I thought the AI could, but doesn't the TTS still need to contact an internet service via API? That'd be awesome if that's not the case.


Anjz OP t1_jdtvzt8 wrote

Fully local! Not as good at inference as GPT-4, or as fast... yet. But it's very functional and doesn't require an internet connection.
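
For reference, here's a minimal sketch of one way to do fully-offline TTS, using pyttsx3 (just an illustration of the idea, not necessarily the exact engine used here). It drives the local system speech engine, so no network is involved:

```python
# Minimal offline TTS sketch with pyttsx3 (pip install pyttsx3).
# It drives the local system speech engine (espeak on Linux, SAPI5 on
# Windows, NSSpeechSynthesizer on macOS), so no network calls are made.
import pyttsx3

engine = pyttsx3.init()
engine.setProperty("rate", 160)          # speaking rate in words per minute
engine.say("All of this runs locally.")  # queue an utterance
engine.runAndWait()                      # block until playback finishes
```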


No_Nefariousness1441 t1_jdtw43l wrote

How can I set this up? Did you follow a guide?


audioen t1_jduat9o wrote

https://rentry.org/llama-tard-v2 contains a bunch of the requisite torrent links, though much of the information is disorganized and multipurpose: do this if you want that, use these if you're on Windows but those if you're on Linux, and so forth. It's a mess.

I have llama.cpp built on my Linux laptop, I've grabbed some of these quantized model files, and I've installed the bunch of Python libraries required to convert the various formats into what llama.cpp can eat (model files whose names start with ggml- and end with .bin). I think it takes some degree of technical expertise right now if you do it by hand, though there are probably prebuilt software packages available by now.
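
If you'd rather call it from Python than drive the binary by hand, there are llama-cpp-python bindings that load the same ggml files. A minimal sketch, where the model path is a placeholder for whichever converted file you end up with:

```python
# Minimal sketch using the llama-cpp-python bindings
# (pip install llama-cpp-python). The model path is a placeholder:
# point it at the ggml-*.bin file produced by the conversion and
# quantization steps described above.
from llama_cpp import Llama

llm = Llama(model_path="./models/7B/ggml-model-q4_0.bin")
out = llm(
    "Q: What is the capital of France? A:",
    max_tokens=48,
    stop=["Q:", "\n"],  # stop before the model starts a new question
)
print(out["choices"][0]["text"].strip())
```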


AnOnlineHandle t1_jdunceg wrote

Do you know of anywhere to see examples of what those local models are capable of doing?


Puzzleheaded_Acadia1 t1_jdvrrty wrote

PLEASE tell me how to set this up on Ubuntu. I've tried every YouTube video and website but couldn't find anything for it, please help. Do you have to download Python libraries for it to work? And do I need an IDE? I saw some YouTubers doing a lot of the code in Google Colab, and no, I don't want to run it on Google Colab; I want something offline that can provide me with information and that I can experiment with.


Veei t1_jdtw5m0 wrote

That's way cool. Horizon Zero Dawn type reboot of civilization, heh. Are there any guides for how to set up the local assistant and TTS, etc.?
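
Something like the sketch below is what I'm imagining for the glue, assuming the llama-cpp-python and pyttsx3 pieces mentioned above (purely hypothetical, not OP's actual setup):

```python
# Hypothetical glue for a fully-local assistant loop: a local LLM for
# the answers, local TTS for the voice. Just a sketch combining
# llama-cpp-python and pyttsx3; the model path is a placeholder.
from llama_cpp import Llama
import pyttsx3

llm = Llama(model_path="./models/7B/ggml-model-q4_0.bin")
tts = pyttsx3.init()

while True:
    question = input("You: ")
    if not question:          # empty line exits the loop
        break
    reply = llm(f"Q: {question} A:", max_tokens=96, stop=["Q:"])
    text = reply["choices"][0]["text"].strip()
    print("Assistant:", text)
    tts.say(text)             # speak the answer through the local engine
    tts.runAndWait()
```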
