RedditLovingSun
RedditLovingSun t1_jdziqh1 wrote
Reply to comment by deepneuralnetwork in [D] FOMO on the rapid pace of LLMs by 00001746
Me too, I've used it to help me read books, study for tests, complete some small side projects, etc. Wish there were a list or subreddit somewhere for people to share the ways they've gotten value out of it so far
RedditLovingSun t1_jdtnafr wrote
Reply to comment by moonpumper in J.A.R.V.I.S like personal assistant is getting closer. Personal voice assistant run locally on M1 pro/ by Neither_Novel_603
Can't wait till we get there with a better Alpaca model + local transcription and audio generation + ChatGPT-style plugins for operating apps. It's all possible today; we just have to wait for it to be developed
RedditLovingSun t1_jdtn0z9 wrote
Reply to comment by sumane12 in J.A.R.V.I.S like personal assistant is getting closer. Personal voice assistant run locally on M1 pro/ by Neither_Novel_603
From the title bar, it looks like he's using the Whisper API to transcribe his audio into a text query. That has to send an API request with the audio out and wait for the text to come back over the internet. I'm sure a local audio-to-text transcriber would be considerably faster
Edit: nvm, Whisper can be run locally, so he's probably doing that
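For anyone curious, here's a rough sketch of the two paths being compared. The function name and file layout are my own invention, but `whisper.load_model` / `model.transcribe` are the open-source package's real entry points, and `openai.Audio.transcribe` is the hosted route in the old 0.x `openai` client:

```python
def transcribe(audio_path: str, local: bool = True) -> str:
    """Transcribe an audio file either on-device or via the hosted API."""
    if local:
        # pip install openai-whisper -- runs entirely on your machine,
        # no network round-trip once the weights are cached
        import whisper
        model = whisper.load_model("base")  # weights download on first use
        return model.transcribe(audio_path)["text"].strip()
    # hosted path: upload the audio, then wait for the text over the internet
    import openai  # pip install openai (0.x-era client)
    with open(audio_path, "rb") as f:
        return openai.Audio.transcribe("whisper-1", f)["text"]
```

The local path trades startup time (loading the model) for no per-request latency, which is what matters for a voice assistant loop.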
RedditLovingSun t1_jdipxex wrote
Reply to comment by zxyzyxz in [D] What is the best open source chatbot AI to do transfer learning on? by to4life4
They aren't open source, but didn't Stanford release their code and self-instruct training data, which supposedly costs only ~$600 to train on? I honestly don't know, but how enforceable is LLaMA's "no commercial use" clause after someone augments one of their models with LoRA and trains new weights on self-instruct data?
RedditLovingSun t1_jdfds8b wrote
Reply to comment by signed7 in [N] ChatGPT plugins by Singularian2501
I'm optimistic, between the hardware and algorithmic advances being made
RedditLovingSun t1_jdemr0b wrote
Reply to comment by nightofgrim in [N] ChatGPT plugins by Singularian2501
That's awesome. I've been thinking of trying something similar with a Raspberry Pi with various inputs and outputs, but I'm having trouble thinking of practical functions it could provide. Question: how did you hook the model up to the smart home devices? Did you program your own APIs that ChatGPT could use?
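Not assuming this is how you did it, but the glue layer I'd guess at looks something like this: the model is prompted to emit a small JSON "action", and a dispatcher maps it onto device calls (the device names and JSON shape here are entirely made up):

```python
import json

# Stand-in for real smart-home device APIs
DEVICES = {"living_room_light": {"on": False}}

def handle_action(raw: str) -> str:
    """Dispatch a model-emitted JSON action like {"device": ..., "command": ...}."""
    action = json.loads(raw)
    device = DEVICES[action["device"]]
    if action["command"] == "turn_on":
        device["on"] = True
    elif action["command"] == "turn_off":
        device["on"] = False
    return f'{action["device"]} is now {"on" if device["on"] else "off"}'

print(handle_action('{"device": "living_room_light", "command": "turn_on"}'))
# living_room_light is now on
```

The nice part of this shape is that the model never touches hardware directly; it can only request actions the dispatcher knows about.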
RedditLovingSun t1_jde2yvh wrote
Reply to comment by drunk-en-monk-ey in [N] ChatGPT plugins by Singularian2501
I'm not disagreeing with you but out of curiosity can you elaborate on any factors I may have overlooked?
RedditLovingSun t1_jddyo6g wrote
Reply to [N] ChatGPT plugins by Singularian2501
I can see a future where Apple and Android start including APIs and tools/interfaces for LLMs to navigate and use features of the phone; smart home appliance makers can do the same, along with certain web apps and platforms (as long as your user is authenticated). If that kind of thing takes off, so businesses can say they are "GPT friendly" (the same way they say "works with Alexa"), we could see actual Jarvis-level tech soon.
Imagine being able to talk to Google Assistant and it's actually intelligent and can operate your phone, computer, and home, execute code, analyze data, and pull info from the web and your Google account.
Obviously there are a lot of safety and alignment concerns that need to be thought out better first, but I can't see us not doing something like that in the coming years. It would suck, though, if companies got anti-competitive with it (like if Google's phone and home ML interfaces were kept available only to Google's own assistant model)
RedditLovingSun t1_jd3nidx wrote
Reply to comment by yahma in [P] OpenAssistant is now live on reddit (Open Source ChatGPT alternative) by pixiegirl417
I don't think they can use LLaMA because of the restricted license FB put on it. It wouldn't be as entirely open as Pythia
RedditLovingSun t1_jbz78cm wrote
Reply to comment by MinaKovacs in [D] Is anyone trying to just brute force intelligence with enormous model sizes and existing SOTA architectures? Are there technical limitations stopping us? by hebekec256
Depends on your definition of intelligence. The human brain is nothing but a bunch of neurons passing electrical signals to each other; I don't see why it's impossible for computers to simulate something similar and achieve the same results a brain does.
RedditLovingSun t1_je5m80p wrote
Reply to [D] llama 7b vs 65b ? by deck4242
Significantly better
I guess it would be interesting to see whether the performance difference gets wider or narrower after self-instruct fine-tuning like Alpaca's