jloverich t1_jcdnq8k wrote
They seem to get punted as soon as there's a good product the business wants to sell that clashes with the ethics committee. It seems the ethicists might be a bit too ethical for business. Axon, which does AI work (and Tasers) for police forces, had a bunch of its ethics team resign, I believe.
jloverich t1_j87aynr wrote
Reply to comment by Calm_Motor4162 in [D] Have their been any attempts to create a programming language specifically for machine learning? by throwaway957280
Meta is also working on Shumai, which is JavaScript/TypeScript and looks like PyTorch.
jloverich t1_j86oj73 wrote
Reply to [D] Have their been any attempts to create a programming language specifically for machine learning? by throwaway957280
Fortran with better syntax would do it, I think. They'd probably have to go the way of Carbon and support legacy Fortran while changing many other things quite a bit. Still, Fortran has matrix operations similar to NumPy's, whereas Carbon still treats matrices as second-class citizens... Agreed that there should be a better language for this than Python.
jloverich t1_j78cmkt wrote
Reply to [D] Are large language models dangerous? by spiritus_dei
Its context window is all the planning it can do. Think of a human who has access to lots of information but can only remember the last 8,000 tokens of any thought or conversation. There is no long-term memory, and you can only extend that window so much. Yann LeCun is correct when he says they will not bring about AGI; there are many more pieces to the puzzle. It's about as dangerous as the internet or a cell phone.
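A minimal sketch of what that limitation looks like in practice; the `MAX_TOKENS` value and the whitespace "tokenizer" here are illustrative stand-ins, not any particular model's real tokenizer or window size:

```python
# Illustration of a fixed context window: only the most recent tokens survive.
MAX_TOKENS = 8000  # assumed window size for illustration

def build_prompt(history: list[str], new_message: str) -> str:
    """Concatenate conversation turns, then keep only the most recent tokens."""
    tokens = " ".join(history + [new_message]).split()
    # Everything older than the window is simply gone -- there is no
    # separate long-term memory the model can fall back on.
    return " ".join(tokens[-MAX_TOKENS:])
```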
jloverich t1_j6g7dw9 wrote
Reply to Staying in Washington from March for maybe the majority of the year, looking for small towns with good access to Nature by SimpleDewd
Sequim, Anacortes, Whidbey Island.
jloverich t1_j6f0aw5 wrote
Reply to Google’s MusicLM is Astoundingly Good at Making AI-Generated Music, But They’re Not Releasing it Due to Copyright Concerns by Royal-Recognition493
It didn't seem that good. I know it will get better, but "astounding" is the wrong word. Maybe "almost compelling elevator music", but even then, there is something not quite right.
jloverich t1_j681uu6 wrote
Reply to comment by Talkat in Google not releasing MusicLM by Sieventer
The researchers aren't interested in working in places where they can't publish. There are other places that probably aren't publishing exactly what they're doing; Midjourney and Womba, I think, are examples.
jloverich t1_j5rnoxh wrote
Reply to Future-Proof Jobs by [deleted]
Baby maker
jloverich t1_j4oim54 wrote
Reply to [P] Looking for a CV/ML freelancer by bluebamboo3
Just use detectron2
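For reference, a minimal Detectron2 inference sketch; the COCO Mask R-CNN config, score threshold, and input path are just example choices:

```python
import cv2
from detectron2 import model_zoo
from detectron2.config import get_cfg
from detectron2.engine import DefaultPredictor

# Example config: a standard COCO-pretrained Mask R-CNN; swap in whatever
# model fits the task.
cfg = get_cfg()
cfg.merge_from_file(model_zoo.get_config_file(
    "COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml"))
cfg.MODEL.WEIGHTS = model_zoo.get_checkpoint_url(
    "COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml")
cfg.MODEL.ROI_HEADS.SCORE_THRESH_TEST = 0.5  # confidence cutoff for detections

predictor = DefaultPredictor(cfg)
image = cv2.imread("input.jpg")  # hypothetical input image
outputs = predictor(image)       # dict with "instances": boxes, masks, classes
print(outputs["instances"].pred_classes)
```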
jloverich t1_j3fik73 wrote
Paul Allen (co-founder of Microsoft) went to WSU.
jloverich t1_j3cpmoi wrote
Reply to comment by sidney_lumet in [Discussion] Is there any alternative of deep learning ? by sidney_lumet
Unfortunately, this is basically a different type of layer-by-layer training, which doesn't perform better than end-to-end training in any case I'm aware of. It also seems very similar to stacking, which can be done with any type of model.
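For comparison, stacking in the usual sense (train base models independently, then fit a meta-model on their predictions) is a one-liner in scikit-learn; the particular estimators and dataset here are arbitrary:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC

X, y = make_classification(n_samples=500, random_state=0)

# Base learners are trained independently; the final estimator is then fit
# on their cross-validated predictions -- a "layer by layer" flavour of
# training, as opposed to end-to-end backpropagation.
stack = StackingClassifier(
    estimators=[("rf", RandomForestClassifier()), ("svc", SVC())],
    final_estimator=LogisticRegression(),
)
stack.fit(X, y)
print(stack.score(X, y))
```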
jloverich t1_j34khv0 wrote
Reply to Looking for an affordable area near Seattle by JorgeF010
Kitsap County or south Whidbey Island.
jloverich t1_j29eu0p wrote
FWIW, you.com already has an LLM similar to ChatGPT on their website.
jloverich t1_iwx0k74 wrote
Reply to comment by Martholomeow in Why Meta’s latest large language model survived only three days online by nick7566
They sound very confident when they are wrong.
jloverich t1_ivysvng wrote
Reply to Let's assume Google, Siri, Alexa, etc. start using large language models in 2023; What impact do you think this will have on the general public/everyday life? Will it be revolutionary? by AdditionalPizza
I think LLMs are likely still too expensive. I'd like to see what Stability AI can produce.
jloverich t1_iv4rm8j wrote
Poisson flow generative models
jloverich t1_jdrgd0p wrote
Reply to comment by Kolinnor in Why is maths so hard for LLMs? by RadioFreeAmerika
Tbh, I parrot the value and then add 5 three times to double-check. One of the other things these chatbots aren't doing is double-checking what they just said; otherwise one of their statements would immediately be followed by another: "oh, that was wrong". Instead, you have to prompt them that it was wrong.
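A rough sketch of that missing verification pass; the `generate` function below is a hypothetical stand-in for whatever chat-model API is being used, not a real library call:

```python
def generate(prompt: str) -> str:
    """Hypothetical stand-in for a call to a chat model."""
    raise NotImplementedError

def answer_with_self_check(question: str) -> str:
    # First pass: produce an answer.
    draft = generate(question)
    # Second pass: explicitly ask the model to re-derive and verify it,
    # since it won't volunteer an "oh, that was wrong" on its own.
    review = generate(
        f"Question: {question}\nProposed answer: {draft}\n"
        "Recompute the answer step by step and state whether the proposed "
        "answer is correct."
    )
    return review
```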