inglandation t1_jecxwxl wrote
Reply to [P] Introducing Vicuna: An open-source language model based on LLaMA 13B by Business-Lead2679
What happens when we run out of camelids to name those models?
inglandation t1_jdjvmqe wrote
Reply to comment by Username2upTo20chars in [D] "Sparks of Artificial General Intelligence: Early experiments with GPT-4" contained unredacted comments by QQII
> you can't model one bit with it, it has no predictive power and it kind of shuts down discussions.
For now, yes, my statement is not very helpful. But this is a phenomenon that happens in other fields. In physics, waves or snowflakes are emergent phenomena, but you can still model them pretty well and make useful predictions about them. Life is another example. We understand life pretty well (yes, there are aspects we don't understand), but it's not clear how we go from organic compounds to living creatures. Put those molecules together in the right amounts and under the right conditions for long enough, and they start developing the structures of life. How? We don't know yet, but that doesn't stop us from understanding life and describing it pretty well.
Here we don't really know what we're looking at yet, so it's more difficult. We should figure out what the structures emerging from the training are.
I don't disagree that LLMs "just" predict the next token, but there is a non-trivial internal structure that picks the right word. This structure is emergent. My hypothesis here is that understanding this structure will allow us to understand how the AI "thinks". It might also shed some light on how we think, as the human brain probably does something similar (but maybe not very similar). I'm not making any definitive statement, and I don't think anyone can. But I don't think we can conclude that the model doesn't understand what it is doing based on the fact that it predicts the next token.
I think that the next decades will be about precisely describing what cognition/intelligence is, and in what conditions exactly it can appear.
inglandation t1_jdijeu5 wrote
Reply to comment by omgpop in [D] "Sparks of Artificial General Intelligence: Early experiments with GPT-4" contained unredacted comments by QQII
> One way to get really good at approximating what a human would likely write given certain information would be to actually approximate human cognitive structures internally.
Yes, I hope that we'll be able to figure out what those structures are, in LLMs and in humans. It could also help us figure out how to align those models better if we can create more precise comparisons.
inglandation t1_jdij4o8 wrote
Reply to comment by Username2upTo20chars in [D] "Sparks of Artificial General Intelligence: Early experiments with GPT-4" contained unredacted comments by QQII
> why should the next generation be fundamentally different?
Emergent abilities from scale are the reason. There are many examples of that in nature and many fields of study. The patterns of snowflakes cannot easily be explained by the fundamental properties of water. You need enough water molecules in the right conditions to create the patterns of snowflakes. I suspect that a similar phenomenon is happening with LLMs, but we haven't figured out yet what the patterns are and what the right conditions are for them to materialize.
inglandation t1_j9x44mf wrote
> What do people not understand about exponential growth?
A lot.
inglandation t1_j8gs4p7 wrote
Reply to comment by imnos in Anthropic's Jack Clark on AI progress by Impressive-Injury-91
I've seen a poker player become the CTO of a biotech company recently. Silicon Valley is a wild, wild place.
inglandation t1_j4vif1l wrote
Reply to comment by wavefxn22 in What do you guys think of this concept- Integrated AI: High Level Brain? by Akimbo333
Sure, but the OP's picture tries to compare AI models with the various functions of the brain.
Copying biology is not necessarily the way.
inglandation t1_j4rinpr wrote
Reply to comment by No_Ninja3309_NoNoYes in What do you guys think of this concept- Integrated AI: High Level Brain? by Akimbo333
It's also completely ignoring the fact that there are two hemispheres that fulfill different functions, which is a very important feature of central nervous systems found in nature.
inglandation t1_j2lfjte wrote
Reply to comment by bad_horsey_ in A Drug to Treat Aging May Not Be a Pipe-Dream by Mynameis__--__
I'm tired of this argument. Many of the drugs investigated are already very cheap and available (metformin, NMN, fisetin, to name a few popular ones that you can buy easily), and scaling production to the entire population will be immensely more profitable than selling it to a few wealthy people.
inglandation t1_j1ocj8r wrote
Reply to comment by the68thdimension in Will ChatGPT Replace Google? by SupPandaHugger
" But despite Pichai’s casual claim that his AI “understands” many topics, language models do not know what they are saying and cannot reason about what their words convey."
I've seen this before, but I've never found it convincing. How can the author be so sure of that, given that we don't even know how reasoning and understanding work in the human mind?
inglandation t1_is68ong wrote
Reply to comment by Batuhan_Y in [Project] I've built an Auto Subtitled Video Generator using Streamlit and OpenAI Whisper, hosted on HuggingFace spaces. by Batuhan_Y
Okay, thanks! Very useful app btw. It'd be nice if I could somehow replace the autogenerated YouTube subtitles with these. They're much better.
inglandation t1_is61d3u wrote
Reply to comment by Batuhan_Y in [Project] I've built an Auto Subtitled Video Generator using Streamlit and OpenAI Whisper, hosted on HuggingFace spaces. by Batuhan_Y
Okay, thanks! I'd run it locally but it looks like it would be a bit much for my computer.
inglandation t1_is5ym5t wrote
Reply to comment by Batuhan_Y in [Project] I've built an Auto Subtitled Video Generator using Streamlit and OpenAI Whisper, hosted on HuggingFace spaces. by Batuhan_Y
Did you manage to solve this error? I'm getting it too. My video is unlisted and 10:41 long.
I tried switching to the large model, but it takes forever; it's still running.
inglandation t1_jedaonh wrote
Reply to comment by IONaut in What were the reactions of your friends when you showed them GPT-4 (The ones who were stuck from 2019, and had no idea about this technological leap been developed) Share your stories below ! by Red-HawkEye
Blade Runner had that too.