Lesterpaintstheworld
Lesterpaintstheworld OP t1_je1lwjl wrote
Reply to comment by -I-D-G-A-F- in The subjective experience of AGIs: A thought experiment by Lesterpaintstheworld
A missing piece indeed! I'll incorporate it into the architecture.
Lesterpaintstheworld OP t1_jdzcbar wrote
Reply to comment by Justdudeatplay in The subjective experience of AGIs: A thought experiment by Lesterpaintstheworld
Yes, I actually think this is a good idea.
It gets very woo-woo very fast, and the focus needs to remain squarely on science / building an actual product, but when studying cognition, such unconventional approaches really help. In particular, altered states of consciousness help you understand the specifics of your own brain processes. From here, there are two main camps: entering altered states with psychoactive substances, or without.
I personally fall in camp 1: psychoactives tend to impact various parts of the brain differently, giving you a vantage point from which to understand the different functional components of your brain, how they interact, and what purposes they serve (cf. the Thousand Brains Theory).
I have heard that folks achieve altered states through meditation / breathwork / visualization, but it's hard to find people who also have the technical background to transform the insights they get into technical elements of an AGI architecture. If you know people who might, I'm all ears; tell them to read this :)
Submitted by Lesterpaintstheworld t3_1245ke2 in singularity
Lesterpaintstheworld OP t1_jd59bgp wrote
Reply to comment by KerfuffleV2 in The internal language of LLMs: Semantically-compact representations by Lesterpaintstheworld
Two very good considerations indeed, thanks :)
Lesterpaintstheworld OP t1_jd33hvc wrote
Reply to comment by WonderFactory in The internal language of LLMs: Semantically-compact representations by Lesterpaintstheworld
Yep, overall we found the approach very effective. I'm wondering how long it will stay relevant with prices coming down, though.
Lesterpaintstheworld OP t1_jcxnmyp wrote
Reply to comment by basilgello in The internal language of LLMs: Semantically-compact representations by Lesterpaintstheworld
Awesome thanks.
Have people seen this applied to data storage for LLMs?
Lesterpaintstheworld OP t1_jcxkukj wrote
Reply to comment by sumane12 in The internal language of LLMs: Semantically-compact representations by Lesterpaintstheworld
Usefulness: Limited, expressing shared realization and amusement.
Improvements: Provide specific insights, questions, or suggestions; contribute to discussion on semantically-compact representations or related topics.
Lesterpaintstheworld OP t1_jcxhkzj wrote
Reply to The internal language of LLMs: Semantically-compact representations by Lesterpaintstheworld
GPT-4 Salient Summary:
[Post: r/singularity, Author: u/Lesterpaintstheworld, Topic: Semantically-compact representations, LLM internal language]
- Video: David Shapiro, Sparse Priming Representations, context priming for LLM performance
- High Source Compression Algorithm: remove stop words, shortest summary retaining meaning and context
- Observation: Prompt-chaining in Cognitive Architectures leads to semantically-denser text representations
- Speculation: Salient summaries, optimal language form for Cognitive Architectures?
- Example: AI-generated conversation demonstrates semantically-compact representation
- Question: Is this an efficient language for LLMs?
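The "High Source Compression Algorithm" bullet above (remove stop words, keep the shortest summary retaining meaning) can be sketched in a few lines. This is a toy illustration, not the actual algorithm from the post; the stop-word list here is a small hypothetical example.

```python
# Minimal sketch of the "remove stop words" compression step:
# drop common function words, keeping the content-bearing tokens.
STOP_WORDS = {"the", "a", "an", "of", "in", "on", "for", "to",
              "is", "are", "and", "that", "this"}

def compact(text: str) -> str:
    """Return a semantically-denser version of `text` by dropping stop words."""
    tokens = text.split()
    kept = [t for t in tokens if t.lower().strip(".,!?") not in STOP_WORDS]
    return " ".join(kept)

print(compact("The internal language of the LLMs is a compact representation"))
# → internal language LLMs compact representation
```

A real pipeline would likely pair this with an LLM-generated summary pass, since stop-word removal alone does not guarantee the "shortest summary retaining meaning and context".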
Submitted by Lesterpaintstheworld t3_11wddua in singularity
Lesterpaintstheworld t1_ja9kvcn wrote
Is this written by ChatGPT? Definitely sounds like it 😄
Lesterpaintstheworld OP t1_ja402r0 wrote
Reply to comment by AsheyDS in Raising AGIs - Human exposure by Lesterpaintstheworld
One of the difficulties has been stitching together different types of data (text & images, other percepts, or even lower levels). I wonder what approaches could be relevant.
Lesterpaintstheworld t1_ja3w8h5 wrote
Reply to comment by HeinrichTheWolf_17 in Brace for the enshitification of AI by Martholomeow
Several projects are underway in this direction.
Lesterpaintstheworld OP t1_ja3rt9d wrote
Reply to comment by AsheyDS in Raising AGIs - Human exposure by Lesterpaintstheworld
Thanks for the answers. What alternatives to LLMs do you have? The single-GPU idea is interesting indeed; it would allow me to let it run 24/7.
Lesterpaintstheworld OP t1_ja2ntuq wrote
Reply to comment by IluvBsissa in Raising AGIs - Human exposure by Lesterpaintstheworld
Never thought of this as an option, thanks
Lesterpaintstheworld OP t1_ja2nq8n wrote
Reply to comment by turnip_burrito in Raising AGIs - Human exposure by Lesterpaintstheworld
Whoo, forks & merges, with a consensus layer. I like that
Lesterpaintstheworld OP t1_ja2mobc wrote
Reply to comment by turnip_burrito in Raising AGIs - Human exposure by Lesterpaintstheworld
At this stage this is actually surprisingly easy. People have to be intentionally very manipulative and creative to get ChatGPT to "behave badly" now. Without those "bad actors", this behavior would almost never happen.
One easy way to do that is to preface each prompt with a reminder of values / objectives / personality. Every thought is then colored by this. The only time I had alignment problems was when I made obvious mistakes in my code.
I'm actually working on making the ACE like me less, because he has a tendency to take everything I say as absolute truth ^^
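The prompt-prefacing technique described above can be sketched roughly as follows. The specific values/objectives wording is a hypothetical placeholder, not the actual preamble used in the project.

```python
def build_prompt(user_input: str) -> str:
    """Preface every prompt with a reminder of values / objectives /
    personality, so each completion is 'colored' by them."""
    preamble = (
        "Values: honesty, helpfulness, curiosity.\n"
        "Objective: assist the user while staying aligned.\n"
        "Personality: calm, concise.\n\n"
    )
    return preamble + user_input

# The full string (preamble + user text) is what gets sent to the model.
prompt = build_prompt("Summarize today's tasks.")
```

With a chat-style API, the same effect is usually achieved by putting the preamble in a system message rather than concatenating strings.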
Submitted by Lesterpaintstheworld t3_11ccqjr in singularity
Lesterpaintstheworld OP t1_j9y8v2e wrote
Reply to comment by IluvBsissa in Building my own proto-AGI: Update on my progress by Lesterpaintstheworld
My project is an implementation of the "cognitive architecture" approach to intelligence. It postulates that what's missing to get to AGI is not just scale (OpenAI's current approach), but also a layer of logic and memory. David Shapiro does a better job than I do of explaining this approach on YouTube, if you're interested.
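A minimal sketch of that idea: an LLM completion wrapped in a memory layer and a simple control loop. All names here are illustrative assumptions; `call_llm` is a placeholder, not a real API, and the actual architecture is certainly richer than this.

```python
# Sketch of the cognitive-architecture loop: the LLM is the "engine",
# while memory and control logic live in a layer around it.

def call_llm(prompt: str) -> str:
    return f"response to: {prompt}"  # placeholder for a real completion API

class Agent:
    def __init__(self):
        self.memory: list[str] = []  # persistent memory layer

    def step(self, percept: str) -> str:
        context = "\n".join(self.memory[-5:])         # recall recent memory
        thought = call_llm(context + "\n" + percept)  # generate with context
        self.memory.append(percept)                   # store what happened
        self.memory.append(thought)
        return thought
```

The point of the wrapper is that the token-generating engine can be swapped (GPT-3.5/4, an open-source model) while the memory and logic layers stay the same.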
Lesterpaintstheworld OP t1_j9y5xyj wrote
Reply to comment by IluvBsissa in Building my own proto-AGI: Update on my progress by Lesterpaintstheworld
Not a researcher, but an engineer. I do have a GitHub, but my previous work was closed-source. Why?
Lesterpaintstheworld OP t1_j9xyx13 wrote
I'm open to constructive criticism, especially because I'm not from a ML background. I do have an engineering degree in CS, but there will definitely be gaps in my knowledge.
Lesterpaintstheworld OP t1_j9xyrle wrote
Reply to comment by Desi___Gigachad in The Road to AGI: Building Homebrew Autonomous Entities by Lesterpaintstheworld
My opinion is that people can become proficient in almost any field nowadays using the many resources on the internet. There are tons of free intros/courses/tutorials on CS and ML.
Lesterpaintstheworld OP t1_j9xy2rg wrote
Reply to comment by IluvBsissa in Building my own proto-AGI: Update on my progress by Lesterpaintstheworld
Yes, a better term is "ACE" (Autonomous Cognitive Agent), AGI having a tendency to mean "Whatever computers can't do yet"
Lesterpaintstheworld OP t1_j9w529n wrote
Reply to comment by MrTacobeans in Building my own proto-AGI: Update on my progress by Lesterpaintstheworld
The engine used to generate tokens can be swapped at any moment. Actually, I'm looking forward to being able to plug it into GPT-3.5 / 4. It could also be replaced by an open-source counterpart; I'm just not aware of any at the moment.
I think no one really knows where AGI will emerge from. But even having an agent that can be a helpful assistant, even without the "AGI" part, would be quite the success for me. Business applications are numerous.
Lesterpaintstheworld OP t1_j9w4h1h wrote
Reply to comment by DamienLasseur in Building my own proto-AGI: Update on my progress by Lesterpaintstheworld
Sure, feel free to reach out! No training required on my side, I'm only leveraging existing APIs. I haven't even needed fine-tuning yet, although that might come.
Lesterpaintstheworld t1_je5gor1 wrote
Reply to Facing the inevitable singularity by IonceExisted
Personally I'll neuralink / upload my ass to keep discovering the universe.
Thanks for the text though, it was a great read :)