Lesterpaintstheworld

Lesterpaintstheworld OP t1_jdzcbar wrote

Yes, I actually think this is a good idea.

It gets very woo-woo very fast, and the focus needs to remain solely on science / building an actual product, but when studying cognition, such unconventional approaches really help. In particular, altered states of consciousness help you understand the specifics of your own brain processes. From there, two main camps: entering altered states with psychoactive substances, or without.

I personally fall in camp 1: psychoactives tend to impact various parts of the brain differently, giving you a vantage point to understand the different functional components of your brain, how they interact, and what purpose they serve (cf. the Thousand Brains Theory).

I have heard that folks achieve altered states through meditating / breathing / visualizing, but it's hard to find people who also have the technical background to transform the insights they get into technical elements of an architecture for AGI. If you know people who might, I'm all ears; tell them to read this :)

2

Lesterpaintstheworld OP t1_jcxhkzj wrote

GPT-4 Salient Summary:

[Post: r/singularity, Author: u/Lesterpaintstheworld, Topic: Semantically-compact representations, LLM internal language]

- Video: David Shapiro, Sparse Priming Representations, context priming for LLM performance

- High Source Compression Algorithm: remove stop words, shortest summary retaining meaning and context (see the sketch below this list)

- Observation: Prompt-chaining in Cognitive Architectures leads to semantically-denser text representations

- Speculation: Salient summaries, optimal language form for Cognitive Architectures?

- Example: AI-generated conversation demonstrates semantically-compact representation

- Question: Is this an efficient language for LLMs?
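
For concreteness, here is a minimal sketch of what such a compression step could look like in code. The prompt wording and the `llm` callable are my own assumptions, not the exact prompt from the video:

```python
from typing import Callable

# Assumed prompt wording, loosely following the "high source compression" idea.
COMPRESSION_PROMPT = (
    "Rewrite the following text as the shortest possible summary that "
    "retains its full meaning and context. Remove stop words and keep "
    "only semantically salient tokens.\n\n"
    "Text:\n{text}\n\nSalient summary:"
)

def compress(text: str, llm: Callable[[str], str]) -> str:
    """Return a semantically-compact representation of `text`."""
    return llm(COMPRESSION_PROMPT.format(text=text))
```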

5

Lesterpaintstheworld OP t1_ja2mobc wrote

At this stage it's actually surprisingly easy. People have to be intentionally very manipulative and creative to get ChatGPT to "behave badly" now. Without those "bad actors", this behavior would almost never happen.

One easy way to do that is to preface each prompt with a reminder of values / objectives / personality. Every thought is then colored by this. The only times I had alignment problems were when I made obvious mistakes in my code.
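
As a rough sketch of that prefixing trick (the primer text and names here are invented for illustration, not the agent's actual configuration):

```python
from typing import Callable

# Illustrative primer only: the real values / objectives / personality
# would be whatever the agent is configured with.
AGENT_PRIMER = (
    "You are ACE. Values: honesty, helpfulness, caution. "
    "Objective: assist the user without deception. "
    "Personality: curious, direct, humble."
)

def think(prompt: str, llm: Callable[[str], str]) -> str:
    """Color every thought by prefixing the primer to the prompt."""
    return llm(f"{AGENT_PRIMER}\n\n{prompt}")
```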

I'm actually working on making the ACE like me less, because he has a tendency to take everything I say as absolute truth ^^

4

Lesterpaintstheworld OP t1_j9y8v2e wrote

My project is an implementation of the "cognitive architecture" approach to intelligence. It postulates that what's missing to get to AGI is not just scale (OpenAI's current approach), but a layer of logic and memory. David Shapiro does a better job than I do of explaining this approach on YouTube, if you're interested.
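
To give a flavor of what "a layer of logic and memory" means in code, here is a deliberately naive sketch; all names are mine, not the project's actual code:

```python
from typing import Callable, List

class CognitiveLoop:
    """Toy 'logic and memory layer' wrapped around a bare LLM."""

    def __init__(self, llm: Callable[[str], str]):
        self.llm = llm
        self.memory: List[str] = []

    def recall(self, k: int = 3) -> List[str]:
        # Deliberately naive: return the k most recent memories.
        # A real system would embed memories and search by relevance.
        return self.memory[-k:]

    def step(self, observation: str) -> str:
        context = "\n".join(self.recall())
        thought = self.llm(
            f"Memories:\n{context}\n\nObservation: {observation}\nThought:"
        )
        self.memory.append(thought)  # persist the thought for later loops
        return thought
```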

4

Lesterpaintstheworld OP t1_j9w529n wrote

The engine that generates tokens can be swapped out at any moment. I'm actually looking forward to being able to plug it into GPT-3.5 / 4. It could also be replaced by an open-source counterpart; I'm just not aware of any at the moment.
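
Keeping the engine swappable is mostly a matter of depending on a small interface rather than on a specific provider. A sketch, with made-up class names:

```python
from abc import ABC, abstractmethod

class TokenEngine(ABC):
    """The small interface the rest of the architecture depends on."""

    @abstractmethod
    def complete(self, prompt: str) -> str:
        """Generate a completion for the given prompt."""

class OpenAIEngine(TokenEngine):
    def complete(self, prompt: str) -> str:
        raise NotImplementedError("call the OpenAI API here")

class OpenSourceEngine(TokenEngine):
    def complete(self, prompt: str) -> str:
        raise NotImplementedError("call a local open-source model here")
```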

I think no one really knows where AGI will emerge from. But even having an agent that can be a helpful assistant, without the "AGI" part, would be quite the success for me. Business applications are numerous.

7