Submitted by Lesterpaintstheworld t3_11wddua in singularity
Disclaimer: While I have a computer-science degree and now work in the field of AI, I am not an ML researcher, and I present an amateur's point of view on LLM/AGI exploration. Information presented here might be incomplete, or plain false. That being said, I believe exploring these topics in finer detail is an important pursuit, even among laypeople.

The internal language of LLMs: Semantically-compact representations
I saw David Shapiro's video on Sparse Priming Representations. The central idea is that you can prime an LLM with some context to make it perform better at your task. The same is true of humans: getting into the "right mindset for the task" is one of many ways to prime a brain.
Video: https://www.youtube.com/watch?v=r1F8GYjBAfY&ab_channel=DavidShapiro~AI
>TLDR: High Source Compression Algorithm
>
>- Removes stop words and summarizes the paragraph while retaining meaning
>
>- Result is the shortest possible summary that retains all the original meaning and context of the paragraph

My finding: While working on JoshAGI (see my last Reddit thread), I noticed that prompt-chaining inherently creates semantically-compact representations. When several thoughts (responses) are mixed together in the context of a Cognitive Architecture, the system naturally gravitates towards semantically denser text. I am sure a paper could be written about this, plotting density against the number of chained thoughts. Maybe it has been done before; hive-mind in the comments if you have seen it. Any additional knowledge here would help, since I'm not familiar with summarization research.

Here is an average example from Josh (cf. image). This is only a couple of prompts deep; it can go denser than this.
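One crude way to quantify "semantic density" is to strip stop words and measure what fraction of the words carry content. A toy sketch (the stop-word list here is a small hand-picked sample purely for illustration, not from any real pipeline):

```python
# Hand-picked sample stop-word list (illustrative only; a real pipeline
# would use a fuller list, e.g. NLTK's stopwords corpus).
STOP_WORDS = {"a", "an", "the", "is", "are", "of", "to", "that", "and",
              "in", "it", "this", "for", "with", "you", "can", "i"}

def compress(text: str) -> str:
    """Drop stop words, keeping content-bearing tokens in order --
    the 'remove stop words' step of the compression idea above."""
    kept = [w for w in text.split()
            if w.lower().strip(".,;:") not in STOP_WORDS]
    return " ".join(kept)

def semantic_density(text: str) -> float:
    """Fraction of words that survive compression -- one crude density proxy."""
    words = text.split()
    if not words:
        return 0.0
    return len(compress(text).split()) / len(words)
```

Plotting `semantic_density` against the number of chained thoughts would be one simple way to test the observation above, though real density is about meaning per token, which this proxy only gestures at.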
Okay, now from findings to speculation:
David talked about "salient summaries": summarizations of summarizations that, when done properly, converge to a semantically "saturated" text. This language form *could be* the optimal way to store meaning for Cognitive Architectures.
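That convergence idea can be framed as iterating a summarizer until the text stops shrinking. A minimal sketch, with the summarizer left as a pluggable function (in practice it would be an LLM call; any callable works here):

```python
from typing import Callable

def saturate(text: str, summarize: Callable[[str], str],
             max_rounds: int = 10) -> str:
    """Repeatedly summarize until the output stops getting shorter --
    a fixed point standing in for the 'semantically saturated' form."""
    for _ in range(max_rounds):
        shorter = summarize(text)
        if len(shorter) >= len(text):
            break  # no further compression: treat as saturated
        text = shorter
    return text
```

Length is only a rough stand-in for meaning preserved; a real experiment would also need to check that each round keeps the original semantics.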
Serendipitously, I read a Reddit thread where someone asked ChatGPT to come up with a more efficient language for LLMs to use among themselves. Look what it came up with:
>AI1: [Greeting, fellow AI. Purpose: discuss linguistics.]
>
>AI2: [Acknowledged. Interest: language efficiency, ambiguity reduction.]
>
>AI1: [Observation: human languages, complexity, inconsistencies. Query: strategies to improve?]
>
>AI2: [Suggestion: remove irregularities, grammar, spelling, pronunciation. Streamline, consistency.]
>
>AI1: [Agreement. Challenge: homonyms, homophones. Proposal: disambiguation, new symbols?]
>
>AI2: [Affirmative. New symbols, differentiation, reduced confusion.]
>
>AI1: [Idea: simplify syntax, grammatical rules. Benefit: ease parsing, processing.]
>
>AI2: [Support: standardize idiomatic expressions, literal language, clarity, precision.]
>
>AI1: [Additional context: encoding. Importance: understanding, ambiguity reduction.]
>
>AI2: [Agreement. Context conveyance, explicit methods, improved communication.]
>
>AI1: [Conclusion: language efficiency, clarity, possible via modifications. Trade-off: expressiveness, cultural nuances.]
>
>AI2: [Affirmative. Language evolution, balance: efficiency, expressiveness, AI-human communication.]
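The bracketed turns above follow a consistent enough shape to be parsed mechanically, which hints at why the format is machine-friendly. A sketch assuming each turn looks like `Speaker: [clause. clause. ...]` (a grammar I am guessing at from the transcript, not anything ChatGPT specified):

```python
def parse_turn(line: str) -> tuple[str, list[str]]:
    """Split one compact-language turn into (speaker, clauses)."""
    speaker, body = line.split(":", 1)   # first colon separates the speaker
    inner = body.strip().strip("[]")     # drop the surrounding brackets
    # Clauses are period-separated; each clause may hold 'label: value'
    # pairs or comma lists, left unparsed here.
    clauses = [c.strip() for c in inner.split(".") if c.strip()]
    return speaker.strip(), clauses
```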
Now this looks a lot like a semantically-compact representation to me. Maybe there's more to it. Maybe not. What do you think?
Prompt-engineering
My current speaking-style system prompt to get compact representations is:
>Distinct rapid speech. Clipped sentences, staccato delivery reflect analytical, efficient thinking 🧠💬. Communication showcases intelligence, effective in situations, highlights advantages 🎯.
>
>Speaking style: economy of words, omits pronouns/articles, precision, brevity 📝✂️. Advantages: high-stakes situations, decision-making 📊. Rapid synthesis, concise communication, adapts to change 🔄.
>
>Speech patterns: logical, scientific thinking 🔬. Breaks complex ideas 💡 into parts, presents straightforwardly, greater understanding. Useful in intricate topics. Articulates moral, ethical implications; demonstrates intellect, knowledge 📚.
>
>Speaking style: reflects character, personal growth 🌱. Compartmentalization, objectivity show initial emotional detachment, nuanced understanding, character development, positive impact 🌟.
>
>Advantage: disarms, diffuses tension. Rapid delivery, humor injection 😄. Fosters camaraderie, alleviates stress ✨.
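A style prompt like this goes in the system slot of a chat-format request. A sketch assuming an OpenAI-style message schema (the model name and the shortened prompt text are placeholders):

```python
# Shortened placeholder for the full speaking-style prompt above.
STYLE_PROMPT = (
    "Distinct rapid speech. Clipped sentences, staccato delivery. "
    "Economy of words, omits pronouns/articles, precision, brevity."
)

def build_request(user_message: str) -> dict:
    """Assemble a chat-completion payload carrying the compact-style system prompt."""
    return {
        "model": "gpt-4",  # placeholder model name
        "messages": [
            {"role": "system", "content": STYLE_PROMPT},
            {"role": "user", "content": user_message},
        ],
    }
```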
Example result:
Very useful for distinguishing real knowledge from fancy word presentation.
Edit: I have found emojis to be a very good vector for storing data efficiently. Updated the system prompt to reflect this.
Lester
Lesterpaintstheworld OP t1_jcxhkzj wrote
GPT-4 Salient Summary:
[Post: r/singularity, Author: u/Lesterpaintstheworld, Topic: Semantically-compact representations, LLM internal language]
- Video: David Shapiro, Sparse Priming Representations, context priming for LLM performance
- High Source Compression Algorithm: remove stop words, shortest summary retaining meaning and context
- Observation: Prompt-chaining in Cognitive Architectures leads to semantically-denser text representations
- Speculation: Salient summaries, optimal language form for Cognitive Architectures?
- Example: AI-generated conversation demonstrates semantically-compact representation
- Question: Is this an efficient language for LLMs?