Singularian2501
Singularian2501 OP t1_iwq1iph wrote
Reply to comment by lostmsu in [R] Will we run out of data? An analysis of the limits of scaling datasets in Machine Learning - Epochai Pablo Villalobos et al - Trend of ever-growing ML models might slow down if data efficiency is not drastically improved! by Singularian2501
https://www.lesswrong.com/posts/mRwJce3npmzbKfxws/efficientzero-how-it-works
A LessWrong article I found that explains how EfficientZero works.
In my opinion the author argues that systems like EfficientZero are much more efficient in their data usage, and that similar techniques could also be applied to LLMs to increase their sample efficiency.
To be honest, I hope my post gets enough attention that the author of the paper can answer our questions.
Singularian2501 OP t1_iwpzwii wrote
Reply to [R] Will we run out of data? An analysis of the limits of scaling datasets in Machine Learning - Epochai Pablo Villalobos et al - Trend of ever-growing ML models might slow down if data efficiency is not drastically improved! by Singularian2501
https://www.lesswrong.com/posts/Couhhp4pPHbbhJ2Mg/will-we-run-out-of-ml-data-evidence-from-projecting-dataset A LessWrong discussion of the paper.
Singularian2501 OP t1_iwnpy8m wrote
Reply to comment by lostmsu in [R] Will we run out of data? An analysis of the limits of scaling datasets in Machine Learning - Epochai Pablo Villalobos et al - Trend of ever-growing ML models might slow down if data efficiency is not drastically improved! by Singularian2501
Yes, they mentioned it at the end of their blog article. But I think it was only meant as an example of how better sample efficiency could be achieved, not as a SOTA comparison.
Singularian2501 OP t1_iw9neym wrote
Reply to comment by rixtil41 in Theories of consciousness - Seth, A.K. and Bayne, T. (2022). by Singularian2501
I prefer: Identity Theory, Cognitivism, Higher-Order Theory, and Functionalism.
These theories, alone or in combination, could explain consciousness in my opinion. But in the end the scientific community has to decide how valid these and other patterns are. After that we should be able to look for consciousness in machine intelligences and other life forms. (Added this comment to clarify the other comment I made moments ago. I hope that helps.)
Singularian2501 OP t1_iw9mksu wrote
Reply to comment by rixtil41 in Theories of consciousness - Seth, A.K. and Bayne, T. (2022). by Singularian2501
Let's say science determines in the future that consciousness is just thoughts about thoughts (Higher-Order Theory). Then you could look for that pattern or functionality in artificial neural networks and this way determine whether a machine is conscious or not. In a way, there are possible consciousness patterns that need to be checked for validity, individually or a few of them together (the ones I prefer (infographic), in combination, are a logical answer to me). After that you only need to look for these patterns in machine intelligences or other life forms. It's only pattern matching and validation at that point. I don't accept magic or metaphysics as an answer for consciousness, because metaphysics will become just physics once the definitive answer to what consciousness is has been found.
Singularian2501 OP t1_iw94tem wrote
I think posts like this are important for being able to determine in the future whether a machine has developed consciousness, as well as for helping create AIs with consciousness or finding out whether an already developed AI is conscious. It would also help answer the question of whether the proto-AGI proposed here: https://www.facebook.com/groups/DeepNetGroup/permalink/1773531039706437/ would already have consciousness!
Singularian2501 OP t1_iw87087 wrote
u/yuli-ban I am interested in your opinion of the following link: https://www.facebook.com/groups/DeepNetGroup/permalink/1773531039706437/ Do you think something like that will or could be realized next year? Also, what do you think about this rumor about GPT-4: https://www.reddit.com/r/singularity/comments/ysoyq4/the_ceo_of_openai_had_dropped_hints_that_gpt4_due/?utm_source=share&utm_medium=web2x&context=3
Singularian2501 OP t1_ivw4hoz wrote
Reply to comment by SerialPoopist in AGI Content / reasons for short timelines ~ 10 Years or less until AGI by Singularian2501
The proto-AGI, with its long-term memory and ability to grow its neural network, should be able to program much better than Codex or AlphaCode, while also understanding software architectures much better. It could thus help create a monolithic AGI (solved in one architecture, unlike the proto-AGI, which is more of a patchwork of different programs) that is maybe built a little like https://futureai.guru/technologies/brian-simulator-ii-open-source-agi-toolkit/ but much more scalable and usable, and thus 2-3 orders of magnitude faster and more effective (maybe even usable for robots by then).
Singularian2501 t1_ivblxer wrote
Reply to comment by GoGayWhyNot in I developed a memory and knowledge system for GPT-3 DaVinci. by [deleted]
The idea of scratchpads https://arxiv.org/pdf/2112.00114.pdf was also quite simple, but it got made into a paper! A paper is usually written in a way that clarifies the concept. A simple post without proof, or even a link to the code, can't deliver the needed clarity.
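For anyone curious, the core of the scratchpad idea is that the prompt demonstrates intermediate work before the final answer, so the model learns to "show its work" instead of guessing directly. Here is a minimal sketch for multi-digit addition; the prompt format, tag names, and helper function are my own illustration, not taken from the paper:

```python
def make_scratchpad_example(a: int, b: int) -> str:
    """Build one few-shot demonstration: the target answer is preceded by
    explicit column-by-column work inside a <scratch> block."""
    steps, carry, x, y = [], 0, a, b
    while x > 0 or y > 0 or carry:
        s = x % 10 + y % 10 + carry
        steps.append(
            f"{x % 10} + {y % 10} + carry {carry} = {s} "
            f"-> digit {s % 10}, carry {s // 10}"
        )
        carry, x, y = s // 10, x // 10, y // 10
    work = "\n".join(steps)
    return f"Input: {a} + {b}\n<scratch>\n{work}\n</scratch>\nTarget: {a + b}"

# A full prompt is just a few demonstrations followed by a new problem,
# ending with an open <scratch> tag for the model to continue:
prompt = "\n\n".join([
    make_scratchpad_example(29, 57),
    make_scratchpad_example(8, 4),
    "Input: 123 + 456\n<scratch>",
])
print(prompt)
```

The point is that the intermediate steps give the model something it can imitate token by token, which is where the sample-efficiency gain comes from.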
Singularian2501 t1_ivb8fv2 wrote
GitHub link or a paper? You don't even have a screenshot of the conversations or of how the code works. As long as that is the case, I will not give you an upvote!
I'm only not giving you a downvote right now because I like the idea.
Singularian2501 OP t1_iuj0x2c wrote
Reply to comment by shaktiman101 in [N] Andrej Karpathy: Tesla AI, Self-Driving, Optimus, Aliens, and AGI | Lex Fridman Podcast #333 by Singularian2501
Normally I download the videos and then watch them while I'm on the train.
Singularian2501 t1_itw560i wrote
Singularian2501 OP t1_it9bbao wrote
Singularian2501 t1_ir69m9u wrote
Nature Paper: https://www.nature.com/articles/s41586-022-05172-4
Singularian2501 t1_ir696wb wrote
Reply to [R] Discovering Faster Matrix Multiplication Algorithms With Reinforcement Learning by EducationalCicada
Blog article from DeepMind about the paper: https://www.deepmind.com/blog/discovering-novel-algorithms-with-alphatensor
Singularian2501 OP t1_j11bgj5 wrote
Reply to comment by CatalyzeX_code_bot in [R] Nonparametric Masked Language Modeling - MetaAi 2022 - NPM - 500x fewer parameters than GPT-3 while outperforming it on zero-shot tasks by Singularian2501
The GitHub link is broken. That was also the reason I didn't include it in the post. The paper is not by me! I also searched on Papers With Code, but they don't have a working link either. Edit: the link is working now: https://github.com/facebookresearch/NPM !