Submitted by Ivanthedog2013 t3_yebk5c in singularity
sqweeeeeeeeeeeeeeeps t1_itzv3q3 wrote
Reply to comment by nihal_gazi in AGI staying incognito before it reveals itself? by Ivanthedog2013
“I have almost figured out an algorithm for an AGI” lmao no you have not. You’re in high school claiming you’re the closest person to solving AGI rn as an “AI researcher”
nihal_gazi t1_itzz78k wrote
Maybe
sqweeeeeeeeeeeeeeeps t1_itzzc07 wrote
I’m hoping you’ve at least published in top conferences?
nihal_gazi t1_itzzo22 wrote
What's the need?
sqweeeeeeeeeeeeeeeps t1_itzzt2l wrote
Ok so you’re just spouting BS about AGI and have nothing to back up your claims
nihal_gazi t1_iu00vj6 wrote
Yes, currently I don't, and that doesn't bother me. But I will be coding my algorithm within this year, and I have high hopes for its success, because as per my thinking it seems able to explain "literally every human phenomenon", from complex emotions to chains of logical reasoning. The best part is that it can work as well as a human even on weak devices like a mobile phone. Over the past 2 years, I have developed 70+ algorithms, many of which outperform older state-of-the-art algorithms in speed, and this time I might have hit the jackpot.
sqweeeeeeeeeeeeeeeps t1_iu03am3 wrote
Lmao this is too funny. I'm sure you can easily outperform SOTA models on "speed", but does it have higher performance/accuracy? We use these overparameterized deep models because they perform better, not because they're fast. How do you know it can perform "as well as a human"? What tests are you running? What is the backbone of this algo? I think you have just made a small neural net and are saying "look how fast this is", while it performs far worse than actually big models. I'm taking all of this with a grain of salt because you are in high school and have no real sense of what SOTA models actually do.
“70+ algorithms in the past year”: is that supposed to be impressive? Are you suggesting the number of algorithms you produce is any indicator of how well they perform? How do you even tune 70 models in a year?
I have a challenge for you. Since you are in HS, read as much research as you can (probably on efficient networks, or whatever you seem to like) and write a review paper on some small niche subject. Then start coming up with novel ideas for it, test them, tune them, push benchmarks, and make as many legitimate comparisons to real-world models as you can. Then publish it.
nihal_gazi t1_iu051cf wrote
Hahaha. No, hell no. Please, no neural nets. They are outdated and painfully slow. I am not willing to expose my AGI algorithm, as it's not yet patented. But I actually made an AI that can learn and generate sentences faster than an RNN (LSTM), and it does not use a neural net. It's a very simple algorithm. Right now it can do NLG without NLP, and I have made it into an Android app. I can tell you the NLG algorithm if you want.
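(For reference, non-neural sentence generators do exist; a bigram Markov chain is the classic example. The Python sketch below is only an illustration of that well-known technique, not the algorithm being claimed in this comment, and the tiny corpus is made up for the example.)

# Minimal sketch of a non-neural text generator: a bigram Markov chain.
# Illustrative only; this is NOT the commenter's algorithm.
import random
from collections import defaultdict

def train_bigram_model(corpus: str) -> dict:
    """For each word, record which words were observed to follow it."""
    model = defaultdict(list)
    words = corpus.split()
    for current_word, next_word in zip(words, words[1:]):
        model[current_word].append(next_word)
    return model

def generate(model: dict, seed: str, max_words: int = 20) -> str:
    """Walk the chain from a seed word, sampling one follower per step."""
    word = seed
    output = [word]
    for _ in range(max_words - 1):
        followers = model.get(word)
        if not followers:  # dead end: no observed continuation
            break
        word = random.choice(followers)
        output.append(word)
    return " ".join(output)

corpus = "the cat sat on the mat and the dog sat on the rug"
model = train_bigram_model(corpus)
print(generate(model, seed="the"))  # e.g. "the cat sat on the rug"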
I can give a solid reason why neural nets should be totally banned. Firstly, our brain is far more developed. If neural nets are to replicate a brain, it would take millions of years, not because of training speed but because of evolution. You see, because of evolution, our brain has dedicated centres for processing particular senses: there is a region for vision, one for smell, one for touch, and so on.
Now, here is the catch. Every time a neural net is built, it is like a different alien with its own way of perceiving the world; none of these AIs would be able to share their thoughts and ideas. That is where evolved features come into play: every human brain has these common structures. Neural nets don't.
sqweeeeeeeeeeeeeeeps t1_iu064z7 wrote
“It’s not yet patented” sounds ridiculously funny to me. Publish, advance the research, be open to criticism of your ideas; without that you are just making baseless claims. All I see is a HS student who has coded up his little ML algo and thinks it’s AGI.
Why am I wasting my time entertaining this