DeepGamingAI
DeepGamingAI t1_izwthb9 wrote
Reply to comment by tdgros in [D] Global average pooling wrt channel dimensions by Ananth_A_007
It's just like a girlfriend: "No, I will not be offended if you do this," but then she goes ahead and takes it personally when you do it.
DeepGamingAI t1_izwql0s wrote
Reply to comment by tdgros in [D] Global average pooling wrt channel dimensions by Ananth_A_007
>I'm not sure that's recommended, it's not immoral or illegal.
Humans may not consider that design choice immoral, but I don't want to offend our soon-to-be AI overlords. Maybe I'll ask ChatGPT if it will judge me for doing that.
DeepGamingAI t1_izwolrw wrote
Reply to comment by tdgros in [D] Global average pooling wrt channel dimensions by Ananth_A_007
Thanks, that clarifies some things. I have also seen a parameter in the ViT head that simply returns the first token's representation instead of averaging across all tokens. I never understood why that made sense, or why the first token and not some other random token.
This also reminds me of another confusion I have about transformers: would they lose meaning if we gradually compressed the embedding size after every MLP in the transformer block?
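On the pooling point, here's a rough sketch of the two options I mean (PyTorch; the class and argument names are just illustrative, not from any particular library):

```python
import torch
import torch.nn as nn

class ViTHead(nn.Module):
    """Toy classification head showing the two common pooling choices."""
    def __init__(self, dim: int, num_classes: int, pool: str = "cls"):
        super().__init__()
        assert pool in ("cls", "mean")
        self.pool = pool
        self.fc = nn.Linear(dim, num_classes)

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        # tokens: (batch, num_tokens, dim). In ViT/BERT, token 0 is a learned
        # [CLS] token prepended to the patch tokens; picking it isn't arbitrary,
        # since self-attention lets it aggregate from every other token.
        if self.pool == "cls":
            x = tokens[:, 0]            # first token only
        else:
            x = tokens.mean(dim=1)      # global average over the token axis
        return self.fc(x)

# usage: batch of 8, 197 tokens (196 patches + [CLS]), embedding dim 768
head = ViTHead(dim=768, num_classes=1000, pool="cls")
logits = head(torch.randn(8, 197, 768))   # -> shape (8, 1000)
```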
DeepGamingAI t1_izwkqsg wrote
Reply to comment by [deleted] in [D] G. Hinton proposes FF – an alternative to Backprop by mrx-ai
I liked a reply to Jürgen from some Twitter user saying that if you have already solved AGI, now would be a good time to bring it up.
DeepGamingAI t1_izwiy5k wrote
Don't vision transformers do this? Instead of gradually compressing the input like a typical convnet, they maintain the high dimensionality throughout all the blocks of the deep network, and then simply use global pooling at the end to compress the "channel" dimension into a compact representation. I have no idea why that works, but we have seen that it does, and the model still learns despite the gradients flowing through this average pooling layer at the end. Would be great if someone could help clarify this for me.
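For concreteness, a rough sketch of the shape flow I mean (PyTorch; the dimensions are just illustrative):

```python
import torch
import torch.nn as nn

# A ViT-style encoder keeps the same (tokens, dim) shape through every block;
# only the final pooling collapses the token axis.
encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=768, nhead=12, batch_first=True),
    num_layers=12,
)

x = torch.randn(8, 196, 768)    # (batch, tokens, dim)
feats = encoder(x)              # still (8, 196, 768) after all 12 blocks
pooled = feats.mean(dim=1)      # global average pooling -> (8, 768)

# The mean is differentiable: each token gets an equal 1/196 share of the
# pooled gradient, so every position still receives a training signal.
```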
DeepGamingAI t1_iuztzcm wrote
Reply to comment by nomadiclizard in [D] DALL·E to be made available as API, OpenAI to give users full ownership rights to generated images by TiredOldCrow
Latent spaces are the new real estate
DeepGamingAI t1_iuv83lt wrote
Reply to comment by curiousshortguy in [D] What are the benefits of being a reviewer? by Signal-Mixture-4046
>It's about enabling good scholarship and guiding researchers.
You just described the role of a discriminator in a GAN.
>uneducated and unqualified reviewers
OP got an invite because they published there before; it's on merit, not a random review request. Besides, the question focuses solely on how reviewing benefits the reviewer; it doesn't seem to cover the whole picture surrounding the peer review system.
DeepGamingAI t1_iut3xdc wrote
Reply to comment by Proud_Ad_5895 in [D] What are the benefits of being a reviewer? by Signal-Mixture-4046
All of the profits, none of the work. How do they get away with this model?
DeepGamingAI t1_iusdqfr wrote
Think of it as a GAN: you train your generator when you publish and train your discriminator when you review. Do the two alternately and you can see what you'll slowly converge to :)
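A toy version of that loop, in case anyone wants the joke spelled out (an illustrative sketch only, not a serious GAN):

```python
import torch
import torch.nn as nn

# Toy 2D GAN: alternate "publishing" (generator step) and "reviewing"
# (discriminator step) and watch what the generator converges to.
G = nn.Sequential(nn.Linear(16, 32), nn.Tanh(), nn.Linear(32, 2))
D = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1))
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(1000):
    real = torch.randn(64, 2) * 0.5 + 2.0       # the "real" data distribution
    fake = G(torch.randn(64, 16))

    # Reviewer step: learn to tell real work from generated work.
    d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Publisher step: learn to get generated work past the reviewer.
    g_loss = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```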
DeepGamingAI t1_itwigoe wrote
Reply to [D]Cheating in AAAI 2023 rebuttal by [deleted]
What? How does the author identify you just from reviewers being able to see other reviewers' comments?
DeepGamingAI t1_it2ieda wrote
Reply to comment by LegacyAngel in [D] How frustrating are the ML interviews these days!!! TOP 3% interview joke by Mogady
>interview at MAANG
*MANGA
DeepGamingAI t1_it2ia8u wrote
Reply to comment by demi12395 in [D] How frustrating are the ML interviews these days!!! TOP 3% interview joke by Mogady
>what they really need is github copilot
This would be such a cool reply email to a company asking for a coding interview round: just send a link to Copilot and tell the company it should fit the job profile better than I would.
DeepGamingAI t1_isigku1 wrote
Academic research in GANs has slowed down, but I believe they still have industrial applications in active use by tools like Photoshop.
DeepGamingAI t1_j7jq9jk wrote
Reply to [D] Yann Lecun seems to be very petty against ChatGPT by supersoldierboy94
To me, all AI debates these days are just a regurgitation of the "glass half full or half empty" discussion. Yes, LLMs are far more intelligent than anyone anticipated they would be by this point in time, and no, they aren't general intelligence. The constant back and forth between these two camps could essentially be replayed year after year, and not much has changed in terms of arguments.