arhetorical
arhetorical t1_j6xxijd wrote
Reply to comment by TrevorIRL in [N] OpenAI starts selling subscriptions to its ChatGPT bot by bikeskata
$20 is frankly a very reasonable price for anyone who uses it professionally. For people who just use it to generate memes, or students who want to cheat on homework, it's less reasonable, but I don't think they're the target market (and in the case of cheating, it's something OpenAI actively wants to prevent).
arhetorical t1_j6nhean wrote
Hiya, great work again! Maybe I'm outing myself a little here, but the code doesn't work on Windows machines, apparently because the processes are spawned instead of forked. I'm not sure it's an easy fix, and maybe not worth the time (it works fine on WSL), but I just thought I'd mention it in case you weren't aware!
On the ML side, should this scale up pretty straightforwardly to CIFAR100 or are there things to be aware of?
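For anyone hitting the same Windows issue: `multiprocessing` there spawns fresh interpreters instead of forking, so any code that launches processes has to sit behind an `if __name__ == "__main__":` guard, and worker functions have to be importable at module top level. A minimal sketch (the `square` worker is just a made-up example, not from the repo):

```python
import multiprocessing as mp

def square(x):
    # Runs in a child process; must be defined at module top level
    # so "spawn" can pickle a reference to it.
    return x * x

if __name__ == "__main__":
    # On Windows children are spawned: the module is re-imported in
    # each child, so process creation must be behind this guard.
    mp.set_start_method("spawn", force=True)
    with mp.Pool(2) as pool:
        print(pool.map(square, [1, 2, 3]))  # [1, 4, 9]
```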
arhetorical t1_izzryk4 wrote
Reply to comment by aleph__one in [D] G. Hinton proposes FF – an alternative to Backprop by mrx-ai
Oh, I hadn't heard about using SNNs for interpretability. I thought they were about as interpretable as ANNs. Sorry for all the questions, but can you elaborate on how they're more interpretable?
arhetorical t1_izxbkdf wrote
Reply to comment by aleph__one in [D] G. Hinton proposes FF – an alternative to Backprop by mrx-ai
I see, thanks. Why did you choose to use SNNs for your application instead of conventional ANNs? Are you using a neuromorphic chip?
arhetorical t1_izwxay5 wrote
Reply to comment by aleph__one in [D] G. Hinton proposes FF – an alternative to Backprop by mrx-ai
I mostly hear about surrogate gradient descent, what other methods work well in practice?
arhetorical t1_izfi323 wrote
Reply to Personal project for PhDs and scientists [P] by Cyalas
Sounds pretty cool! How recent are the papers that it searches? Does it automatically pull from arXiv or something?
arhetorical t1_izdkn6q wrote
Reply to comment by Reddituser2460155 in the biggest risk with generative AI is not its potential for misinformation but cringe. by hayAbhay
The prompt was something like "write the worst fanfic ever".
arhetorical t1_iz55woo wrote
Reply to VAE USING PYTORCH by Naive_Weird5939
Can you not just change the dimensions to work with CIFAR10?
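In case it helps, a rough sketch of what I mean: CIFAR10 images are 3x32x32 rather than MNIST's 1x28x28, so the encoder's input layer (and, symmetrically, the decoder's output layer) just needs resized dimensions. The layer widths and latent dim here are arbitrary placeholders, not from any particular implementation:

```python
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Toy VAE encoder sized for CIFAR10 (3x32x32) inputs."""

    def __init__(self, latent_dim=32):
        super().__init__()
        self.flatten = nn.Flatten()
        # 3 * 32 * 32 = 3072 input features instead of 1 * 28 * 28 = 784.
        self.fc = nn.Linear(3 * 32 * 32, 256)
        self.mu = nn.Linear(256, latent_dim)
        self.logvar = nn.Linear(256, latent_dim)

    def forward(self, x):
        h = torch.relu(self.fc(self.flatten(x)))
        return self.mu(h), self.logvar(h)

x = torch.randn(4, 3, 32, 32)  # a batch of CIFAR10-sized images
mu, logvar = Encoder()(x)
print(mu.shape)  # torch.Size([4, 32])
```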
arhetorical t1_iz3pwak wrote
Reply to the biggest risk with generative AI is not its potential for misinformation but cringe. by hayAbhay
This has terrifying implications for the future of humanity.
> Once upon a time in the Mushroom Kingdom, Sonic the Hedgehog and Mario had a wild night of partying. Sonic, in a drunken stupor, stumbled into Princess Peach's castle and stumbled upon her sleeping in her bed. Sonic, being the mischievous hedgehog that he is, decided to have some fun and began tickling her feet.
> To his surprise, Princess Peach woke up and instead of being angry, she found Sonic's antics amusing. The two of them laughed and played together for hours, eventually forming a close bond.
> Sonic, feeling bold, decided to confess his love for Princess Peach. But to his dismay, she rejected him and explained that she was already in a committed relationship with Mario.
> Heartbroken, Sonic retreated to his home in Green Hill Zone. He spent his days moping and feeling sorry for himself, until one day a portal opened up and whisked him away to another dimension.
> In this new world, Sonic discovered that he had the ability to transform into a werehog. He used his new powers to fight against evil and protect the innocent, but he could never shake the feeling of unrequited love for Princess Peach.
> Sonic continued to live in this new world, never truly happy but always doing what he could to help others. The end.
arhetorical t1_iwmwkn0 wrote
Reply to comment by eternal-abyss-77 in Can someone explain me the math behind this paper and tell me whether the way I have understood this paper is right or not? by eternal-abyss-77
It's not, they're just explaining the positioning of the pixels in the figure.
arhetorical t1_iwh7ra9 wrote
Reply to comment by eternal-abyss-77 in Can someone explain me the math behind this paper and tell me whether the way I have understood this paper is right or not? by eternal-abyss-77
I replied below.
arhetorical t1_iwh7pz0 wrote
Reply to comment by eternal-abyss-77 in Can someone explain me the math behind this paper and tell me whether the way I have understood this paper is right or not? by eternal-abyss-77
The identity matrix can be different sizes:
https://en.wikipedia.org/wiki/Identity_matrix
As for the rotations, it's in reference to the top of figure 3 and the position of the blue and red pixel.
arhetorical t1_iwfq6kp wrote
Reply to comment by eternal-abyss-77 in Can someone explain me the math behind this paper and tell me whether the way I have understood this paper is right or not? by eternal-abyss-77
I think it's better if you ask specific questions, rather than asking people to explain the whole thing. The explanation in the paper seems pretty clear to me already...
arhetorical t1_iwfoxgk wrote
Reply to Can someone explain me the math behind this paper and tell me whether the way I have understood this paper is right or not? by eternal-abyss-77
Did you mean to post an explanation of your understanding of the paper?
arhetorical t1_iw16x4q wrote
It looks like a lot but there's nothing especially weird in there. If you spend some time tuning your model you'll probably end up with something like that too.
- Adam: standard.
- Linear warmup and decay: warmup and decay are very common. The exact shape varies, but cosine decay is often used.
- Decreasing the update frequency: probably something you'd come up with after inspecting the training curve and trying to squeeze out a little more performance.
- Clipping the gradients: a pretty common fix for "why isn't my model training properly". Maybe a bit hacky, but if it works, it works.
The numbers themselves are usually just a matter of hand tuning and/or hyperparameter search.
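For illustration, here's roughly how warmup-plus-cosine-decay and gradient clipping might look in PyTorch. The model, step counts, and clip norm are made-up placeholders, not anyone's actual recipe:

```python
import math
import torch

model = torch.nn.Linear(10, 1)  # stand-in model for illustration
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

warmup_steps, total_steps = 100, 1000

def lr_lambda(step):
    # Linear warmup to the base LR, then cosine decay to zero.
    if step < warmup_steps:
        return step / warmup_steps
    progress = (step - warmup_steps) / (total_steps - warmup_steps)
    return 0.5 * (1 + math.cos(math.pi * progress))

sched = torch.optim.lr_scheduler.LambdaLR(opt, lr_lambda)

for step in range(total_steps):
    loss = model(torch.randn(8, 10)).pow(2).mean()
    opt.zero_grad()
    loss.backward()
    # Clip the gradient norm before the optimizer update.
    torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)
    opt.step()
    sched.step()
```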
arhetorical t1_ivb2sl1 wrote
Reply to comment by macORnvidia in bought legion 7i: Intel i9 12th gen, rtx 3080 ti 16 gb vram, 32 GB ddr5. need some confirmation bias (or opposite) to understand if I made the right decision by macORnvidia
I haven't tried that, but if the resource requirements are similar to prototyping (e.g. you'll be working with a pretrained model, not training one), then it should be fine. Again though, the biggest factor is whether you like it and whether it works for you. Since you bought a laptop instead of a workstation, you must have had a good reason for needing one, and none of us can answer that question for you. If you're not training, then as long as your stuff fits in memory the specs don't matter that much.
arhetorical t1_iv8nb2t wrote
Reply to comment by macORnvidia in bought legion 7i: Intel i9 12th gen, rtx 3080 ti 16 gb vram, 32 GB ddr5. need some confirmation bias (or opposite) to understand if I made the right decision by macORnvidia
The only thing that matters is if you like it. The specs really don't matter that much. Either you'll be prototyping your model, in which case you'll just be training for an epoch or two and having better specs will only save you a little bit of time, or you'll be training it in which case a laptop is not going to cut it. An external GPU will just make your setup less portable without actually giving you the performance of a workstation.
arhetorical t1_iv8fays wrote
Reply to bought legion 7i: Intel i9 12th gen, rtx 3080 ti 16 gb vram, 32 GB ddr5. need some confirmation bias (or opposite) to understand if I made the right decision by macORnvidia
You already got the advice not to buy a laptop for deep learning. But if you're determined and understand that it's not a great idea to begin with, then any laptop with a compatible GPU is fine. You're prototyping, not actually training on it. If you like the one you got then just stick with it.
arhetorical t1_iuyrp4f wrote
Reply to comment by Niu_Davinci in Can someone help me to create a STYLEGAN (1/2 or 3) with a dataset of my psychedelic handrawn/ A.I. Colored artworks? (280 in dataset, I have more iterations, maybe 600 total) by Niu_Davinci
Are you going to pay for it?
arhetorical t1_j70ndxc wrote
Reply to comment by 2blazen in [N] OpenAI starts selling subscriptions to its ChatGPT bot by bikeskata
Isn't ChatGPT more advanced than the davinci models available through the API? In any case, the point is that if you use it for work, $20 is negligible compared to the time you'll save.