Mefaso
Mefaso t1_j9s66qq wrote
Reply to comment by Additional-Escape498 in [D] To the ML researchers and practitioners here, do you worry about AI safety/alignment of the type Eliezer Yudkowsky describes? by SchmidhuberDidIt
>I remember as recently as 2015 at ICLR/ICML/NIPS you’d get side-eye for even bringing up AGI.
You still do, imo rightfully so
Mefaso t1_j9jgvoz wrote
Reply to comment by [deleted] in [D] "Deep learning is the only thing that currently works at scale" by GraciousReformer
Anything that scales sub-quadratically?
Anything "big-data"
Mefaso t1_j9dbyoz wrote
Reply to comment by Agreeable-Run-9152 in [D] On papers forcing the use of GANs where it is not relevant by AlmightySnoo
>However there is a small chance they acted in good faith and did not see that the randomness in the GAN won't do anything.
Why is the default assumption malice?
Especially if the only benefit would be a workshop paper
Mefaso t1_j9dbox6 wrote
>IMO reviewers at these journals/conferences need to be more mindful of this kind of plagiarism/low-effort submission.
Workshops in general have a very low bar, this surely wouldn't have been published in the main track.
Other than that I don't really see the point of this rant.
Yes, there are a lot of bad papers, even in the main tracks; you just kind of get used to it.
It also feels a lot like punching down. Maybe these are some undergraduates doing their first research project, and it's more about learning the methodology and writing than about very novel approaches.
Mefaso t1_j95hl4n wrote
Reply to comment by RideOrDieRemember in [D] Things you wish you knew before you started training on the cloud? by I_will_delete_myself
Maybe try different regions?
Mefaso t1_j95hjkm wrote
Reply to comment by Demortus in [D] Things you wish you knew before you started training on the cloud? by I_will_delete_myself
>Running Linux on your desktop/laptop makes it significantly easier to run projects on the cloud
Just as a note, this can easily be done in a Docker container on Windows as well.
Mefaso t1_j949aaa wrote
Reply to comment by dojoteef in [D] Please stop by [deleted]
Maybe we should consider adding more mods?
Mefaso t1_j948is2 wrote
Reply to [R] difference between UAI and AISTATS ? by ArmandDerech
They're more theory-focused imo, especially AISTATS
Mefaso t1_j8d2j9d wrote
Reply to comment by Meddhouib10 in [R] I made a mistake in a recent submission, what to do ? by [deleted]
This being a conference, you can probably just fix it for the camera-ready version after acceptance
Mefaso t1_j6z6zgt wrote
Reply to comment by uhules in [D] Why is stable diffusion much smaller than predecessors? by dahdarknite
>DALL-E 2 also applies diffusion in latent space
Not really in the important part. DALL-E 2 uses diffusion in CLIP-"latent"-space and then conditions the pixel-diffusion model on the result.
However, it still does a full diffusion pass in pixel space, which is more expensive than doing it in latent space, as LDMs do.
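The size gap is easy to see from tensor shapes alone. A minimal sketch with illustrative numbers (assuming a 512x512 RGB image and the f=8 downsampling autoencoder that Stable Diffusion uses; the exact shapes vary by model):

```python
# Per-denoising-step tensor sizes: pixel-space vs latent-space diffusion.
# Shapes are (channels, height, width); numbers are illustrative only.
pixel_shape = (3, 512, 512)   # pixel-space diffusion operates on the full image
latent_shape = (4, 64, 64)    # latent-space diffusion operates on a small latent

def numel(shape):
    """Number of elements in a tensor of the given shape."""
    n = 1
    for d in shape:
        n *= d
    return n

ratio = numel(pixel_shape) / numel(latent_shape)
print(ratio)  # 48.0 -> the pixel-space model pushes ~48x more elements per step
```

Each denoising step runs a full U-Net forward pass over that tensor, so the element count directly translates into compute and memory per step.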
Mefaso t1_j6vdzji wrote
Reply to comment by Ne_Nel in [D] Why is stable diffusion much smaller than predecessors? by dahdarknite
Exactly, the entire point of Latent Diffusion Models was to make them smaller and faster
Mefaso t1_j61zim5 wrote
Reply to comment by NaturalGradient in [P] EvoTorch 0.4.0 dropped with GPU-accelerated implementations of CMA-ES, MAP-Elites and NSGA-II. by NaturalGradient
>If you want to run GPU-accelerated neuroevolution in Brax or IsaacGym, then keeping everything on GPU is absolutely relevant
Do you have evidence for that?
I would assume that running the Brax rollouts, for example, would take 100x as long as the actual CMA-ES update
Mefaso t1_j4a9saf wrote
Reply to comment by ichigomashimaro in [D] Is MusicGPT a viable possibility? by markhachman
Isn't SoundCloud basically Danbooru for music?
There might not be a nicely accessible dataset yet, but that probably won't stop major players
Mefaso t1_j49n4nc wrote
Reply to [D] Is MusicGPT a viable possibility? by markhachman
Copyright issues never stopped research in the past, so why would it be different for music?
Mefaso t1_j2d7aaa wrote
Reply to comment by ureepamuree in [R] 2022 Top Papers in AI — A Year of Generative Models by designer1one
This sounds interesting, any previous papers you would recommend?
Mefaso t1_j29980m wrote
>i found that text to video problem is being actively researched and may not require as much compute as bare language models
There are always opportunities for research with little compute; usually this means your research has to avoid training new models, or at least avoid training from scratch.
However, text-to-video models are typically very compute-intensive.
Mefaso t1_j1851kc wrote
Reply to comment by gBoostedMachinations in [D] Using "duplicates" during training? by DreamyPen
>Set aside a validation set
Important: ensure that duplicates are not shared between the validation and training data.
Mefaso t1_izmyrsw wrote
Reply to comment by quagg_ in [D] When to use 1x1 convolution by Ananth_A_007
Oh, that sounds very useful. You wouldn't happen to know a code example of that?
Mefaso t1_izdjye3 wrote
The fact that BIG-bench includes a kanji ASCII-art classification task is pretty funny.
But I guess if you want over a hundred tasks in a benchmark, you run out of ideas at some point.
Mefaso t1_iyv3qkl wrote
Reply to comment by Nik_uson in [D] This neural network was generated by a neural network by Nik_uson
Well, unless you actually train it and show some results, you're just making claims without much basis.
Don't get me wrong, it's cool in a way that a neural network generates code for another neural network, but if it doesn't work properly it's not really helpful.
Mefaso t1_iytfmf1 wrote
Reply to comment by Nik_uson in [D] This neural network was generated by a neural network by Nik_uson
Works pretty well for what? You didn't say what this network is supposed to be used for
Mefaso t1_iyr2bjz wrote
That's just the most basic VAE/AE possible, I can't imagine it works very well for anything.
You could get a better result via Google in the same amount of time
Mefaso t1_iyckce7 wrote
Reply to comment by RobbinDeBank in [D] I'm at NeurIPS, AMA by ThisIsMyStonerAcount
Apparently their parties also had strippers, but you had to know Russians to get in.
This is all just second-hand though.
Mefaso t1_iybouu5 wrote
Reply to comment by pythoslabs in [D] I'm at NeurIPS, AMA by ThisIsMyStonerAcount
There is a list of people who decided to share the paper and reviews despite rejection:
https://openreview.net/group?id=NeurIPS.cc/2022/Conference#rejected-papers-opted-in-public
Mefaso t1_j9s6l7i wrote
Reply to comment by royalemate357 in [D] To the ML researchers and practitioners here, do you worry about AI safety/alignment of the type Eliezer Yudkowsky describes? by SchmidhuberDidIt
>Like, the same AIs that can cure cancer can also create highly dangerous bioweapons or nanotechnology.
A good example of this:
https://www.nature.com/articles/s42256-022-00465-9