ThisIsMyStonerAcount t1_j6n7cqn wrote
Reply to comment by TravellingTabby in [OC] My life in 2022, tracked across 12 metrics! by TravellingTabby
You can encode binary variables as 0/1 or -1/+1. There are specific correlation methods for binary data (e.g. MCC), but Spearman or even Pearson correlation will probably work just fine.
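For example, here's a minimal sketch with pandas/scipy/scikit-learn (column names and values are made up, just to illustrate the encoding):

```python
import pandas as pd
from scipy.stats import spearmanr, pointbiserialr
from sklearn.metrics import matthews_corrcoef

# hypothetical daily log: one row per day, binary metrics encoded as 0/1
df = pd.DataFrame({
    "mood":      [3, 4, 2, 5, 4, 1, 3],   # ordinal 1-5
    "exercised": [0, 1, 0, 1, 1, 0, 0],   # binary
    "rained":    [1, 0, 1, 0, 0, 1, 1],   # binary
})

# binary vs. ordinal: Spearman, or point-biserial (a special case of Pearson)
print(spearmanr(df["exercised"], df["mood"]))
print(pointbiserialr(df["exercised"], df["mood"]))

# binary vs. binary: Matthews correlation coefficient (a.k.a. the phi coefficient)
print(matthews_corrcoef(df["exercised"], df["rained"]))
```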
On a non-technical note: I kind of almost expected mood to be correlated with weather :D
ThisIsMyStonerAcount t1_j6mecyp wrote
Can we get a correlation matrix? That would be the most insightful thing to gain from this, IMO.
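Something like this, assuming the metrics end up in a pandas DataFrame with one row per day (columns invented for illustration):

```python
import pandas as pd

# hypothetical subset of the tracked metrics
df = pd.DataFrame({
    "mood":        [3, 4, 2, 5, 4],
    "sleep_hours": [7.5, 8.0, 6.0, 8.5, 7.0],
    "steps":       [4000, 9000, 2500, 12000, 7000],
})

# all pairwise correlations in one matrix; method="spearman" is robust for ordinal data
print(df.corr(method="spearman"))
```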
ThisIsMyStonerAcount t1_j2xw4yl wrote
Reply to [D]There is still no discussion nor response under my ICLR submission after two months. What do you think I should do? by minogame
There's a lot of context lacking here. What were the initial reviews like? If they were negative enough that everyone felt the paper couldn't be salvaged, or there were major critical flaws, then probably no one felt the need to waste any time looking at it twice. Or it could be that your rebuttal did not address the critical issues pointed out by the reviewers. Or the reviewers plainly all sucked. If you'd be willing to link to the OpenReview page, it'd be easier to give you suggestions on what to do next.
ThisIsMyStonerAcount t1_j2qum1u wrote
I started my PhD when I was 27 (and finished when I was 34 FWIW), and now work in a Big Tech AI lab. Age-wise you're definitely still fine.
ThisIsMyStonerAcount t1_j1q7cy6 wrote
Reply to [D] SE for machine learning reaserch by sad_potato00
ThisIsMyStonerAcount t1_j0tj2ew wrote
Reply to comment by retard-moron in [D] Will there be a replacement for Machine Learning Twitter? by MrAcurite
How about Kaiming He?
ThisIsMyStonerAcount t1_izakcfe wrote
Reply to comment by KingRandomGuy in [D] If you had to pick 10-20 significant papers that summarize the research trajectory of AI from the past 100 years what would they be by versaceblues
That's actually what I meant, thanks for pointing it out! Edited
ThisIsMyStonerAcount t1_iza1qzy wrote
Reply to comment by Blutorangensaft in [D] If you had to pick 10-20 significant papers that summarize the research trajectory of AI from the past 100 years what would they be by versaceblues
What nonlinearity would solve the issue? The usual ones we use today certainly wouldn't. Are you thinking of a 2nd-order polynomial? I'm not sure that's a generally applicable function, what with it being non-monotonic and all?
(Or do you mean a hidden layer? If so: yeah, that's absolutely hindsight bias).
ThisIsMyStonerAcount t1_iz9c2oo wrote
Reply to comment by Blutorangensaft in [D] If you had to pick 10-20 significant papers that summarize the research trajectory of AI from the past 100 years what would they be by versaceblues
Minsky's work was very relevant because at the time the perceptron was the state of the art, and that algorithm can't solve XOR. That's why his work started the first AI winter.
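A quick way to convince yourself (my own sanity check, not from the original discussion): fit a single linear threshold unit on the XOR truth table and watch it fail.

```python
import numpy as np
from sklearn.linear_model import Perceptron

# XOR truth table
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 1, 1, 0])

clf = Perceptron(max_iter=1000, tol=None, random_state=0)
clf.fit(X, y)

# no linear decision boundary separates XOR, so accuracy never reaches 1.0
print(clf.score(X, y))
```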
ThisIsMyStonerAcount t1_iz96wlt wrote
Reply to [D] If you had to pick 10-20 significant papers that summarize the research trajectory of AI from the past 100 years what would they be by versaceblues
I think you only mean "ML", so I'll leave out symbolic approaches. I'll also mostly focus on Deep Learning as the currently strongest trend. But even then, 20 papers wouldn't be enough to summarize a trajectory; they'd only be able to give a rough overview of the field.
Papers might not be the right medium for this, so I'll also use other publications. Off the top of my head, these would be the publications that introduced the following (too lazy to look them up), in roughly temporal order from oldest to newest:
- Bayes Rule
- Maximum Likelihood Estimation (this is a whole field, not a single paper, not sure where it got started)
- Expectation Maximization
- Perceptron
- Minsky's "XOR is unsolvable" (i.e., the end of the first "Neural Network" era)
- Neocognitron
- Backprop
- TD-Gammon
- Vanishing Gradients (i.e., the end of the 2nd NN era)
- LSTMs
- SVM
- RBMs (i.e., the start of Deep Learning and the 3rd NN era)
- ImageNet
- Playing Atari with Deep Reinforcement Learning
- Attention is All You Need
- AlphaGo
- GPT-3 (arguably this could be replaced by BERT, GPT-1 or GPT-2)
- CLIP
This is of course very biased to the last 10 years (because I lived through those).
ThisIsMyStonerAcount OP t1_iyfa33o wrote
Reply to comment by zeroows in [D] I'm at NeurIPS, AMA by ThisIsMyStonerAcount
Some people were bothered by the political incorrectness, so it was renamed in 2018.
ThisIsMyStonerAcount OP t1_iyerufm wrote
Reply to comment by RemarkableSavings13 in [D] I'm at NeurIPS, AMA by ThisIsMyStonerAcount
The swag is much better at NeurIPS, though they do have open bars on the floor, so that's nice.
ThisIsMyStonerAcount OP t1_iyed1to wrote
Reply to comment by noop_noob in [D] I'm at NeurIPS, AMA by ThisIsMyStonerAcount
I wouldn't get my hopes up for anything big on that front. Sure, they could train a more compute-efficient model (cf. Chinchilla), but in general it'll be incremental work, not groundbreaking. I'd be surprised if OpenAI actually dedicated a lot of resources to improving GPT-3; it would not be their style. There's comparatively little to gain in terms of new breakthroughs, IMO.
ThisIsMyStonerAcount OP t1_iydwd3d wrote
Reply to comment by blabboy in [D] I'm at NeurIPS, AMA by ThisIsMyStonerAcount
Inside joke for conference attendees. Has nothing to do with ML research.
ThisIsMyStonerAcount OP t1_iyd9tbj wrote
Reply to comment by SkeeringReal in [D] I'm at NeurIPS, AMA by ThisIsMyStonerAcount
ask me anything
ThisIsMyStonerAcount OP t1_iyd9s80 wrote
Reply to comment by random_boiler in [D] I'm at NeurIPS, AMA by ThisIsMyStonerAcount
Usually yes, you'd only publish at ML venues if you think it's more important for the ML people to hear about your research than it is for the people in your own field.
ThisIsMyStonerAcount OP t1_iyd8ky3 wrote
Reply to comment by FirstOrderCat in [D] I'm at NeurIPS, AMA by ThisIsMyStonerAcount
I'd usually go to NeurIPS to present my own work, but this year I don't have a paper here. I came to mingle with other researchers, catch up with old friends (people from my previous jobs/labs, people who left my current team, ex-interns, my advisor, co-authors of previous papers, random acquaintances from parties at previous conferences, ...), make new ones, and see what other people are working on.
ThisIsMyStonerAcount OP t1_iyd86dz wrote
Reply to comment by ID4gotten in [D] I'm at NeurIPS, AMA by ThisIsMyStonerAcount
- The recruiters who are there are still intent on hiring and getting to know people. But the big corps have shifted from "we're hiring everything and everyone" to "if you're outstanding in an area we care about, we'd love to have you". I haven't talked much to recruiters, but it seemed like they were still trying hard to find interested people.
- Can't say. HR people were interested in me, though my badge clearly says that I already work in industry.
- Haven't seen much happening in that sphere (which does not imply that it isn't out there).
ThisIsMyStonerAcount OP t1_iyd6gca wrote
Reply to comment by tastycake4me in [D] I'm at NeurIPS, AMA by ThisIsMyStonerAcount
The ticket was 1000 USD for industry people, which is a huge hike from previous years. Academic rate should've been much lower, but IDK. Then there's the hotel cost. Mine costs ~2k USD for the week, but I've heard people pay way less for an AirBnB. Travel costs depend too heavily on where you're coming from to give a decent estimate.
It's crowded, but that's always been the case. There are a few thousand people here. It's okay most of the time, although it is hard to talk to people during poster sessions, which is unfortunate.
Overall, I'm happy to be here again and meet the old friends and acquaintances you make working in the field over time. I missed that. And the general sense of being around ML people 24/7 is nice. On the other hand, I can't wait to have a real meal again and not just fast food all the time. I'd give it an 8/10.
ThisIsMyStonerAcount OP t1_iyd5quv wrote
Reply to comment by _thepurpleowl_ in [D] I'm at NeurIPS, AMA by ThisIsMyStonerAcount
No, the hot talk is WorkBoat; GPT-4 is vaporware. But today is OpenAI's party, so if they're going to announce something, today would be the day, IMO.
ThisIsMyStonerAcount OP t1_iyd5ajw wrote
Reply to comment by Hrnaboss in [D] I'm at NeurIPS, AMA by ThisIsMyStonerAcount
That's not my area of expertise, so I don't really have any advice, sorry.
ThisIsMyStonerAcount OP t1_iyd57ok wrote
Reply to comment by Grouchy_Document7786 in [D] I'm at NeurIPS, AMA by ThisIsMyStonerAcount
JAX is a huge step up from TensorFlow, even though I absolutely don't understand why it takes Google so many iterations to eventually land on a PyTorch-like API. JAX might be close enough that they'll give up trying, but I feel like it still falls short of being as good as PyTorch. But Google will definitely continue using it, and they're one of the main drivers of AI research, so at least in that area it'll see non-trivial adoption. I still think JAX is inferior to PyTorch, so there's no reason to switch (the better support for TPUs might be a selling point, though, so if Google gets serious about pushing those onto people, there might be an uptick in PyTorch->JAX conversion).
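For context, a toy sketch of the JAX style I mean (my own example, not part of the comparison above):

```python
import jax
import jax.numpy as jnp

def mse(w, x, y):
    return jnp.mean((x @ w - y) ** 2)

grad_fn = jax.grad(mse)   # gradients come back from a pure function...
w = jnp.zeros(3)
x = jnp.ones((4, 3))
y = jnp.ones(4)
print(grad_fn(w, x, y))   # ...instead of being accumulated on tensors via .backward() as in PyTorch
```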
The productization story is way worse than PyTorch's, and even within Google, production still uses TF. Until that changes, I don't think JAX will make big inroads in industry.
ThisIsMyStonerAcount OP t1_iyd4jfg wrote
Reply to comment by canbooo in [D] I'm at NeurIPS, AMA by ThisIsMyStonerAcount
What I meant is: you're asking me about p(X=x) = 0.2, where x is continuous, hence p(X=x) = 0.
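To spell out the math (my addition, just the standard fact): for a continuous random variable $X$ with density $f_X$,

$$P(X = x) = \int_x^x f_X(t)\,dt = 0 \quad \text{for any single point } x,$$

so only intervals carry probability, $P(a \le X \le b) = \int_a^b f_X(t)\,dt$, and for a single point you can at best report the density value $f_X(x)$.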
ThisIsMyStonerAcount OP t1_iyd4aee wrote
Reply to comment by [deleted] in [D] I'm at NeurIPS, AMA by ThisIsMyStonerAcount
a deep disregard for the English language.
ThisIsMyStonerAcount t1_jdeqmjc wrote
Reply to [D] Which AI model for RTX 3080 10GB? by SomeGuyInDeutschland
What's your end goal?