MrAcurite t1_j8dnscj wrote
Reply to comment by daking999 in [D] Quality of posts in this sub going down by MurlocXYZ
I get that. I've come to actively hate a lot of the big, visual, attention-grabbing work that comes out of labs like OpenAI, FAIR, and to some extent Stanford and Berkeley. I work more in the trenches, on stuff like efficiency, but Two Minute Papers is never going to feature a paper just because it has an interesting graph or two. Such is life.
MrAcurite t1_j8d301x wrote
Reply to comment by gopher9 in [D] Quality of posts in this sub going down by MurlocXYZ
I'll take a look, thanks for the recommendation. Right now what I really want is a place to chat with ML researchers, primarily to try and get some eyes on my pre-prints before I submit to conferences and such. I'm still kinda new to publishing, my coworkers aren't really familiar with the current state of the ML publishing circuit, and I could always use more advice.
MrAcurite t1_j8c9u48 wrote
Reply to comment by daking999 in [D] Quality of posts in this sub going down by MurlocXYZ
I joined the Sigmoid Mastodon. It's a wasteland of people posting AI "art," pseudo-intellectual gibberish about AI, and nonsense that belongs on the worst parts of LinkedIn.
MrAcurite t1_j876ld0 wrote
Reply to comment by PurpleAntifreeze in The Invisible Extinction (2022) - How the loss of our internal microbiome may be linked to the rise in obesity, childhood allergies and autism. [01:20:00] by cherrybounce
Your mom is vacuous and stupid. Autism isn't a disease; it's just that normal people are dicks and can't handle our appreciation for subject matter expertise, our ability to actually listen to the words people say, or our enormous dicks. For evidence regarding that last one, well, ask your vacuous and stupid mom.
MrAcurite t1_j84zae1 wrote
Reply to comment by RSomnambulist in Framework now sells 2TB Steam Deck upgrade drives. by SUPRVLLAN
Interesting. I seem to recall losses as high as 40%, so 8% is an improvement. I'm still somewhat iffy about the use cases, though, where laptop + eGPU beats both a laptop with a dGPU and a laptop + desktop. I'm sure it's good for someone.
MrAcurite t1_j81hacz wrote
Reply to comment by [deleted] in Framework now sells 2TB Steam Deck upgrade drives. by SUPRVLLAN
... What? I don't even follow your argument. All laptops with dGPUs and AMD CPUs are imitation Steam Decks?
MrAcurite t1_j80f0vt wrote
Reply to comment by Kevo05s in Framework now sells 2TB Steam Deck upgrade drives. by SUPRVLLAN
I'm interested mostly in an internal dGPU, rather than an eGPU. I have a workstation/gaming desktop, and can offload the really computationally intensive tasks to it. So the dGPU is really if I want to do some light gaming, or more likely, make sure that some Torch code is talking to CUDA correctly before I bother getting out of bed to turn my workstation on.
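Concretely, the kind of sanity check I mean is something like this (assuming a stock PyTorch install; the snippet is just illustrative, not anything from my actual setup):

```python
import torch

print(torch.cuda.is_available())  # False means everything below is CPU-only
if torch.cuda.is_available():
    x = torch.randn(8, 8, device="cuda")  # allocate directly on the GPU
    y = x @ x                             # launch an actual kernel
    print(y.device, torch.cuda.get_device_name(0))
```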
I think, honestly, that eGPU enclosures are kind of inefficient, and only make sense for a very narrow range of budgets and use cases. The tech behind them is absolutely fucking cool as Hell, but given the choice between paying a few hundred bucks to hook up a graphics card to my laptop, or paying a few hundred bucks more than that to just have a second, separate computer that I can use for compute and Steam Play and such, I'd probably go with the latter option.
MrAcurite t1_j80bp20 wrote
It's nice to see Framework branching out into other areas involving repairing and upgrading devices. Just selling laptops and components to their own customers, however well they might be doing it, is a less stable business than diversifying.
I'd still like to see a dGPU option for the Framework chassis, and an AMD mainboard, but I'm happy with this.
MrAcurite t1_j6msvtn wrote
Reply to comment by qalis in [D] Have researchers given up on traditional machine learning methods? by fujidaiti
The customers I build models for insist on interpretability and robustness, which deep learning just doesn't give them right now. Actually just got a conference paper out of a modification to a classical method, which was kinda fun.
MrAcurite t1_j6jlqmi wrote
That I don't want to.
MrAcurite t1_j6gr1n8 wrote
Reply to comment by a_khalid1999 in [D] AI Theory - Signal Processing? by a_khalid1999
I would argue that EE is actually a better major than CS for ML. It beefs up your Math chops with DiffEq, Quantum, and the like, and includes enough Linear Algebra and Statistics to get you sorted. As a Math major doing ML research, I'm kind of embarrassed by how weak my background in Signal Processing is, and am working through a textbook on DSP in my spare time to fix that.
MrAcurite t1_j4t9ch1 wrote
Reply to comment by Zealousideal_Low1287 in [P] A small tool that shuts down your machine when GPU utilization drops too low. by nateharada
At work, we've got this thing that will notify you if a cloud instance has been running for 24 hours. However, it does this by messaging your work email; you can't configure it to go to a personal device or anything. Meaning, if you set a job to run at the end of the week, you can come back on Monday to over a thousand dollars of cloud charges and like fifty angry emails about it.
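For what it's worth, the kind of watchdog the OP's tool implements is roughly the sketch below; the threshold, polling interval, and shutdown command here are my own assumptions, not the OP's actual code:

```python
import subprocess
import time

THRESHOLD = 5      # percent utilization treated as "idle" (assumed)
GRACE_CHECKS = 30  # consecutive idle checks before acting (assumed)
INTERVAL = 60      # seconds between checks (assumed)

def gpu_utilization() -> int:
    """Highest utilization across all GPUs, as reported by nvidia-smi."""
    out = subprocess.run(
        ["nvidia-smi", "--query-gpu=utilization.gpu",
         "--format=csv,noheader,nounits"],
        capture_output=True, text=True, check=True,
    ).stdout
    return max(int(line) for line in out.splitlines())

idle = 0
while True:
    idle = idle + 1 if gpu_utilization() < THRESHOLD else 0
    if idle >= GRACE_CHECKS:
        subprocess.run(["sudo", "shutdown", "now"])  # or your cloud API's stop call
        break
    time.sleep(INTERVAL)
```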
MrAcurite t1_j2ru24j wrote
Reply to comment by notyourregularnerd in [D] life advice to relatively late bloomer ML theory researcher. by notyourregularnerd
I'm planning on applying to the ETH once I finish my MS, mostly because I think the whole "ask a professor to hire you" schtick might be easier than getting in somewhere with a more formal application, given my great work experience and fucking dogshit undergraduate performance. Also, it's a three year program with no coursework and an actually decent stipend, compared to US programs that might average five years and pay barely enough to eat or pay rent.
MrAcurite t1_j2rt62o wrote
Reply to comment by notyourregularnerd in [D] life advice to relatively late bloomer ML theory researcher. by notyourregularnerd
Maybe you should check out some of the labs at the ETH Zurich? Yeah, you'd have to put up with Schweizerdeutsch for three years, but it seems like they're doing some interesting work in the area.
MrAcurite t1_j2rrglj wrote
Reply to comment by notyourregularnerd in [D] life advice to relatively late bloomer ML theory researcher. by notyourregularnerd
I think that's a fair criticism of applied ML as a field. I've definitely described Deep Learning as alchemy to friends.
For my work, the people who are paying for the models have a... sizable interest in confirming that the models will actually work in the field, so on occasion I've been called on to modify classical methods to fit, rather than just throwing neural networks at everything. Maybe you would like that kind of thing? Or, otherwise, there are a lot of people going after interpretability and robustness, and some interesting progress has been made.
MrAcurite t1_j2rplc1 wrote
Everyone else has addressed starting your PhD at 27, and they better be right, as I likely won't be starting my own PhD for another few years.
But, regarding the value of pure ML theory research, e.g. convergence bounds, versus practical ML research, e.g. quantization methods, my personal feeling for quite some time has been that purely theoretical ML research is predominantly bunk. Machine Learning is so high-dimensional that things that can't be proven universal can be nearly guaranteed probabilistically, and things that can be shown to be possible can be staggeringly unlikely; for example, just because the No Free Lunch theorem exists doesn't mean that Adam won't work in the vast, vast majority of cases.
Someone with a PhD in pure ML theory, if they're good, is probably still perfectly capable of heading to industry and making bank, whether that's continuing to do ML theory research, moving over to applications, or just becoming a quant or something. But honestly? I just find screwing around with training models and shit to be way more fun, and you should try it, if you haven't already.
MrAcurite t1_j2igfva wrote
Reply to comment by Competitive-Rub-1958 in [D] Is there any research into using neural networks to discover classical algorithms? by currentscurrents
Interesting. I'll have to add that paper to my reading list.
MrAcurite t1_j2h2ei1 wrote
Reply to comment by currentscurrents in [D] Is there any research into using neural networks to discover classical algorithms? by currentscurrents
You can teach a neural network to solve, say, mazes in a 10x10 grid, but then you'd need to train it again to solve them in a 20x20 grid, and there would be a size at which the same model would simply cease to work. Whereas Dijkstra's, even if it slows down, would never fail to find the exit if the exit exists.
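To make the generality point concrete, here's a minimal sketch; it's BFS, which is just Dijkstra's with unit edge weights, and the 0-for-open, 1-for-wall grid encoding is an assumption for the example:

```python
from collections import deque

def solve_maze(grid, start, goal):
    """Shortest path length in a grid maze (0 = open, 1 = wall), or None."""
    rows, cols = len(grid), len(grid[0])
    queue, seen = deque([(start, 0)]), {start}
    while queue:
        (r, c), dist = queue.popleft()
        if (r, c) == goal:
            return dist
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols \
                    and grid[nr][nc] == 0 and (nr, nc) not in seen:
                seen.add((nr, nc))
                queue.append(((nr, nc), dist + 1))
    return None  # the exit genuinely isn't reachable

# The exact same function handles a 10x10 grid or a 10,000x10,000 one, just slower.
```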
You might be able to train a model to find new strategies in a specific case, analyze it, and then code your understanding of it yourself, kinda like using a Monte Carlo approach to find a numerical answer to a problem before trying an analytic one. But you're not going to be able to pull an algorithm out of the parameters directly.
MrAcurite t1_j2h1fbs wrote
Reply to [D] Is there any research into using neural networks to discover classical algorithms? by currentscurrents
I think what you're talking about is, essentially, having the neural network learn an algorithm, and then pulling the learned algorithm out of the network, and standing it up in code.
Sadly, that's not how the underlying dynamics of a neural network operate. We're doing statistical function approximation; there's not really all that much that's fundamentally "algorithmic" about the network. For example, a sort function is necessarily general over the entire domain of the entries for which it is valid, whereas a neural network will only approximate a function over the subset of the domain on which it was trained; all bets are off elsewhere. It doesn't generalize.
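As a toy illustration of that last point (assuming scikit-learn; the target function and architecture here are made up for the example):

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(2000, 1))  # training domain: [-1, 1]
y = X.ravel() ** 2                          # target: y = x^2

model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
model.fit(X, y)

print(model.predict([[0.5]]))  # near 0.25: inside the training domain
print(model.predict([[5.0]]))  # nowhere near 25.0: all bets are off out here
```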
Maybe you could pull something out of a Neural Turing Machine or a Spiking Neural Network, but even then, you're running into tremendously difficult problems in interpretability.
MrAcurite t1_j17amh6 wrote
Reply to [D] Using "duplicates" during training? by DreamyPen
Just make sure the duplicates don't bleed between the training and test sets.
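A sketch of what I mean, assuming a pandas DataFrame where duplicates share an identifying column; the column and file names are made up for illustration:

```python
import pandas as pd
from sklearn.model_selection import GroupShuffleSplit

df = pd.read_csv("data.csv")  # hypothetical dataset
groups = df["sample_id"]      # assumption: duplicate rows share a sample_id

# Group-aware split: every copy of a duplicated sample lands on exactly
# one side of the split, so nothing bleeds from train into test.
splitter = GroupShuffleSplit(n_splits=1, test_size=0.2, random_state=0)
train_idx, test_idx = next(splitter.split(df, groups=groups))
train, test = df.iloc[train_idx], df.iloc[test_idx]
```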
MrAcurite t1_izagr44 wrote
Reply to comment by Nameless1995 in [D] If you had to pick 10-20 significant papers that summarize the research trajectory of AI from the past 100 years what would they be by versaceblues
The 'C' in 'ChatGPT' stands for "Confident Bullshitting."
The 'hat' identifies this as merely an approximation of confident bullshitting.
MrAcurite t1_j9tzdl6 wrote
Reply to [D] To the ML researchers and practitioners here, do you worry about AI safety/alignment of the type Eliezer Yudkowsky describes? by SchmidhuberDidIt
Eliezer Yudkowsky didn't attend High School or College. I'm not confident he understands basic Calculus or Linear Algebra, let alone modern Machine Learning. So yes, I will dismiss his views without seriously engaging with them, for the same reason that any Physics professor will dismiss emails from cranks talking about their "theories."