abc220022 t1_j9t681p wrote
Reply to comment by Additional-Escape498 in [D] To the ML researchers and practitioners here, do you worry about AI safety/alignment of the type Eliezer Yudkowsky describes? by SchmidhuberDidIt
The shorter-term problems you mention are important, and I think it would be great for technical and policy-minded people to try to alleviate such threats. But it's also important for people to work on the potential longer-term problems associated with AGI.
OpenAI, and organizations like it, are racing toward AGI - it's literally in OpenAI's mission statement. The current slope of ML progress is incredibly steep: seemingly every week, some major ML lab demonstrates an impressive new capability built from only minor tweaks to the underlying transformer paradigm. The longer this continues - the more impressive these capabilities look, and the longer scaling curves extend with no clear ceiling - the more likely it seems that AGI will arrive soon, say within the next few decades. And if we do succeed at making AI as capable as us, or more capable, then all bets are off.
None of this is a certainty. One of Yudkowsky's biggest flaws, imo, is the certainty with which he makes claims backed by little rigorous argument. But given recent discoveries, the probability of a dangerous long-term outcome is high enough that I'm glad we have people working on a solution to this problem, and I hope more people will join in.
abc220022 t1_j2zm46w wrote
Reply to [Discussion] If ML is based on data generated by humans, can it truly outperform humans? by groman434
Once you have a model that performs as well as a human in some domain, you can then use reinforcement learning to push it beyond human-level performance. Of course, doing this is easier in some domains than in others.
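As a toy sketch of that imitate-then-improve recipe (the bandit task, the "human" policy, and all the numbers below are made up purely for illustration), you can start from a behavior-cloned policy and then improve it with a plain REINFORCE update:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy bandit: 5 actions with hidden expected rewards. The "human" policy
# is decent but not optimal; pure imitation would just copy it.
true_reward = np.array([0.1, 0.5, 0.3, 0.9, 0.2])
human_policy = np.array([0.10, 0.40, 0.20, 0.20, 0.10])

def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()

logits = np.log(human_policy)  # behavior-cloned starting point
baseline, lr = 0.0, 0.2

for step in range(5000):
    probs = softmax(logits)
    a = rng.choice(5, p=probs)
    r = true_reward[a] + rng.normal(0, 0.1)  # noisy scalar reward
    baseline += 0.05 * (r - baseline)        # running-average baseline
    # REINFORCE: raise log-prob of actions that beat the baseline
    grad = -probs
    grad[a] += 1.0
    logits += lr * (r - baseline) * grad

print("human expected reward:   ", human_policy @ true_reward)    # 0.47
print("RL-tuned expected reward:", softmax(logits) @ true_reward) # ~0.9
```

The same shape - imitate first, then optimize against a reward signal - is roughly what RLHF-style fine-tuning does at a much larger scale.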
abc220022 t1_j2hbqqi wrote
Reply to [D] Is there any research into using neural networks to discover classical algorithms? by currentscurrents
This Twitter thread and the attached post discuss, among other things, training a neural network to solve modular addition and then reverse-engineering the algorithm the network learned - which was, imo, an unexpected one: https://twitter.com/neelnanda5/status/1559060507524403200?lang=en
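For a sense of how unexpected that algorithm is, here's a minimal numpy sketch of the trig-identity ("Fourier") approach the thread describes - note the modulus and the particular frequencies below are my own illustrative choices, not the exact ones the trained network uses:

```python
import numpy as np

p = 113            # prime modulus (a common choice in the grokking setup)
freqs = [1, 4, 9]  # a few frequencies; the real network uses its own sparse set

def mod_add_fourier(a, b):
    """Compute (a + b) % p using only cos/sin features of a and b."""
    c = np.arange(p)
    logits = np.zeros(p)
    for k in freqs:
        w = 2 * np.pi * k / p
        # Angle-addition formulas build cos/sin of w*(a+b) from features of a, b
        cos_ab = np.cos(w * a) * np.cos(w * b) - np.sin(w * a) * np.sin(w * b)
        sin_ab = np.sin(w * a) * np.cos(w * b) + np.cos(w * a) * np.sin(w * b)
        # cos(w * (a + b - c)) peaks exactly at c == (a + b) mod p
        logits += cos_ab * np.cos(w * c) + sin_ab * np.sin(w * c)
    return int(np.argmax(logits))

assert all(mod_add_fourier(a, b) == (a + b) % p
           for a in range(0, p, 7) for b in range(0, p, 11))
```

The punchline of the thread is that the trained network really does implement something like this - composing rotations via trig identities - rather than anything resembling grade-school carrying.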
abc220022 t1_jdzrsbu wrote
Reply to comment by rfxap in [N] OpenAI may have benchmarked GPT-4's coding ability on its own training data by Balance-
Part of LeetCode's sales pitch is that you're working on problems used in real coding interviews at tech companies. I believe most LeetCode problems existed well before they were published on the LeetCode website, so they could still appear in some form in GPT-4's training data.