valdanylchuk
valdanylchuk OP t1_j1z073m wrote
Reply to comment by EducationalCicada in [D] DeepMind has at least half a dozen prototypes for abstract/symbolic reasoning. What are their approaches? by valdanylchuk
Very cool! And this paper is from 2019-20, and some of those I listed in my post are from 2018-19. I wonder how many of these turned out to be dead ends, and how far the rest have gone by now. Papers for major conferences are often preprinted in advance, but sometimes DeepMind also comes out with something like AlphaGo or AlphaFold on its own schedule. Maybe some highly advanced Gato 2.0 is just around the corner?
valdanylchuk t1_j137hla wrote
Reply to comment by Purplekeyboard in [R] Nonparametric Masked Language Modeling - MetaAi 2022 - NPM - 500x fewer parameters than GPT-3 while outperforming it on zero-shot tasks by Singularian2501
From the paper:
>Extension for generation. It is currently non-trivial to use NPM for generation, since it is the encoder-only model. Future work can explore autoregressive generation as done in Patel et al. (2022) or use NPM for editing (Schick et al., 2022; Gao et al., 2022).
So, don't expect to talk to it just yet.
valdanylchuk t1_itp5tfx wrote
Don't panic, it all adds up to normality.
There will be revolutionary developments, but there will also be tons of friction and inertia as they start affecting your everyday life.
Maybe adjust for less certainty about things staying as usual in the future, but follow the common wisdom. Continue building your career, but keep in mind that you might have to change it. Keep a lively mind, learn new stuff. Set aside some savings (as someone suggested, at least enough to live for two years without a job), but don't count on coasting on your investment income forever. And so on.
Most importantly, find simple things to be happy about in the moment. Relationships, hobbies, sports, etc. It is good to have some foresight; it is not healthy to "live in the future", especially if that makes you anxious.
valdanylchuk OP t1_it8w3iz wrote
Reply to comment by Hydreigon92 in [D] Do any major ML research groups focus on policy-making applications? by valdanylchuk
Thank you for the excellent pointers! Exactly the kind I was hoping to find. I have too little background to even ask Google effectively. Maybe I can follow some of the articles there.
valdanylchuk OP t1_it86vex wrote
Reply to comment by idrajitsc in [D] Do any major ML research groups focus on policy-making applications? by valdanylchuk
I agree my response was hand-waving, but GP suggested giving up on any attempt at AI/ML-based optimization proposals a priori, which I think is too strict. If we never attack these problems, we can never win. And different people can try different angles of attack.
valdanylchuk OP t1_it7qv6d wrote
Reply to comment by throwawayP115LG in [D] Do any major ML research groups focus on policy-making applications? by valdanylchuk
I understand the political problem. I think it still makes sense to try for better proposals on the technical side. Who knows, maybe some advanced AI could even balance the interests of multiple groups better. Or analyze the legislative history and political press of all the countries throughout human history, and find some correlations in what types of programs work best in combination under certain conditions. Stuff like that.
valdanylchuk OP t1_it752xr wrote
Reply to comment by rehrev in [D] Do any major ML research groups focus on policy-making applications? by valdanylchuk
No, not the only one. There is also the risk of weaponization, resource competition, all sorts of misunderstandings... Sometimes it feels like the risks of AI are better researched than the benefits.
https://en.wikipedia.org/wiki/Existential_risk_from_artificial_general_intelligence
However, there are risks with any technology, starting with fire and metalworking, and they are just something to guard against, not something to stop us from using the technology to our advantage.
In the case of policy making, obviously making AI our God Emperor is not the first step we would jump at. It is about finding some correlations and balancing some equations.
Submitted by valdanylchuk t3_y9ryrd in MachineLearning
valdanylchuk t1_it6h6ef wrote
Reply to If you believe you can think exponentially, you might be wrong. Transformative AI is here, and it is going to radically change the world before the Singularity, and before AGI. by AdditionalPizza
Just wait until we get some AlphaZero of economic planning or politics. Then we will have major societal transformations for sure. Still, I think they will take decades to implement once unlocked, because of the friction of human bureaucracy and real-world logistics.
valdanylchuk OP t1_iri3civ wrote
Reply to comment by yaosio in [R] Google AudioLM produces amazing quality continuation of voice and piano prompts by valdanylchuk
…and prepare a suitable dataset, and train the model. Those are huge parts of the effort.
With big companies teasing stuff like this (AlphaZero, GPT-3, DALL-E, etc.) all the time, I wonder if it is possible for the open community to come up with some modern-day equivalent of GNU/GPL, with a non-profit GPU time donation fund, to make practical open-source replicas of important projects.
Submitted by valdanylchuk t3_xy3zfe in MachineLearning
valdanylchuk OP t1_j1zb36a wrote
Reply to comment by lorepieri in [D] DeepMind has at least half a dozen prototypes for abstract/symbolic reasoning. What are their approaches? by valdanylchuk
> Who knows, maybe the right symbolic architecture has already been proposed 20-30 years ago and nobody took the effort to put into a modern GPU accelerated codebase.
I also half-expect that in ten years, what current LLMs do on racks of GPUs will fit in a phone chip, because many advances in efficiency come from applying old, simple techniques like Monte Carlo methods and nearest neighbors.