valdanylchuk OP t1_j1zb36a wrote

> Who knows, maybe the right symbolic architecture has already been proposed 20-30 years ago and nobody took the effort to put into a modern GPU accelerated codebase.

I also half-expect that in ten years, what current LLMs do on racks of GPUs will fit in a phone chip, because many advances in efficiency come from applying old, simple techniques like Monte Carlo methods and nearest-neighbor search.
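As a purely illustrative sketch of why nearest-neighbor lookup is such a cheap trick (the function and sample grid below are toy stand-ins, not from any real system): precompute a coarse table of results once, then answer queries with a lookup instead of re-running the expensive computation.

```python
import math

def expensive_fn(x: float) -> float:
    # Stand-in for a costly computation (imagine a large model's output).
    return math.sin(x) * math.exp(-0.1 * x)

# Precompute a coarse table of (input, output) samples once, offline.
samples = [(i / 10.0, expensive_fn(i / 10.0)) for i in range(0, 101)]

def nn_approx(x: float) -> float:
    # At query time, return the output of the nearest stored input:
    # a single scan over the table instead of recomputing expensive_fn.
    nearest = min(samples, key=lambda s: abs(s[0] - x))
    return nearest[1]
```

Real systems replace the linear scan with an approximate index, but the trade (memory for compute) is the same.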

10

valdanylchuk OP t1_j1z073m wrote

Very cool! And this paper is from 2019-20, and some of those I listed in my post are from 2018-19. I wonder how many of these turned out to be dead ends, and how far the rest have gone by now. Papers for major conferences are often preprinted in advance, but sometimes DeepMind also comes out with something like AlphaGo or AlphaFold on its own schedule. Maybe some highly advanced Gato 2.0 is just around the corner?

9

valdanylchuk t1_j137hla wrote

From the paper:

>Extension for generation. It is currently non-trivial to use NPM for generation, since it is the encoder-only model. Future work can explore autoregressive generation as done in Patel et al. (2022) or use NPM for editing (Schick et al., 2022; Gao et al., 2022).

So, don't expect to talk to it just yet.
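To make "encoder-only" concrete: a nonparametric model like NPM fills a masked position by retrieving the nearest entry in a corpus index by embedding similarity, rather than generating tokens one by one. A toy sketch of that retrieval step, with hand-made stand-in vectors instead of real encoder output:

```python
import math

def cosine(u, v):
    # Cosine similarity between two dense vectors.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

# A tiny "corpus index": token -> embedding (assumed precomputed by an
# encoder over the reference corpus; these vectors are made up).
corpus_index = {
    "Seattle": [0.9, 0.1, 0.0],
    "Paris":   [0.8, 0.2, 0.1],
    "banana":  [0.0, 0.9, 0.4],
}

def fill_mask(mask_embedding):
    # Fill the [MASK] position with the corpus token whose embedding is
    # closest to the encoder's embedding of the mask; no decoding step.
    return max(corpus_index,
               key=lambda t: cosine(corpus_index[t], mask_embedding))
```

This is why generation is non-trivial: every output must already exist somewhere in the indexed corpus.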

7

valdanylchuk t1_itp5tfx wrote

Don't panic, it all adds up to normality.

There will be revolutionary developments, but there will also be tons of friction and inertia as they start affecting your everyday life.

Maybe adjust for less certainty about things staying as usual in the future, but follow the common wisdom. Continue building your career, but keep in mind that you might have to change it. Keep a lively mind; learn new stuff. Make some savings (as someone suggested, at least enough to live for two years without a job), but don't count on coasting on your investment income forever. And so on.

Most importantly, find simple things to be happy about in the moment: relationships, hobbies, sports, etc. It is good to have some foresight; it is not healthy to "live in the future," especially if that makes you anxious.

3

valdanylchuk OP t1_it7qv6d wrote

I understand the political problem. I think it still makes sense to try for better proposals on the technical side. Who knows, maybe some advanced AI can even balance the interests of multiple groups better. Or analyze the legislative history and political press of countries throughout human history, and find correlations in what types of programs work best in combination under certain conditions. Stuff like that.

−1

valdanylchuk OP t1_it752xr wrote

No, not the only one. There is also the risk of weaponization, resource competition, all sorts of misunderstandings... Sometimes it feels like the risks of AI are better researched than the benefits.

https://en.wikipedia.org/wiki/Existential_risk_from_artificial_general_intelligence

However, there are risks with any technology, starting with fire and metalworking, and they are just something to guard against, not something to stop us from using the technology to our advantage.

In the case of policy making, obviously making AI our God Emperor is not the first step we would jump at. It is about finding some correlations and balancing some equations.

1

valdanylchuk t1_it6h6ef wrote

Just wait until someone builds an AlphaZero of economic planning or politics. Then we will have major societal transformations for sure. Still, I think they will take decades to implement once unlocked, because of the friction of human bureaucracy and real-world logistics.

8

valdanylchuk OP t1_iri3civ wrote

…and prepare a suitable dataset, and train the model. Those are huge parts of the effort.

With big companies teasing stuff like this (AlphaZero, GPT-3, DALL-E, etc.) all the time, I wonder if it is possible for the open community to come up with some modern-day equivalent of GNU/GPL, with a non-profit GPU-time donation fund, to build practical open-source replicas of important projects.

3