
hophophop1233 t1_iyb2ytg wrote

What exactly is the current state of AI/ML, and how do I learn it? I need to catch up. I've played around building my own networks in Keras.


ThisIsMyStonerAcount OP t1_iyb4wme wrote

Well, I don't know what level you're at, but I'm assuming undergrad, so I'll keep this high-level:

Well, we have models that have some understanding of text (e.g. GPT-3) and some notion of images (anything since ResNet, or even AlexNet). Mostly in the vague sense that when we feed text or images into these "encoder" networks, they spit out a "representation vector" (i.e., a bunch of hard-to-decipher numbers). We can feed those into "decoder" networks that do sensible things with those vectors (e.g. tell you that this vector is most likely of class "husky" at this and that position, or produce text that is the logical continuation of whatever text prompt you give it). We can train huuuuuuge models like that (billions of parameters to learn, probably on the order of $10^6 to train for the first time). Very recently (the last 1-2 years) we've learned to combine these two modalities (e.g. CLIP). So you can feed in text and get out an image (e.g. Stable Diffusion), or feed in text and an image and get out whatever the text said to extract from the image (e.g. Flamingo).
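Just to make the encoder/decoder idea concrete, here's a tiny NumPy sketch. Everything in it is made up for illustration: random weights stand in for what a real network would learn, and the dimensions and class names are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "encoder": projects a flattened 8x8 "image" down to a
# 4-dimensional representation vector. Random weights here stand
# in for weights a real network would learn from data.
W_enc = rng.standard_normal((4, 64))

def encode(image):
    """Map an input to its representation vector (the 'bunch of numbers')."""
    return np.tanh(W_enc @ image.ravel())

# Toy "decoder": maps that vector to probabilities over 3
# made-up classes (say "husky", "cat", "car").
W_dec = rng.standard_normal((3, 4))

def decode(z):
    scores = W_dec @ z
    exp = np.exp(scores - scores.max())
    return exp / exp.sum()  # softmax -> class probabilities

image = rng.standard_normal((8, 8))
z = encode(image)      # representation vector, shape (4,)
probs = decode(z)      # class probabilities, shape (3,), sums to 1
```

A real encoder is many stacked layers and the weights come from training, but the shape of the computation (input → vector → something useful) is the same.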

That's roughly where we are in terms of the big picture. Currently, we're working on better ways to train these models (e.g. by requiring less, or no, supervised data), figuring out how they scale with input data and compute, getting smaller models out of the big ones, debating whether to call the big ones "foundation" or "pretrained" models, and finding creative ways to use or improve models like Stable Diffusion for other applications such as reading code, plus a bunch of other stuff. No idea what the next big idea will be after that. My hunch is memory (or rediscovering recurrent nets).

Edit: this was extremely Deep-Learning-centric, sorry. There's of course other stuff going on: I don't follow Reinforcement Learning (learning over time from rewards) at all, so no clue about that, though it's arguably important for getting from ML to more general AI. Also, there are currently lots of issues being raised w.r.t. fairness and bias in AI (though I've seen almost no papers on it this year; why is that?). And more and more, people are starting to reason about "causality", i.e., how to go from correlation between things to causation between things... lots of other stuff outside my bubble.
