Submitted by gokulPRO t3_11pyvb3 in deeplearning

I've recently seen Continual Learning (CL) gaining momentum, with several recent papers that have considerable potential to impact real-world applications. Which topics (CV, RL, NLP, CL, ...) do you expect to receive the most research attention? And which do you think still need a breakthrough and would then have a significant real-world impact, the way LLMs have recently? Feel free to mention your current topic of work and why you chose it 😊

12

Comments


Philpax t1_jc0jcie wrote

The usual answer to this is "multimodal" and I think that's still true, especially with recent advances. We'll see in the next few months :)

9

N0bb1 t1_jc12rsa wrote

Symbolic AI

2

saintshing t1_jc26rk0 wrote

I feel like it should be possible to extend diffusion transformer techniques to code generation for web development.

You could input a screenshot of a static webpage and then use a text prompt like "Change the style to fit a futuristic theme", or just input a low-fidelity UI wireframe, and it would generate a detailed webpage with the HTML and CSS. Training data could be scraped from the internet for self-supervised learning.

Also retrieval transformers, and models that know how to query APIs and databases or prompt other models.
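
A toy sketch of the retrieval half, with TF-IDF standing in for a learned retriever (the documents and query here are made up):

```python
# Toy retrieval step: fetch the most relevant documents for a query,
# then prepend them to a prompt for a downstream LLM.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

docs = [
    "GET /users/{id} returns the user profile as JSON.",
    "POST /orders creates a new order; requires an auth token.",
    "The users table has columns id, name, email, created_at.",
]

vectorizer = TfidfVectorizer()
doc_vecs = vectorizer.fit_transform(docs)

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k documents most similar to the query."""
    q_vec = vectorizer.transform([query])
    scores = cosine_similarity(q_vec, doc_vecs)[0]
    top = scores.argsort()[::-1][:k]
    return [docs[i] for i in top]

context = "\n".join(retrieve("How do I fetch a user's profile?"))
prompt = f"Context:\n{context}\n\nAnswer the question using the context above."
print(prompt)
```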

2

errgaming t1_jc4an9h wrote

Graph Neural Networks. My primary research area is currently GNNs, and I believe they are very, very underrated.

2

qphyml t1_jc5y6ij wrote

Yes! Transforming regular data structures into graphs for analysis is really powerful by itself. And then you get to unleash the power of GNNs on top of that!
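
The conversion itself can be simple, e.g. a k-NN graph over plain tabular rows (a minimal sketch; the data is a random placeholder):

```python
# Turn ordinary tabular rows into a graph: each row becomes a node,
# connected to its k nearest neighbours in feature space.
import numpy as np
from sklearn.neighbors import kneighbors_graph

X = np.random.rand(100, 16)  # 100 samples, 16 features (placeholder data)

# Sparse adjacency matrix: A[i, j] = 1 if j is among i's 5 nearest neighbours.
A = kneighbors_graph(X, n_neighbors=5, mode="connectivity")

# Edge list in the (2, num_edges) format most GNN libraries expect.
edge_index = np.vstack(A.nonzero())
print(edge_index.shape)  # (2, 500)
```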

1

errgaming t1_jc734jn wrote

Thing is, a lot of these problems are still solvable by tree-based (non-DL) models if you design your data smartly. But GNNs have so much potential to capture information, especially when you use attention over context to pick out exactly the patterns you're looking for.
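
For reference, roughly what a single graph-attention layer computes, sketched in plain PyTorch with a dense adjacency for readability (single head, no edge features):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GraphAttentionLayer(nn.Module):
    """Single-head GAT-style layer: each node aggregates its neighbours,
    weighted by learned attention scores (Velickovic et al., 2018)."""

    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.W = nn.Linear(in_dim, out_dim, bias=False)
        self.a = nn.Linear(2 * out_dim, 1, bias=False)

    def forward(self, x, adj):
        # x: (N, in_dim) node features; adj: (N, N) 0/1 adjacency (incl. self-loops)
        h = self.W(x)                                     # (N, out_dim)
        N = h.size(0)
        # Attention logits from every concatenated (node i, node j) pair.
        pairs = torch.cat(
            [h.unsqueeze(1).expand(N, N, -1), h.unsqueeze(0).expand(N, N, -1)],
            dim=-1,
        )                                                 # (N, N, 2*out_dim)
        e = F.leaky_relu(self.a(pairs).squeeze(-1))       # (N, N)
        e = e.masked_fill(adj == 0, float("-inf"))        # attend only to neighbours
        alpha = torch.softmax(e, dim=-1)                  # (N, N)
        return alpha @ h                                  # (N, out_dim)

# Tiny usage example: 4 nodes in a ring graph with self-loops.
x = torch.randn(4, 8)
adj = torch.eye(4) + torch.roll(torch.eye(4), 1, dims=0) + torch.roll(torch.eye(4), -1, dims=0)
out = GraphAttentionLayer(8, 16)(x, adj)
print(out.shape)  # torch.Size([4, 16])
```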

2

atm_vestibule t1_jc0shl5 wrote

Which continual learning papers are you referring to?

1

ats678 t1_jc4lem7 wrote

In the same fashion as LLMs, I think Large Vision Models and their multimodal intersections with LLMs are the next big thing.

Apart from that, I think techniques such as model quantisation and model distillation are going to become extremely relevant in the short term. If the trend of making models ever larger keeps up at this pace, it will be necessary to find ways to run them without a ridiculous amount of resources. In particular, I can see people pre-training large multimodal models and then distilling them for specific tasks.
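
For instance, a bare-bones Hinton-style distillation loss (the teacher/student networks here are placeholders; the temperature and loss weighting are the usual knobs):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      T: float = 2.0, alpha: float = 0.5):
    """Blend the hard-label loss with a KL term that matches the
    teacher's temperature-softened output distribution."""
    hard = F.cross_entropy(student_logits, labels)
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)  # T^2 rescales the soft gradients back to the hard-loss scale
    return alpha * hard + (1 - alpha) * soft

# Placeholder models: a big "pre-trained" teacher distilled into a small student.
teacher = nn.Sequential(nn.Linear(32, 256), nn.ReLU(), nn.Linear(256, 10)).eval()
student = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 10))

x = torch.randn(8, 32)
labels = torch.randint(0, 10, (8,))
with torch.no_grad():
    t_logits = teacher(x)  # teacher is frozen; only the student trains
loss = distillation_loss(student(x), t_logits, labels)
loss.backward()
```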

1

incrapnito t1_jca42rj wrote

Transfer learning. With the availability of these large models, there will be strong demand for adapting them to smaller, use-case-specific datasets.
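
The standard recipe is short; a minimal sketch using a torchvision ResNet as a stand-in (the 5-class head and learning rate are arbitrary):

```python
import torch
import torch.nn as nn
import torchvision

# Start from a model pre-trained on a large dataset
# (string weight names work in recent torchvision versions).
model = torchvision.models.resnet18(weights="IMAGENET1K_V1")

# Freeze the backbone so only the new head is trained at first.
for p in model.parameters():
    p.requires_grad = False

# Swap in a head sized for the small, use-case-specific dataset.
model.fc = nn.Linear(model.fc.in_features, 5)  # e.g. 5 target classes

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
# Train as usual on the small dataset; optionally unfreeze later
# backbone layers once the head has converged.
```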

1