Submitted by chaitjo t3_10r31eo in MachineLearning
nombinoms t1_j6x36k6 wrote
Reply to comment by fraktall in [R] On the Expressive Power of Geometric Graph Neural Networks by chaitjo
Well, when you consider that every Transformer is built on self-attention, which can be viewed as message passing over a fully connected graph (i.e. a type of GNN), I'd say they are getting quite a bit of attention (no pun intended).
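To make that concrete, here's a minimal sketch (my own illustration, not code from the linked paper or blog post) of single-head self-attention written as one round of message passing on a fully connected graph: every token is a node, the softmax-normalized attention scores are the edge weights, and the output is a weighted aggregation of neighbor messages. All names and shapes are assumptions for illustration.

```python
# Minimal sketch: single-head self-attention as message passing on a
# complete graph over the tokens. Illustrative only; shapes and names
# are my own assumptions, not the paper's notation.
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)  # subtract max for numerical stability
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def self_attention_as_gnn(H, Wq, Wk, Wv):
    """H: (n_nodes, d) token/node features; Wq, Wk, Wv: (d, d_k) projections."""
    Q, K, V = H @ Wq, H @ Wk, H @ Wv
    # Edge weights of the complete graph: every node attends to every node.
    A = softmax(Q @ K.T / np.sqrt(K.shape[-1]), axis=-1)  # (n, n) attention matrix
    # Aggregation step of message passing: weighted sum of neighbor messages.
    return A @ V

rng = np.random.default_rng(0)
n, d, dk = 5, 8, 8
H = rng.standard_normal((n, d))
Wq, Wk, Wv = (rng.standard_normal((d, dk)) for _ in range(3))
out = self_attention_as_gnn(H, Wq, Wk, Wv)
print(out.shape)  # (5, 8): one updated feature vector per token
```

Masking the attention matrix `A` with a sparse adjacency pattern is exactly what turns this back into a GNN layer on an arbitrary graph, which is the sense in which self-attention is the fully connected special case.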
chaitjo OP t1_j70k5tr wrote
In a sense, yes indeed!
For those who are curious, check out my blog post, Transformers are Graph Neural Networks: https://thegradient.pub/transformers-are-graph-neural-networks/
It explores the connection between Graph Neural Networks and Transformer models for Natural Language Processing, such as GPT and other LLMs. It is now one of the top three most-read articles on The Gradient and is featured in coursework at Cambridge, Stanford, and elsewhere.
fraktall t1_j6z1m35 wrote
Damn, I had no idea, thx, will now go read papers