Aloekine t1_ircx86k wrote

Thanks, this was a helpful post. If I could ask a question:

Leaving aside the point about this being discovered with DRL (which is obviously astounding and cool), I'm trying to get a better sense of how widely applicable these newly found improvements are. Another poster in the thread is much more pessimistic about this being particularly applicable, or the benchmarks being particularly relevant; what's your perspective?


foreheadteeth t1_irdomjk wrote

The matrix multiplication exponent (roughly, the smallest ω such that two n×n matrices can be multiplied in O(n^ω) arithmetic operations) currently sits at ω = 2.3728596, and I don't think this paper changes that. I don't think any of the papers improving ω appeared in Nature, even though those are quite important papers, so I would advise people working in this field to send their work to Nature in the future. That being said, I suspect Nature is not genuinely interested in improvements to ω; the attraction is mainly the "discovered with DRL" angle, as you say.
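For context, improvements to ω come from schemes like Strassen's: multiplying 2×2 block matrices with 7 block products instead of 8 gives an O(n^log2(7)) ≈ O(n^2.807) algorithm, and later improvements refine that idea. Here's a minimal sketch of the recursion (my own illustration, not code from the paper; the `leaf` cutoff for falling back to ordinary multiplication is an arbitrary choice):

```python
import numpy as np

def strassen(A, B, leaf=64):
    """Strassen multiply for square matrices whose size is a power of two.
    Illustrative sketch only: no padding, no tuning, arbitrary leaf size."""
    n = A.shape[0]
    if n <= leaf:
        return A @ B  # fall back to the standard algorithm on small blocks
    h = n // 2
    A11, A12, A21, A22 = A[:h, :h], A[:h, h:], A[h:, :h], A[h:, h:]
    B11, B12, B21, B22 = B[:h, :h], B[:h, h:], B[h:, :h], B[h:, h:]
    # 7 recursive block products instead of 8 -> O(n^log2(7)) ~ O(n^2.807)
    M1 = strassen(A11 + A22, B11 + B22, leaf)
    M2 = strassen(A21 + A22, B11, leaf)
    M3 = strassen(A11, B12 - B22, leaf)
    M4 = strassen(A22, B21 - B11, leaf)
    M5 = strassen(A11 + A12, B22, leaf)
    M6 = strassen(A21 - A11, B11 + B12, leaf)
    M7 = strassen(A12 - A22, B21 + B22, leaf)
    C = np.empty_like(A)
    C[:h, :h] = M1 + M4 - M5 + M7
    C[:h, h:] = M3 + M5
    C[h:, :h] = M2 + M4
    C[h:, h:] = M1 - M2 + M3 + M6
    return C
```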

In Fig 5, they claim speedups of up to 23.9% over standard implementations for matrix sizes N > 8192, and they are 1-4% faster than Strassen. That's not bad!
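If you want to sanity-check that kind of claim yourself, a quick harness along these lines works (it reuses the `strassen` sketch above; the size n, the BLAS backend behind `A @ B`, and pure-Python overhead dominate the result, so don't read these numbers as a reproduction of the paper's measurements):

```python
import time
import numpy as np

rng = np.random.default_rng(0)
n = 1024  # kept small so this runs quickly; the paper's gains appear at N > 8192
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, n))

t0 = time.perf_counter(); C_ref = A @ B; t1 = time.perf_counter()
t2 = time.perf_counter(); C_str = strassen(A, B); t3 = time.perf_counter()

print(f"standard (BLAS): {t1 - t0:.3f} s")
print(f"Strassen sketch: {t3 - t2:.3f} s")
print("max abs deviation:", np.abs(C_ref - C_str).max())  # floating-point drift only
```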

They also mentioned a curious application of their DRL: finding fast multiplication algorithms for certain structured classes of matrices (e.g. skew-symmetric matrices). I don't think they've touched ω in that case either, but if those algorithms are genuinely fast at practical scales, that would be interesting; again, though, I didn't notice any benchmarks at reasonable matrix sizes.
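For what it's worth, the paper's framing makes all of this concrete: a rank-R decomposition of the matrix multiplication tensor, T = Σ_r u_r ⊗ v_r ⊗ w_r, is exactly a multiplication algorithm using R scalar products, and the DRL agent searches for low-rank decompositions. Here's a small check that Strassen's classic rank-7 decomposition reproduces the 2×2 tensor (the factor matrices are the standard Strassen coefficients, written out by hand; not taken from the paper's supplementary code):

```python
import numpy as np

# Matmul tensor for 2x2: T[a, b, c] = 1 iff entry a of A times entry b of B
# contributes to entry c of C, with matrix entries indexed row-major.
n = 2
T = np.zeros((n * n, n * n, n * n), dtype=int)
for i in range(n):
    for j in range(n):
        for k in range(n):
            T[i * n + j, j * n + k, i * n + k] = 1

# Row r of U/V gives the r-th product M_r = (U[r] . vec(A)) * (V[r] . vec(B));
# W[r, c] is the coefficient of M_r in output entry c.
U = np.array([[ 1, 0, 0,  1],   # A11 + A22
              [ 0, 0, 1,  1],   # A21 + A22
              [ 1, 0, 0,  0],   # A11
              [ 0, 0, 0,  1],   # A22
              [ 1, 1, 0,  0],   # A11 + A12
              [-1, 0, 1,  0],   # A21 - A11
              [ 0, 1, 0, -1]])  # A12 - A22
V = np.array([[ 1, 0, 0,  1],   # B11 + B22
              [ 1, 0, 0,  0],   # B11
              [ 0, 1, 0, -1],   # B12 - B22
              [-1, 0, 1,  0],   # B21 - B11
              [ 0, 0, 0,  1],   # B22
              [ 1, 1, 0,  0],   # B11 + B12
              [ 0, 0, 1,  1]])  # B21 + B22
W = np.array([[ 1, 0, 0,  1],   # M1 -> C11, C22
              [ 0, 0, 1, -1],   # M2 -> C21, -C22
              [ 0, 1, 0,  1],   # M3 -> C12, C22
              [ 1, 0, 1,  0],   # M4 -> C11, C21
              [-1, 1, 0,  0],   # M5 -> -C11, C12
              [ 0, 0, 0,  1],   # M6 -> C22
              [ 1, 0, 0,  0]])  # M7 -> C11

# Rank-7 reconstruction: the sum of 7 outer products equals the tensor,
# which is exactly the statement "2x2 matmul needs only 7 multiplications".
assert np.array_equal(np.einsum('ra,rb,rc->abc', U, V, W), T)
print("Strassen's rank-7 decomposition reproduces the 2x2 matmul tensor")
```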

(Edit: I had failed to notice the benchmark in Fig 5).
