
foreheadteeth t1_irdomjk wrote

The matrix multiplication exponent currently sits at ω = 2.3728596, and I don't think this paper changes that. None of the papers improving ω appeared in Nature, even though those are quite important results, so I would advise people working in this field to send their work to Nature in the future. That said, I suspect Nature isn't genuinely interested in improvements to ω; the attraction for Nature is probably mainly the "discovered with DRL" angle, as you say.
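For readers wondering where numbers like ω come from: a recursive scheme that multiplies two n×n matrices using r scalar multiplications implies ω ≤ log_n(r). A quick sketch of that arithmetic (the helper name is mine, not from the paper):

```python
import math

def exponent_bound(n, r):
    """Exponent bound omega <= log_n(r) implied by a recursive algorithm
    that multiplies two n x n matrices with r scalar multiplications."""
    return math.log(r, n)

# Naive 2x2 multiplication uses 8 products -> the trivial bound omega <= 3.
# Strassen's 7 products give omega <= log2(7) ~ 2.807.
print(exponent_bound(2, 8))
print(exponent_bound(2, 7))
```

The best-known bound of 2.3728596 comes from much more elaborate constructions, not from a single small base case like this.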

In Fig. 5, they claim speedups over standard implementations for matrix sizes N > 8192, up to 23.9%. They are also 1-4% faster than Strassen. That's not bad!
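For context on the Strassen baseline they beat: Strassen replaces the 8 block products of naive 2×2 block multiplication with 7, applied recursively. A minimal, unoptimized sketch for power-of-two sizes (illustrative only, not the paper's implementation):

```python
# Minimal Strassen sketch for square matrices of power-of-two size.
def add(A, B):
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def sub(A, B):
    return [[a - b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def naive(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def strassen(A, B):
    n = len(A)
    if n == 1:
        return [[A[0][0] * B[0][0]]]
    h = n // 2
    # Split both operands into quadrants.
    A11 = [r[:h] for r in A[:h]]; A12 = [r[h:] for r in A[:h]]
    A21 = [r[:h] for r in A[h:]]; A22 = [r[h:] for r in A[h:]]
    B11 = [r[:h] for r in B[:h]]; B12 = [r[h:] for r in B[:h]]
    B21 = [r[:h] for r in B[h:]]; B22 = [r[h:] for r in B[h:]]
    # 7 recursive products instead of the naive 8.
    M1 = strassen(add(A11, A22), add(B11, B22))
    M2 = strassen(add(A21, A22), B11)
    M3 = strassen(A11, sub(B12, B22))
    M4 = strassen(A22, sub(B21, B11))
    M5 = strassen(add(A11, A12), B22)
    M6 = strassen(sub(A21, A11), add(B11, B12))
    M7 = strassen(sub(A12, A22), add(B21, B22))
    # Recombine the quadrants of the result.
    C11 = add(sub(add(M1, M4), M5), M7)
    C12 = add(M3, M5)
    C21 = add(M2, M4)
    C22 = add(sub(add(M1, M3), M2), M6)
    top = [r1 + r2 for r1, r2 in zip(C11, C12)]
    bot = [r1 + r2 for r1, r2 in zip(C21, C22)]
    return top + bot
```

In practice the 1-4% edge over Strassen comes from tuned implementations on real hardware, where the extra additions and memory traffic matter as much as the multiplication count.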

They also mention a curious application of their DRL: finding fast multiplication algorithms for certain classes of matrices (e.g. skew-symmetric matrices). I don't think they've touched ω in that case either, but if those algorithms are genuinely fast at practical scales, that could be interesting; again, though, I didn't notice any benchmarks at reasonable matrix sizes.
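The structure being exploited there: a skew-symmetric matrix satisfies Aᵀ = −A, so it is fully determined by its n(n−1)/2 strictly-upper-triangular entries, and a generic multiplication algorithm wastes work on the redundancy. A small illustration of that structure (the helper is mine, for exposition only):

```python
def skew_from_upper(upper, n):
    """Build an n x n skew-symmetric matrix (A^T = -A, zero diagonal)
    from its n*(n-1)//2 strictly-upper-triangular entries, row by row."""
    A = [[0] * n for _ in range(n)]
    it = iter(upper)
    for i in range(n):
        for j in range(i + 1, n):
            v = next(it)
            A[i][j] = v
            A[j][i] = -v  # mirror entry carries the opposite sign
    return A

A = skew_from_upper([1, 2, 3], 3)
# A has zero diagonal and A[j][i] == -A[i][j] for all i, j.
```

An algorithm specialized to this class can, in principle, trade on that redundancy; whether the discovered ones win at practical sizes is exactly the open question above.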

(Edit: I had failed to notice the benchmark in Fig 5).
