Submitted by super_deap t3_11tmpc5 in MachineLearning
royalemate357 t1_jcnjaeo wrote
Reply to comment by mike94025 in [D] PyTorch 2.0 Native Flash Attention 32k Context Window by super_deap
Oh cool, thanks for the clarification. Nice that you folks made it more backend-independent. It would be interesting to try it out on AMD/MPS devices, though I wonder whether those requirements are met on those devices.
mike94025 t1_jcv7ltl wrote
You might look into https://github.com/pytorch/pytorch/pull/95793.
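Since the thread is about PyTorch 2.0's native `scaled_dot_product_attention` working across backends, here is a minimal sketch of calling it on whichever device is available (CUDA, MPS, or CPU). The shapes and `is_causal` flag are illustrative; PyTorch dispatches to a fused kernel (e.g. FlashAttention) when the backend and input constraints allow, and falls back to the math implementation otherwise.

```python
import torch
import torch.nn.functional as F

# Pick whichever device is available; SDPA dispatches per backend
# to a fused kernel (FlashAttention, memory-efficient attention)
# or falls back to the plain math implementation.
if torch.cuda.is_available():
    device = "cuda"
elif torch.backends.mps.is_available():
    device = "mps"
else:
    device = "cpu"

# (batch, num_heads, seq_len, head_dim) -- example sizes only
q = torch.randn(1, 8, 128, 64, device=device)
k = torch.randn(1, 8, 128, 64, device=device)
v = torch.randn(1, 8, 128, 64, device=device)

out = F.scaled_dot_product_attention(q, k, v, is_causal=True)
print(out.shape)  # torch.Size([1, 8, 128, 64])
```

Whether a fused kernel is actually selected on a given device depends on dtype, head dimension, and backend support, which is what the linked PR touches on.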