Submitted by lmtog t3_10zix8k in MachineLearning
bubudumbdumb t1_j84w7r2 wrote
Reply to comment by lmtog in [D] Transformers for poker bot by lmtog
Correct, but the goal is not to train but to infer. I am not saying it wouldn't work, just that I don't see why the priors of a transformer model would work better than those of RNNs or LSTMs for modeling the reward of each play. Maybe there is something I don't get about poker that maps the game to graphs that can be learned through self-attention.
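To make the prior difference concrete, here is a minimal NumPy sketch (all names and dimensions are hypothetical, not from any actual poker-bot code): both a vanilla RNN and a single self-attention head map a sequence of embedded betting actions to one hidden state per action, which a reward head could sit on top of. The structural difference is that attention connects any two actions in one hop, while the RNN has to carry early information through every sequential update.

```python
import numpy as np

rng = np.random.default_rng(0)
T, d = 5, 8                   # 5 actions in the hand, 8-dim embeddings
X = rng.normal(size=(T, d))   # embedded actions (fold/call/raise, amounts)

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    """One attention head: every action attends to every other action,
    so a long-range dependency (e.g. a pre-flop raise) is one hop away."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    A = softmax(Q @ K.T / np.sqrt(K.shape[-1]))  # (T, T) attention weights
    return A @ V, A

def rnn(X, Wx, Wh):
    """Vanilla RNN: information from early actions must survive
    T sequential tanh updates to influence the final state."""
    h = np.zeros(Wh.shape[0])
    hs = []
    for x in X:
        h = np.tanh(x @ Wx + h @ Wh)
        hs.append(h)
    return np.stack(hs)

Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
attn_out, A = self_attention(X, Wq, Wk, Wv)
rnn_out = rnn(X, rng.normal(size=(d, d)) * 0.1, rng.normal(size=(d, d)) * 0.1)

print(attn_out.shape, rnn_out.shape)  # both produce one state per action
```

Either output could feed the same per-action reward head; the question in the comment is which inductive bias fits poker's structure better.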