Submitted by lmtog t3_10zix8k in MachineLearning
lmtog OP t1_j84uw2j wrote
Reply to comment by bubudumbdumb in [D] Transformers for poker bot by lmtog
But technically it should be possible to train the model on hands, in the mentioned representation, and get an output that would be a valid poker play?
bubudumbdumb t1_j84w7r2 wrote
Correct, but the goal is not to train but to infer. I'm not saying it wouldn't work, just that I don't see why the priors of a transformer model would work better than RNNs or LSTMs at modeling the rewards of each play. Maybe there is something I don't get about poker that maps the game to graphs that can be learned through self-attention.
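For concreteness, here is a minimal sketch (purely illustrative, not from either commenter) of what "training on hands in the mentioned representation" could look like: a hand history encoded as a sequence of integer tokens, a small Transformer encoder over that sequence, and a classification head over a fixed action set. The vocabulary layout, `HandTransformer` name, and three-action space are all assumptions made for the example.

```python
# Hypothetical sketch: encode a poker hand history as a token sequence and let a
# small Transformer encoder predict an action. Vocabulary, action set, and sizes
# are illustrative assumptions, not anything specified in the thread.
import torch
import torch.nn as nn

ACTIONS = ["fold", "call", "raise"]   # simplified action space
VOCAB_SIZE = 128                      # ids for cards, positions, bet-size buckets, etc.
MAX_LEN = 32                          # max tokens per encoded hand

class HandTransformer(nn.Module):
    def __init__(self, d_model=64, nhead=4, num_layers=2):
        super().__init__()
        self.embed = nn.Embedding(VOCAB_SIZE, d_model)
        self.pos = nn.Embedding(MAX_LEN, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers)
        self.head = nn.Linear(d_model, len(ACTIONS))

    def forward(self, tokens):
        # tokens: (batch, seq_len) integer ids describing the hand so far
        positions = torch.arange(tokens.size(1), device=tokens.device)
        x = self.embed(tokens) + self.pos(positions)
        x = self.encoder(x)
        # pool over the sequence and emit logits over the action space
        return self.head(x.mean(dim=1))

# Toy usage: a batch of two randomly tokenized hands.
model = HandTransformer()
hands = torch.randint(0, VOCAB_SIZE, (2, MAX_LEN))
logits = model(hands)                 # shape (2, 3)
print(logits.argmax(dim=-1))          # predicted action ids
```

Because the head only produces logits over a fixed action vocabulary, the output is always "a valid poker play" in the sense of being one of the known actions; whether it is legal in the current betting context (e.g., a permissible raise size) would still need masking or post-processing on top.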