LurkAroundLurkAround t1_iyee9j1 wrote
Reply to [D] Choose a topic from neural networks by Mikesblum
I think Batch Normalization is a great topic.
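For anyone who picks it up, here is a minimal NumPy sketch of the core computation (training-time batch statistics only; the running averages used at inference are omitted):

```python
import numpy as np

def batch_norm(x, gamma, beta, eps=1e-5):
    """Normalize a batch x of shape (N, features) per feature, then
    rescale with the learned gamma/beta parameters."""
    mu = x.mean(axis=0)                    # per-feature batch mean
    var = x.var(axis=0)                    # per-feature batch variance
    x_hat = (x - mu) / np.sqrt(var + eps)  # zero mean, unit variance
    return gamma * x_hat + beta
```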
LurkAroundLurkAround t1_ixf68yn wrote
Reply to comment by Amortize_Me_Daddy in [R] Human-level play in the game of Diplomacy by combining language models with strategic reasoning — Meta AI by hughbzhang
AlphaGo was beating the best; this, according to the post, is a top 10% player, which most likely means barely inside the top 10% (top 9.x%). The comparison pool also includes players with more than one game, while the agent played 40 games, so just by admitting a bunch of two-game players they pad their stats. A fair comparison would have been to take players with at least 40 games, sample 40 games per player at random, compute the score, and then check the agent's performance against this subgroup (sketched below).
Not to take anything away from the team, but given how the results are framed, my instinct is that this is a bit oversold.
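A minimal sketch of that fairer comparison, assuming a hypothetical `games` mapping from player name to a list of per-game scores:

```python
import random

def resampled_scores(games, n_games=40, seed=0):
    """Restrict to players with >= n_games games, then score each on a
    random sample of n_games games so everyone is judged on equal footing."""
    rng = random.Random(seed)
    scores = {}
    for player, results in games.items():
        if len(results) < n_games:
            continue  # drop players with fewer than n_games games
        sample = rng.sample(results, n_games)
        scores[player] = sum(sample) / n_games
    return scores

def percentile_of(agent_score, scores):
    """Fraction of qualifying players the agent outscores, as a percentile."""
    below = sum(1 for s in scores.values() if s < agent_score)
    return 100.0 * below / max(len(scores), 1)
```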
LurkAroundLurkAround t1_ivup7pc wrote
Reply to [Discussion] Can we train with multiple sources of data, some very reliable, others less so? by DreamyPen
By far the easiest thing to do is to feed in the data source as a feature. This should allow the model to generalize across datasets as much as possible, while accounting for different inherent properties of the data.
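A minimal sketch of what I mean, assuming tabular data in pandas; the file names, column names, and choice of model are hypothetical:

```python
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor

reliable = pd.read_csv("reliable.csv")  # hypothetical file names
noisy = pd.read_csv("noisy.csv")
reliable["source"] = 0  # 0 = trusted measurements
noisy["source"] = 1     # 1 = less reliable measurements

data = pd.concat([reliable, noisy], ignore_index=True)
X = data.drop(columns=["target"])  # "source" stays in as a feature
y = data["target"]

model = GradientBoostingRegressor().fit(X, y)
```

The model is then free to learn a source-specific offset or interaction where the datasets disagree, while pooling statistical strength where they agree.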
LurkAroundLurkAround t1_ivliqj0 wrote
Can you look at long views? They are generally much more informative. Or look at comment interactions as well.
LurkAroundLurkAround t1_iyeewd8 wrote
Reply to [D] CPU - which one to choose? by krzaki_
Generally, I would suggest looking at the number of cores and the cache size as two important numbers for training ML models.
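A quick way to check both numbers on the machine you have (the sysfs cache path is Linux-specific and may vary), plus where the core count typically gets used:

```python
import os

n_cores = os.cpu_count()
print(f"logical cores: {n_cores}")

# L3 cache size via sysfs (Linux only; index3 may not exist on every CPU)
cache_path = "/sys/devices/system/cpu/cpu0/cache/index3/size"
try:
    with open(cache_path) as f:
        print(f"L3 cache: {f.read().strip()}")
except FileNotFoundError:
    print("no L3 cache entry at", cache_path)

# The core count then feeds directly into parallelism knobs, e.g.
# DataLoader(dataset, num_workers=n_cores - 1) or n_jobs=n_cores in sklearn.
```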