elcric_krej OP t1_j4a7zkl wrote
Reply to comment by zaptrem in [D] Mtruk alternatives for extracting information out of text by elcric_krej
Not the one doing the downvoting, but isn't that the same thing?
Validating a sample and having validation samples are equivalent when your problem has a "known solution", or something close to one.
elcric_krej OP t1_j49jb2z wrote
Reply to comment by zaptrem in [D] Mtruk alternatives for extracting information out of text by elcric_krej
Not only have I tried it, this is precisely what I am doing; human verification is the exact use case for a service like mturk :)
Submitted by elcric_krej t3_10augmv in MachineLearning
elcric_krej t1_iw7hss0 wrote
Reply to comment by master3243 in [R] ZerO Initialization: Initializing Neural Networks with only Zeros and Ones by hardmaru
I guess so, but that doesn't scale beyond one team (we did something similar), and arguably you want to test across multiple seeds anyway, in case some init + model combination is just a very odd minimum.
This seems to yield higher uniformity without constraining us on the rng.
But see /u/DrXaos for why not really
elcric_krej t1_ivy6jf4 wrote
This is awesome in that it potentially removes a lot of random variance from the process of training; I think the rest of the benefits are comparatively small and safely ignorable.
I would love it if this were picked up as a standard, it seems like the kind of thing that might get rid of a lot of the worst seed hacking out there.
But I'm an idiot, so I'm curious what well-informed people think about it.
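To make the variance point concrete, here is a minimal numpy sketch contrasting a conventional random init (which varies with the seed) against a deterministic zeros-and-ones init. Note this is a hypothetical identity-style illustration, not the paper's actual Hadamard-based ZerO scheme.

```python
import numpy as np

def random_init(shape, seed):
    # conventional random init: the weights depend on the seed,
    # so every run with a different seed starts from a different point
    rng = np.random.default_rng(seed)
    return rng.normal(0.0, 0.1, size=shape)

def deterministic_init(shape):
    # zeros-and-ones style init (illustrative sketch only, NOT the
    # paper's Hadamard construction): identity-like weight matrix
    w = np.zeros(shape)
    for i in range(min(shape)):
        w[i, i] = 1.0
    return w

# different seeds give different random weights...
a = random_init((4, 4), seed=0)
b = random_init((4, 4), seed=1)

# ...while the deterministic init is identical on every run,
# removing seed choice as a source of variance between experiments
c = deterministic_init((4, 4))
d = deterministic_init((4, 4))
```

The appeal for reproducibility is that `deterministic_init` has no seed at all, so two labs training the same architecture start from byte-identical weights.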
elcric_krej OP t1_j4atwg1 wrote
Reply to comment by zaptrem in [D] Mtruk alternatives for extracting information out of text by elcric_krej
Yes... that's the correct interpretation, hence why I need mturk: to get people to manually label (well, extract) something from my training/validation data.
I'm still rather confused about where the misunderstanding is.