
ResponsibilityNo7189 t1_j02dzwf wrote

It's an open problem to get your network probabilities to be calibrated. First you might want to read up on aleatoric vs. epistemic uncertainty: https://towardsdatascience.com/aleatoric-and-epistemic-uncertainty-in-deep-learning-77e5c51f9423

Monte Carlo sampling and training have been used to get a sense of uncertainty.
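For example, one common way to do that Monte Carlo sampling is MC dropout, i.e. keeping dropout active at test time and averaging several stochastic forward passes. A rough sketch (assuming a PyTorch model that already contains dropout layers; `model` and `x` are placeholders):

```python
import torch

def mc_dropout_predict(model, x, n_samples=30):
    model.eval()
    # Re-enable only the dropout layers so each forward pass samples a new mask.
    for m in model.modules():
        if isinstance(m, torch.nn.Dropout):
            m.train()
    with torch.no_grad():
        probs = torch.stack(
            [torch.softmax(model(x), dim=-1) for _ in range(n_samples)]
        )
    # Mean over samples ~ prediction; spread over samples ~ rough uncertainty signal.
    return probs.mean(dim=0), probs.std(dim=0)
```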

Also changing the Softmax temperature to get less confident outputs might "help".
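At its simplest that just means dividing the logits by a temperature T > 1 before the softmax; in practice T is usually fit on a held-out validation set. A minimal sketch:

```python
import torch

def scaled_softmax(logits, temperature=2.0):
    # T > 1 flattens the distribution; T = 1 recovers the ordinary softmax.
    return torch.softmax(logits / temperature, dim=-1)
```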


alkaway OP t1_j02oy3b wrote

Thanks so much for your response! Is temperature scaling the go-to calibration method I should try? Does temperature scaling change the relative ordering of the probabilities?


ResponsibilityNo7189 t1_j02t093 wrote

It does not change the order. It will make the predictions less "stark": instead of 0.99, 0.0001, 0.002, 0.007, you will get something like 0.75, 0.02, 0.04, 0.19 for instance. It is the easiest thing to do, but remember there isn't any "go-to" technique.
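To illustrate (a toy check, with logits made up to roughly match the numbers above): raising the temperature flattens the probabilities but leaves their sorted order intact, since dividing the logits by T is monotonic.

```python
import torch

logits = torch.log(torch.tensor([0.99, 0.0001, 0.002, 0.007]))
for T in (1.0, 3.0):
    p = torch.softmax(logits / T, dim=-1)
    print(T, p.tolist(), torch.argsort(p, descending=True).tolist())
# Higher T gives softer probabilities, but the class ranking is unchanged.
```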
