
arcxtriy OP t1_j24b90c wrote

But then the probabilities for a sample aren't guaranteed to sum up to 1. That seems strange, right?!

1

HateRedditCantQuitit t1_j24cg3y wrote

If you train a model on a dataset of dogs and cats, then show it a picture of a horse, do you want p(dog)+p(cat) = 1?

3

arcxtriy OP t1_j24yx4p wrote

If p(dog)=p(cat)=0.5 then it's fine, because it tells me the classifier is uncertain. Doesn't it?

1

HateRedditCantQuitit t1_j24zv0q wrote

You’re still implicitly saying that you’re 100% certain it’s either a cat or a dog, which is wrong. If a horse picture gets p(cat)=1e-5 and p(dog)=1e-7, that should also be fine, right? But if you normalize those so that p(cat) + p(dog) = 1, you end up with basically p(cat)=1. Testing for (approximately) p(cat) = p(dog) when it could be neither is a messy way to go about calibration.

It’s just a long way of saying that having the probabilities not sum to one is fine.
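To make that concrete, here's a minimal sketch of the normalization trap, using the hypothetical numbers from above (not from any real model):

```python
import numpy as np

# Hypothetical per-class probabilities for a horse image, produced by
# independent sigmoid outputs (so they don't need to sum to 1).
p = np.array([1e-5, 1e-7])  # [p(cat), p(dog)]

print(p.sum())     # ~1e-5: the model is signaling "probably neither class"

# Renormalizing to force the sum to 1 destroys that signal:
p_norm = p / p.sum()
print(p_norm)      # ~[0.990, 0.010]: now it looks like a confident "cat"
```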

4

ObjectManagerManager t1_j27i9n5 wrote

Actually, you're completely right. SOTA in open-set recognition is still max logit / max softmax, which is to say the maximum softmax probability remains a useful measure of certainty.
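For anyone curious, "max softmax" here means thresholding the maximum softmax probability (the MSP baseline of Hendrycks & Gimpel). A minimal sketch with made-up logits:

```python
import numpy as np

def max_softmax_score(logits):
    """Maximum softmax probability (MSP): a common open-set score.
    Low values suggest the input may belong to none of the known classes."""
    z = logits - logits.max()            # shift for numerical stability
    p = np.exp(z) / np.exp(z).sum()
    return p.max()

# Made-up logits: one confidently in-distribution input, one ambiguous one.
print(max_softmax_score(np.array([8.0, -3.0])))  # ~1.00 -> accept as known class
print(max_softmax_score(np.array([0.1, 0.0])))   # ~0.52 -> flag as unknown
```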

1