
alkaway OP t1_j024u31 wrote

Thanks for your response -- this is an interesting idea! Unfortunately, I am actually training my network to predict 1000+ classes, for which such an idea would be computationally intractable...

2

trajo123 t1_j029y2r wrote

Ah, yes, it doesn't really make sense for more than a couple of classes. So if you can't reformulate your problem that way, have you tried any probability calibration on the model outputs? That should make them "more comparable"; I think this is the best you can do with a deep learning model.

But why do you want to rank the outputs per pixel? Wouldn't some per-image aggregate over the channels make more sense?
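(For what it's worth, one common post-hoc calibration method for deep networks is temperature scaling: divide the logits by a scalar T fitted on a held-out set before the softmax. The thread doesn't name a specific method, so this is just one option; the value T=2.0 below is an illustrative assumption, not a fitted one.)

```python
import numpy as np

def softmax(logits, T=1.0):
    # Temperature-scaled softmax: T > 1 softens the distribution
    # (less confident), T < 1 sharpens it. T = 1 is the plain softmax.
    z = np.asarray(logits, dtype=float) / T
    z = z - z.max(axis=-1, keepdims=True)  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

# Hypothetical per-pixel logits for a 4-class example.
logits = np.array([2.0, 0.5, -1.0, 0.1])

p_raw = softmax(logits)          # uncalibrated probabilities
p_cal = softmax(logits, T=2.0)   # temperature-scaled (T assumed fitted on held-out data)
```

The scaling doesn't change the argmax per pixel, only the confidence, so rankings within one channel stay the same while the scores become more comparable across images.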

3

alkaway OP t1_j02owfb wrote

Thanks so much for your response! Are you aware of any calibration methods I could try? Preferably ones which won't take long to implement / incorporate :P

2

trajo123 t1_j031wsx wrote

Perhaps scikit-learn's "Probability calibration" section would be a good place to start. Good luck!
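(A minimal sketch of the scikit-learn route mentioned above, using `CalibratedClassifierCV`; the synthetic data and the choice of isotonic regression are assumptions for illustration, not from the thread.)

```python
from sklearn.calibration import CalibratedClassifierCV
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Synthetic 3-class data standing in for real features.
X, y = make_classification(
    n_samples=400, n_classes=3, n_informative=6, random_state=0
)

# Wrap a base classifier; CalibratedClassifierCV fits the calibrator
# (here isotonic regression) on cross-validation folds.
base = LogisticRegression(max_iter=1000)
calibrated = CalibratedClassifierCV(base, method="isotonic", cv=3)
calibrated.fit(X, y)

# Calibrated class probabilities, one row per sample, normalized over classes.
proba = calibrated.predict_proba(X[:5])
```

Note that for a deep segmentation model you would typically calibrate the model's output probabilities directly (e.g. temperature scaling on a validation set) rather than wrap the network in scikit-learn, but the docs section referenced above explains the underlying methods well.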

2
