viertys OP t1_je9mocy wrote

I have an accuracy of 98.50% and a Dice score of around 0.30-0.65 for each image

deep-yearning t1_je9qqrf wrote

Accuracy is not a good metric here given the large number of true negative pixels you will get.
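To make that concrete, here is a minimal sketch of the Dice score for binary masks (assuming NumPy arrays; the ~7% foreground rate below is only an illustration):

```python
import numpy as np

def dice_score(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """Dice coefficient for binary masks: 2*|A intersect B| / (|A| + |B|)."""
    pred, target = pred.astype(bool), target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

# With ~5-10% foreground, an all-background prediction still scores ~0.90-0.95
# accuracy while its Dice is ~0; that is why accuracy is misleading here.
target = np.random.rand(256, 256) < 0.07   # ground truth with ~7% foreground
pred = np.zeros_like(target)               # model that predicts only background
print(f"accuracy: {(pred == target).mean():.3f}  dice: {dice_score(pred, target):.3f}")
```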

How large is the typical region you are trying to segment (in pixels)? If you've already done data augmentation, I would also try generating additional synthetic images if you can. Use a larger batch size, try different optimizers, and add a learning rate scheduler. How many images do not have cavities in them?
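As a rough illustration of the optimizer/scheduler suggestion, an AdamW plus plateau-scheduler setup in PyTorch could look like this; the stand-in model, learning rate, and patience are placeholder choices, not tuned values:

```python
import torch
import torch.nn as nn

model = nn.Conv2d(1, 2, kernel_size=3, padding=1)  # stand-in for your U-Net
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3, weight_decay=1e-4)
# Halve the learning rate when validation Dice stops improving for 10 epochs
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(
    optimizer, mode="max", factor=0.5, patience=10
)

for epoch in range(100):
    # ... your training epoch goes here ...
    val_dice = 0.5  # placeholder: plug in your measured validation Dice
    scheduler.step(val_dice)
```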

viertys OP t1_je9srha wrote

All images have cavities in them and in general the cavities make up 5-10% of the image.

Here is an example: https://imgur.com/a/z0yeH0C The mask on the left is the ground truth and the mask on the right is the predicted one.

I'm currently using Kaggle and I can't use very large batch sizes. My batch size is 4 now. Is there an alternative to Kaggle that you would suggest?

deep-yearning t1_je9te4j wrote

Train locally on your own machine if you have a GPU, or try Google Colab if you don't. Colab offers V100 GPUs, which should fit larger batch sizes.
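If memory stays the bottleneck at batch size 4, gradient accumulation is one way to simulate a larger effective batch without more GPU memory. A minimal sketch, assuming PyTorch; the stand-in model, shapes, and accum_steps below are placeholders:

```python
import torch
import torch.nn as nn

model = nn.Conv2d(1, 2, kernel_size=3, padding=1)   # stand-in for a U-Net
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
accum_steps = 8  # 4 images/step * 8 steps = effective batch size of 32

optimizer.zero_grad()
for step in range(32):
    x = torch.randn(4, 1, 64, 64)                # stand-in image mini-batch
    y = torch.randint(0, 2, (4, 64, 64))         # stand-in integer label masks
    loss = loss_fn(model(x), y) / accum_steps    # scale so gradients average
    loss.backward()                              # grads accumulate across steps
    if (step + 1) % accum_steps == 0:
        optimizer.step()                         # update once per "big" batch
        optimizer.zero_grad()
```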

To be honest, given how limited the dataset is and how small some of the segmentation masks are, I am not sure other architectures will be able to do any better than U-Net.

I would also try nnU-Net, which should give state-of-the-art performance and so will give you a good idea of what's possible with the dataset that you have: https://github.com/MIC-DKFZ/nnUNet

viertys OP t1_je9u6ny wrote

Thank you, I will try nnU-Net too.
