Submitted by Murii_ t3_10l06xg in deeplearning

Hello Guys,

I am writing a thesis at a company about image classification with convolutional neural networks. The images each contain part of a microchip; either a crack is visible, or the chip is okay and there is none. How can I build a CNN with such a small dataset? Is that even possible? I thought about maybe using crack datasets from the internet, applying an image threshold, and training my network with them. But I also read about pre-trained neural networks. Are they maybe an option too?

2

Comments

Practical_Square4577 t1_j5u1yxi wrote

Give it a try with data augmentation. (And don't forget to split your dataset into a train set and a test set.)

For example, flipping and rotating will multiply your number of images by 12.

Creating a black-and-white version will multiply that by an extra factor of 2.

And then you can go further with random crops, random rotations, random colour modifications, random shear, and random scaling.

This will give you a potentially infinite amount of image variation.

You can also use dropout in your network to avoid overfitting.

And on top of that, remember that when working with convolutional neural networks, an image is not a single datapoint. Each pixel (and its attached neighbourhood) is a datapoint, so you potentially have thousands of training samples per image, depending on the receptive field of your CNN.

One thing to be careful about when designing your data augmentation pipeline is to make sure the chip/crack is still visible after cropping, so visually check what you feed into your network.
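The flip/rotate idea above can be sketched in a few lines of numpy. This is a minimal illustration, not a full pipeline: the four right-angle rotations plus a horizontal flip of each yield 8 distinct variants per image (the random crops, shears, etc. mentioned above would multiply this further).

```python
import numpy as np

def dihedral_augment(img):
    """Return the 8 flip/rotation variants of a 2D image array."""
    variants = []
    for k in range(4):                   # 0, 90, 180, 270 degree rotations
        rot = np.rot90(img, k)
        variants.append(rot)
        variants.append(np.fliplr(rot))  # plus a horizontal flip of each
    return variants

# hypothetical 4x4 grayscale patch standing in for a chip image
img = np.arange(16).reshape(4, 4)
augmented = dihedral_augment(img)
```

Libraries such as torchvision or albumentations provide the same transforms (and the random ones) ready-made, so in practice you would compose them there rather than hand-roll this.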

5

ShadowStormDrift t1_j5ufflp wrote

With 100 images all data augmentation is going to give you is an overfit network.

You do not have enough images. Try to get a few thousand; then maybe you'll get results that aren't complete bullshit.

Speak to whoever is funding this. 100 images to solve a non-trivial problem is a joke.

7

AtmarAtma t1_j5unz1i wrote

Is it the case that you have only 100 images with cracks? Or do you have 100 images total, with and without cracks? For a similar problem, microscratch (or scratch) detection, it is quite common to get only a handful of defective wafers while plenty of wafers without that defect class are available.

1

emad_eldeen t1_j5uvdma wrote

One way is to use data augmentation to increase the sample size.

The other way is to use another dataset with more samples that may be available online, treat it as a source domain, and use it to train your CNN model. Then you can use either transfer learning or semi-supervised domain adaptation to adapt the model to your target domain.

2

Internal-Diet-514 t1_j5vbxjl wrote

I would try without data augmentation first. You need a baseline to understand what helps and what doesn't increase performance. If there is a strong signal that differentiates the classes, 100 images may be enough; the amount of data you need is problem-dependent, not one-size-fits-all. As others have said, make sure you're splitting into train and test sets to evaluate performance, and that each has a class distribution similar to the overall population (this matters if you have an imbalanced dataset). Keep the network lightweight if you're not using transfer learning, and build it up from there. At a certain point it will overfit, and that will most likely happen faster the larger your network is.
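The point about keeping class proportions equal across splits is a stratified split. A rough numpy sketch (in practice `sklearn.model_selection.train_test_split` with `stratify=` does this for you); the 80/20 class counts below are hypothetical:

```python
import numpy as np

def stratified_split(labels, test_frac=0.2, seed=0):
    """Split indices so each class keeps the same proportion in train and test."""
    rng = np.random.default_rng(seed)
    train_idx, test_idx = [], []
    for c in np.unique(labels):
        idx = np.flatnonzero(labels == c)   # all samples of this class
        rng.shuffle(idx)
        n_test = max(1, int(round(len(idx) * test_frac)))
        test_idx.extend(idx[:n_test])
        train_idx.extend(idx[n_test:])
    return np.array(train_idx), np.array(test_idx)

# hypothetical imbalanced dataset: 80 "ok" chips, 20 cracked ones
labels = np.array([0] * 80 + [1] * 20)
train, test = stratified_split(labels)
```

With a plain random split on 100 imbalanced images, the test set could easily end up with almost no cracked examples, making the accuracy numbers meaningless.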

2

manojs t1_j5wbyum wrote

With such a small dataset, you should use a pre-existing classification model trained on data most similar to yours (search Hugging Face), and then re-train just the last layer or last couple of layers ("freezing" all the prior layers). This is known as transfer learning. And yes, you can use the data augmentation suggestions, but if you build the entire network from scratch it will be challenging to get good results.
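The freezing step looks like this in PyTorch. The tiny `nn.Sequential` here is a stand-in for whatever pretrained backbone you actually load (e.g. a torchvision ResNet); only the mechanics of freezing are the point:

```python
import torch
import torch.nn as nn

# Stand-in CNN; in practice load a pretrained model instead.
model = nn.Sequential(
    nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(8, 2),               # 2 classes: crack / no crack
)

# Freeze everything...
for p in model.parameters():
    p.requires_grad = False
# ...then unfreeze only the final classification layer.
for p in model[-1].parameters():
    p.requires_grad = True

# Hand the optimizer only the unfrozen parameters.
trainable = [p for p in model.parameters() if p.requires_grad]
optimizer = torch.optim.Adam(trainable, lr=1e-3)
```

With everything but the head frozen, the hundred images only have to fit a handful of parameters instead of the whole network, which is what makes this feasible at all.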

1

chatterbox272 t1_j5wfrwp wrote

You can try. Augment hard, use a pretrained network and freeze everything except the last layer, and don't let anyone actually try and deploy this thing. 100 images is enough to do a small proof-of-concept, but nothing more than that.

1

suflaj t1_j5wgdsj wrote

One thing people haven't mentioned is that you could create synthetic images via 3D modelling. If you can get someone to set up realistic 3D models of those microchips and then randomly generate cracks, you can get a pretty good baseline model that you can then fine-tune on real data.

There are companies that could do that too, but I'm not sure the price would be approachable, or whether outsourcing is a viable solution given trade secrets. Datagen, for example, is a company that can do it.
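Even without 3D rendering, a crude 2D version of the idea is easy to prototype: paint random jagged dark lines onto clean chip patches to get unlimited labelled "cracked" samples. A toy sketch (a random walk standing in for a crack; real crack morphology would need something far more faithful):

```python
import numpy as np

def add_synthetic_crack(img, seed=0):
    """Draw a random dark jagged line on a copy of a grayscale image."""
    rng = np.random.default_rng(seed)
    out = img.copy()
    h, w = out.shape
    x, y = int(rng.integers(0, w)), int(rng.integers(0, h))
    for _ in range(max(h, w)):
        out[y, x] = 0                               # crack pixels are dark
        x = int(np.clip(x + rng.integers(-1, 2), 0, w - 1))
        y = int(np.clip(y + rng.integers(-1, 2), 0, h - 1))
    return out

clean = np.full((32, 32), 200, dtype=np.uint8)      # hypothetical clean chip patch
cracked = add_synthetic_crack(clean)
```

A model pretrained on such synthetic defects and then fine-tuned on the 100 real images is the same pretrain-then-adapt recipe suggested elsewhere in this thread.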

1

Tall_Help_1925 t1_j5xl2i4 wrote

It depends on how similar the images are and on the amount of defect data, i.e. images containing cracks. You could rephrase it as an image segmentation problem and use a U-Net (without attention) as the model. Due to the limited receptive fields of the individual "neurons", the dataset will effectively be much larger. If the input data is already aligned, you could also try using the difference in feature space, i.e. calculate the difference between the activations of a pretrained network for non-defective images and the current image. I'd suggest using cosine distance.
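The feature-space comparison at the end can be sketched as follows. The random arrays stand in for activations you would actually extract from a pretrained backbone for a reference (defect-free) image and a test image; a large cosine distance would flag a possible defect:

```python
import numpy as np

def cosine_distance(a, b):
    """1 - cosine similarity between two flattened feature maps."""
    a, b = a.ravel(), b.ravel()
    sim = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return 1.0 - sim

rng = np.random.default_rng(0)
ref = rng.normal(size=(64, 8, 8))        # stand-in: defect-free reference activations
cur = ref + 0.01 * rng.normal(size=ref.shape)  # stand-in: near-identical test image
d = cosine_distance(ref, cur)            # small distance -> probably no defect
```

Thresholding this distance turns the task into anomaly detection, which sidesteps the need for many crack examples: only defect-free images are required to build the reference.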

1