BlazeObsidian

BlazeObsidian t1_j6xbu8f wrote

You can try Kaggle and Google Colab notebooks, but the sessions don't persist for very long. They typically shut down after 6 hours, so you'll have to periodically save your best model/hyperparameters. Still, it might be a viable free option.
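If you go that route, here's a minimal checkpointing sketch, assuming a PyTorch training loop (the `model`, `optimizer`, and file path are placeholders for your own):

```python
import torch

# Save enough state to resume training after the notebook session shuts down.
def save_checkpoint(model, optimizer, epoch, path="checkpoint.pt"):
    torch.save({
        "epoch": epoch,
        "model_state_dict": model.state_dict(),
        "optimizer_state_dict": optimizer.state_dict(),
    }, path)

# Call save_checkpoint(...) periodically inside your training loop, then after
# a restart reload with:
#   ckpt = torch.load("checkpoint.pt")
#   model.load_state_dict(ckpt["model_state_dict"])
#   optimizer.load_state_dict(ckpt["optimizer_state_dict"])
```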

Google Colab also has a paid option where you can upgrade the RAM, GPU, etc., to meet your needs.

But I am curious as to why it's taking 21 hours. Have you checked your course forums/discussions for the expected time?

https://www.kaggle.com/

https://colab.research.google.com/

4

BlazeObsidian t1_j690uxs wrote

A lot of trees require their seeds to undergo stratification before they can germinate.

For example, Japanese maple seeds undergo cold stratification: the seeds fall to the ground and lie dormant there under the snow, and only after this process is done will they germinate.

Note that this is not a hard-and-fast rule, but seeds that don't undergo stratification take longer to germinate and have much lower chances of doing so.

6

BlazeObsidian t1_j4vc61q wrote

That depends on the extent to which the pixel information is misaligned, I think. If cropping your images is not a solution and a large portion of your images has this issue, the model won't be able to generate the right pixel information for the misaligned sections. But it's worth giving Palette a try if the misalignment is not significant.

2

BlazeObsidian t1_j4var74 wrote

Sorry, I was wrong. Modern deep VAEs can match SOTA GAN performance for image super-resolution (https://arxiv.org/abs/2203.09445), but I don't have evidence for recoloring.

But diffusion models have been shown to outperform GANs on multiple image-to-image translation tasks, e.g. https://deepai.org/publication/palette-image-to-image-diffusion-models

You could probably reframe your problem as an image colorization task (https://paperswithcode.com/task/colorization), and the SOTA there is still Palette, linked above.

1

BlazeObsidian t1_iv0hlz1 wrote

Haha. It might be overfitting now. How does it perform on the test set? If accuracy is above 90% on the test set, I would think it's a good model.

If accuracy is bad on the test data, you would have to reduce the Conv layers and see if you can get more data.

Can you post the train vs. test loss and accuracy here?
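If you're using Keras, a rough self-contained sketch like this would produce those curves (the tiny model and random data are placeholders, just to show the logging/plotting pattern; swap in your own model and dataset):

```python
import numpy as np
import matplotlib.pyplot as plt
from tensorflow.keras import layers, models

# Placeholder data and model, only to illustrate the history/plotting pattern.
x = np.random.rand(200, 8).astype("float32")
y = np.random.randint(0, 2, size=(200,))

model = models.Sequential([
    layers.Dense(16, activation="relu", input_shape=(8,)),
    layers.Dense(2, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

history = model.fit(x, y, validation_split=0.2, epochs=10, verbose=0)

plt.plot(history.history["loss"], label="train loss")
plt.plot(history.history["val_loss"], label="test loss")
plt.plot(history.history["accuracy"], label="train acc")
plt.plot(history.history["val_accuracy"], label="test acc")
plt.legend()
plt.show()
```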

1

BlazeObsidian t1_iv027v3 wrote

The general idea is to start with larger shapes and proceed to smaller ones, so the network starts off wide and the tensor sizes gradually shrink.

So maybe you can do:

Conv2D(64) -> MaxPooling -> Conv2D(32) -> MaxPooling -> Conv2D(8) -> MaxPooling -> Dense(relu) [Optional] -> Dense(softmax)
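As a rough Keras sketch of that stack (the input shape, kernel sizes, and class count are placeholders; note a Flatten is needed before the Dense layers):

```python
from tensorflow.keras import layers, models

num_classes = 10  # placeholder: set to your number of classes

model = models.Sequential([
    layers.Conv2D(64, (3, 3), activation="relu", input_shape=(64, 64, 3)),  # placeholder input shape
    layers.MaxPooling2D(),
    layers.Conv2D(32, (3, 3), activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(8, (3, 3), activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),                      # flatten before the Dense layers
    layers.Dense(64, activation="relu"),   # optional
    layers.Dense(num_classes, activation="softmax"),
])
model.summary()
```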

Start with one Conv layer and see if there are any improvements, then gradually add layers. If adding layers isn't giving you much of an improvement in accuracy, I'd recommend checking your data.

Making the network too deep (too many layers) might result in overfitting, so the network architecture/size is also a hyperparameter that has an optimal value for the dataset. Do update us on how it goes.

1

BlazeObsidian t1_iv006oo wrote

The network is a little simple for the task at hand. Consider using Conv layers and a deeper network; it doesn't have to be very large.

Conv -> Conv -> Flatten -> Dense (relu) -> Dense (softmax) might outperform the current network.

As a general approach, CNNs are very effective with image data. Passing the CNN output to Dense layers helps adapt the image understanding to the task at hand.

3

BlazeObsidian t1_ium8sru wrote

I haven’t tried out the performance yet, but it appears PyTorch now supports the Apple silicon processors as a separate device named ‘mps’, similar to CUDA for NVIDIA GPUs. There is also a TensorFlow plugin that can be installed separately to take advantage of the Apple chips.
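A quick sketch of how you'd pick it up in PyTorch (assuming a recent PyTorch build with MPS support):

```python
import torch

# Use the Apple-silicon GPU via the 'mps' device if available, else fall back to CPU.
device = torch.device("mps" if torch.backends.mps.is_available() else "cpu")

x = torch.randn(8, 3, 224, 224, device=device)
print(x.device)  # prints 'mps:0' on an Apple-silicon Mac
```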

47

BlazeObsidian t1_iujbbdu wrote

Are you sure your model is running on the GPU? See https://towardsdatascience.com/pytorch-switching-to-the-gpu-a7c0b21e8a99, or if you can see GPU utilisation, that might be a simpler way to verify.

If you are not explicitly moving your model to the GPU, I think it's running on the CPU. Also, how long is it taking? Do you have a specific time that you compared the performance with?
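A minimal sketch of the check (the tiny Linear model here is just a stand-in for your own):

```python
import torch
import torch.nn as nn

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
print(device)  # 'cuda' means PyTorch can see the GPU at all

model = nn.Linear(10, 2).to(device)   # stand-in for your model
x = torch.randn(4, 10).to(device)     # every input batch has to be moved as well
out = model(x)

print(next(model.parameters()).device)  # should show 'cuda:0' if the model is on the GPU
```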

1

BlazeObsidian t1_is5ljbb wrote

I think the principle is similar, with a caveat. The thyroid doesn't differentiate between radioactive iodine and non-radioactive iodine when it absorbs it (I could be wrong; I assumed that radioisotopes are biologically processed the same way).

On the other hand, ethanol is preferentially metabolised over methanol.

11

BlazeObsidian t1_is4wwan wrote

From what I gather, it's technically potassium iodide (KI), rather than elemental iodine, that is prescribed for radiation exposure.

The CDC's page on this goes into more detail.

Basically, it can only protect the thyroid from injury due to radioactive iodine. It's ineffective when there is no radioactive iodine and doesn't protect other organs.

It works by supplying enough non-radioactive iodine to your thyroid that it can't absorb any of the radioactive iodine you are exposed to.

4