BlazeObsidian
BlazeObsidian t1_j6xbu8f wrote
Reply to [D] PC takes a long time to execute code, possibility to use a cloud/external device? by Emergency-Dig-5262
You can try Kaggle notebooks and Google Colab notebooks, but sessions don't persist for long; they typically shut down after 6 hours. You'll have to periodically save your best model/hyperparameters, but that might be a viable free option.
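A minimal sketch of that periodic-saving idea in PyTorch (the tiny linear model, filename, and epoch number are placeholder assumptions, not anything from the original thread):

```python
import torch
import torch.nn as nn

# Hypothetical tiny model standing in for whatever is being trained.
model = nn.Linear(4, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

# Save everything needed to resume after the notebook session shuts down.
checkpoint = {
    "epoch": 5,
    "model_state": model.state_dict(),
    "optimizer_state": optimizer.state_dict(),
}
torch.save(checkpoint, "checkpoint.pt")

# In the next session, reload and continue from where you left off.
restored = torch.load("checkpoint.pt")
model.load_state_dict(restored["model_state"])
optimizer.load_state_dict(restored["optimizer_state"])
start_epoch = restored["epoch"] + 1
```

Saving the optimizer state along with the weights matters if you resume mid-training, since optimizers like Adam carry running statistics.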
Google Colab also has a paid option where you can upgrade the RAM, GPU, etc., to meet your needs.
But I am curious as to why it's taking 21 hours. Have you checked your course forums/discussions for the expected time?
BlazeObsidian t1_j6e5z67 wrote
Reply to A tiny, moving point of light. Copies of the Photographs use for the discovery of Pluto. Credit: Lowell Observatory. January 1930 by Aeromarine_eng
I'm curious now. Are these the actual sizes of the photographs? If so, that's amazing attention to detail.
BlazeObsidian t1_j690uxs wrote
Reply to Are there any species of plant that require seasonal temperature drops as part of their life-cycle? by I3P
A lot of trees require their seeds to undergo stratification before they can germinate.
For example, Japanese maple seeds undergo cold stratification: the seeds fall to the ground and lie dormant there under the snow. Only after this process is complete will they germinate.
Note that this is not a hard and fast rule, but seeds that don't undergo stratification take longer and have a much lower chance of germinating.
BlazeObsidian t1_j4vc61q wrote
Reply to comment by kingdroopa in [D] Suggestion for approaching img-to-img? by kingdroopa
That depends on the extent of the pixel misalignment, I think. If cropping your images is not a solution and a large portion of your images have this issue, the model won't be able to generate the right pixel information for the misaligned sections. But it's worth trying Palette if the misalignment is not significant.
BlazeObsidian t1_j4var74 wrote
Reply to comment by kingdroopa in [D] Suggestion for approaching img-to-img? by kingdroopa
Sorry, I was wrong. Modern deep VAEs can match SOTA GAN performance on image super-resolution (https://arxiv.org/abs/2203.09445), but I don't have evidence for recoloring.
However, diffusion models have been shown to outperform GANs on multiple img-to-img translation tasks, e.g. https://deepai.org/publication/palette-image-to-image-diffusion-models
You could probably reframe your problem as an image colorization task (https://paperswithcode.com/task/colorization), and the SOTA there is still Palette, linked above.
BlazeObsidian t1_j4v495i wrote
Reply to [D] Suggestion for approaching img-to-img? by kingdroopa
Autoencoders like VAEs should work better than most other models for image-to-image translation. Maybe you can try different VAE models and compare their performance.
Edit: I was wrong.
BlazeObsidian t1_iv0hlz1 wrote
Reply to comment by mikef0x in [R] Keras image classification high loss by mikef0x
Haha. It might be overfitting now. How does it perform on the test set? If test accuracy is above 90%, I'd consider it a good model. If test accuracy is bad, you may have to reduce the Conv layers and see if you can get more data.
Can you post the train vs. test loss and accuracy here?
BlazeObsidian t1_iv027v3 wrote
Reply to comment by mikef0x in [R] Keras image classification high loss by mikef0x
The general idea is to start with larger shapes and proceed to smaller ones, so the network starts off wide and the size of the tensors gradually reduces.
So maybe you can do:
Conv2D(64) -> MaxPooling -> Conv2D(32) -> MaxPooling -> Conv2D(8) -> MaxPooling -> Dense(relu) [Optional] -> Dense(softmax)
Start with one Conv layer first and see if there are any improvements, then gradually add layers. If adding layers isn't giving you much of an improvement in accuracy, I'd recommend checking your data.
Making the network too deep (too many layers) might result in overfitting, so the network architecture/size is also a hyperparameter with an optimal value for the dataset. Do update on how it goes.
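The suggested wide-to-narrow stack can be sketched in Keras as below; the input shape, kernel sizes, and 10-class softmax head are placeholder assumptions, since the original dataset details aren't in the thread:

```python
from tensorflow.keras import layers, models

# Wide-to-narrow Conv stack: 64 -> 32 -> 8 filters, each followed by pooling.
model = models.Sequential([
    layers.Input(shape=(64, 64, 3)),          # assumed input size
    layers.Conv2D(64, 3, activation="relu", padding="same"),
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation="relu", padding="same"),
    layers.MaxPooling2D(),
    layers.Conv2D(8, 3, activation="relu", padding="same"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),      # the optional Dense(relu)
    layers.Dense(10, activation="softmax"),   # assumed 10 classes
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
```

Swapping the class count and input shape for your dataset's actual values is all that should need to change.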
BlazeObsidian t1_iv006oo wrote
Reply to [R] Keras image classification high loss by mikef0x
The network is a little simple for the task at hand. Consider using Conv layers and a deeper network. It doesn't have to be very large.
Conv -> Conv -> Flatten -> Dense (relu) -> Dense (softmax) might outperform the current network.
As a general approach, CNNs are very effective with image data. Passing the CNN output to Dense layers helps adapt the image features to the task at hand.
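A minimal Keras version of that Conv -> Conv -> Flatten -> Dense stack; the 28x28 grayscale input and 10-class head are placeholder assumptions for illustration:

```python
from tensorflow.keras import layers, models

# Two Conv layers extract spatial features, then Flatten hands them to
# Dense layers for classification.
model = models.Sequential([
    layers.Input(shape=(28, 28, 1)),        # assumed input size
    layers.Conv2D(32, 3, activation="relu"),
    layers.Conv2D(16, 3, activation="relu"),
    layers.Flatten(),
    layers.Dense(32, activation="relu"),
    layers.Dense(10, activation="softmax"), # assumed 10 classes
])
```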
BlazeObsidian t1_iunmgfn wrote
Reply to comment by papinek in [D] Machine learning prototyping on Apple silicon? by laprika0
Hmm, might give it a try. I usually use Colab. If there isn't much of a difference during inference, local is better.
BlazeObsidian t1_iumubsj wrote
Reply to comment by papinek in [D] Machine learning prototyping on Apple silicon? by laprika0
Did you run into memory issues? I assumed it wouldn't work with only 8 GB of unified memory.
BlazeObsidian t1_ium8sru wrote
I haven’t tried out the performance yet, but it appears PyTorch now supports Apple silicon processors as a separate device named ‘mps’, similar to ‘cuda’ for Nvidia GPUs. There is also a TensorFlow plugin that can be installed separately to take advantage of the Apple chips.
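Selecting the ‘mps’ device mirrors the usual ‘cuda’ pattern; a small sketch with a CPU fallback so it also runs on non-Apple hardware:

```python
import torch

# Use the Apple-silicon backend when available, otherwise fall back to CPU.
device = torch.device("mps" if torch.backends.mps.is_available() else "cpu")

# Tensors created on that device keep their operations there.
x = torch.randn(8, 4, device=device)
y = (x * 2).sum()
```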
BlazeObsidian t1_iujbbdu wrote
Reply to comment by alexnasla in [D] When the GPU is NOT the bottleneck...? by alexnasla
Are you sure your model is running on the GPU? See https://towardsdatascience.com/pytorch-switching-to-the-gpu-a7c0b21e8a99, or if you can see GPU utilisation it might be simpler to verify.
If you are not explicitly moving your model to the GPU, I think it's running on the CPU. Also, how long is it taking? Do you have a specific time that you compared the performance against?
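A quick way to check where a model's weights actually live; the tiny linear model here is just a stand-in for the real one:

```python
import torch
import torch.nn as nn

model = nn.Linear(16, 4)
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Without this call the model's parameters stay on the CPU,
# no matter what hardware is present.
model = model.to(device)

# Inspect any parameter's device to confirm the move worked.
param_device = next(model.parameters()).device
```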
BlazeObsidian t1_iu332zx wrote
Reply to comment by azuth89 in If Robert waldow became so tall because of an excess of hgh, then why does taking hgh not make you taller? by Komoasks
Wait, can you elaborate on putting on bone density in response to stress? I've never heard of that one before.
BlazeObsidian t1_is6ggpz wrote
Reply to comment by regular_modern_girl in Why do people take iodine pills for radiation exposure? by Furrypocketpussy
Thank you for confirming. Interesting to know about the hydrogen isotopes, though. I knew that H2O and D2O differ quite a bit in chemical properties, but I never knew how much that is magnified at the biochemical scale.
BlazeObsidian t1_is5ljbb wrote
Reply to comment by NecessarySpare4930 in Why do people take iodine pills for radiation exposure? by Furrypocketpussy
I think the principle is similar, with a caveat. The thyroid doesn't differentiate between radioactive iodine and the non-radioactive kind when it absorbs it (I could be wrong; I assumed that radioisotopes are processed the same biologically).
On the other hand, there is a higher preference for ethanol to be metabolised rather than methanol.
BlazeObsidian t1_is4wwan wrote
From what I gather, it's technically potassium iodide (KI) rather than elemental iodine that is prescribed for radiation exposure.
The CDC's page on this goes into more detail.
Basically, it can only protect the Thyroid from injury due to radioactive Iodine. It's ineffective when there is no radioactive Iodine and doesn't protect other organs.
The way it works is by supplying enough non-radioactive Iodine to your Thyroid that it can't absorb any of the radioactive Iodine that you are exposed to.
BlazeObsidian t1_j8vl4hj wrote
Reply to [R] Looking for papers which are modified variational autoencoder (VAE) by Sandy_dude
Not sure if it matches your requirements, but look into VQ-VAE, which is basically a vector-quantised VAE: https://ml.berkeley.edu/blog/posts/vq-vae/
Some more ideas are explored in detail here: https://lilianweng.github.io/posts/2018-08-12-vae/
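The quantisation step at the heart of a VQ-VAE can be illustrated with a toy NumPy sketch; the codebook size, latent dimension, and random data here are made up purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
codebook = rng.normal(size=(8, 4))     # 8 learnable code vectors of dim 4
encoder_out = rng.normal(size=(5, 4))  # 5 encoder outputs to quantise

# Squared distance from every encoder output to every codebook entry.
dists = ((encoder_out[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)

# Each latent vector is snapped to its nearest codebook entry.
codes = dists.argmin(axis=1)
quantised = codebook[codes]
```

In a real VQ-VAE the codebook is learned, and a straight-through estimator passes gradients around the non-differentiable argmin.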