ThrowThisShitAway10 t1_iso8i7p wrote
Reply to [R] Embedding dates ? by MichelMED10
What seems to matter here is not the dates but rather the amount of time between scans, right?
ThrowThisShitAway10 t1_iso6km8 wrote
Reply to comment by ABCDofDataScience in [D] Simple Questions Thread by AutoModerator
This is a feature of Python, not just PyTorch. We use the super() function because we want our class to inherit the attributes of its parent. For your PyTorch module to work, it has to inherit from the nn.Module class. It's not a big deal.
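If you want to see it concretely, here's a minimal sketch (the model name and layer sizes are just placeholders):

```python
import torch.nn as nn

class MyModel(nn.Module):
    def __init__(self):
        # super().__init__() runs nn.Module's constructor first, so the parent
        # class can set up parameter/submodule registration before we attach
        # our own layers.
        super().__init__()
        self.linear = nn.Linear(10, 1)

    def forward(self, x):
        return self.linear(x)
```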
ThrowThisShitAway10 t1_iso5tdz wrote
Reply to [D] Clustering after instance segmentation by vocdex
Sounds reasonable to me. I just wouldn't run PCA on the image data directly; I would feed the cropped images through a pre-trained ResNet backbone or something and then use PCA/t-SNE on those embeddings.
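Something along these lines (a rough sketch assuming torchvision and scikit-learn are available, and that `crops` is a list of PIL images of your segmented instances):

```python
import torch
import torchvision.models as models
import torchvision.transforms as T
from sklearn.decomposition import PCA

# Pre-trained ResNet-50 with the classifier head replaced by an identity,
# so each crop maps to a 2048-d embedding.
backbone = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()
backbone.eval()

preprocess = T.Compose([
    T.Resize((224, 224)),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

# `crops` is assumed to come from your instance segmentation step.
with torch.no_grad():
    embeddings = torch.stack(
        [backbone(preprocess(c).unsqueeze(0)).squeeze(0) for c in crops]
    )

# Reduce the embeddings before clustering/visualization.
reduced = PCA(n_components=2).fit_transform(embeddings.numpy())
```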
ThrowThisShitAway10 t1_iso59z3 wrote
Reply to [P] I built densify, a data augmentation and visualization tool for point clouds by jsonathan
What's the point? No pun intended.
ThrowThisShitAway10 t1_isdsqd5 wrote
Yes, of course. A lot of compression work is moving towards AI-based methods because they can perform much better.
There is actually an explicit connection between AI and compression: it is believed that compressing text optimally is equivalent to solving AGI. There's even a cash prize for anyone who can make progress in this domain: https://en.wikipedia.org/wiki/Hutter_Prize
ThrowThisShitAway10 t1_isdsaq4 wrote
Reply to [D] Could diffusion models be succesfully trained to reverse distortions other than noise? by zergling103
As others have mentioned, the Cold Diffusion paper demonstrated this.
ThrowThisShitAway10 t1_isds7hu wrote
Reply to comment by whydontigetbetter01 in [D] Simple Questions Thread by AutoModerator
https://developers.google.com/ml-kit/vision/pose-detection
This is exactly what you need.
ThrowThisShitAway10 t1_isbqlfo wrote
Reply to comment by Antique_Appearance62 in [D] Simple Questions Thread by AutoModerator
Could you elaborate more? What are these samples?
ThrowThisShitAway10 t1_is90ox7 wrote
Reply to comment by Lajamerr_Mittesdine in [D] Simple Questions Thread by AutoModerator
There are some papers on this. They usually refer to these commands as a "domain-specific language" (DSL). I know of this paper https://arxiv.org/pdf/2006.08381.pdf, where they define some basic functions to start and then the system attempts to learn higher-order functions while building programs to solve a specified task.
There was an interesting Kaggle competition a few years back by Francois Chollet where competitors had to come up with a method that can generate short programs to solve simple tasks: https://www.kaggle.com/competitions/abstraction-and-reasoning-challenge. It ended up being quite challenging.
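To give a flavor of the DSL idea (a toy example I'm making up here, not something from the paper): a DSL is just a small set of primitive functions, and a "program" is a composition of them.

```python
# Toy DSL over integer lists: a few primitives, with a program expressed
# as a sequence of primitive names applied left to right.
PRIMITIVES = {
    "reverse": lambda xs: xs[::-1],
    "sort": sorted,
    "double_each": lambda xs: [2 * x for x in xs],
}

def run_program(program, xs):
    """Apply each named primitive to the running value."""
    for name in program:
        xs = PRIMITIVES[name](xs)
    return xs

# A program that sorts a list and then doubles every element.
print(run_program(["sort", "double_each"], [3, 1, 2]))  # [2, 4, 6]
```

A program-synthesis system then searches over compositions like this and learns which reusable pieces are worth keeping.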
ThrowThisShitAway10 t1_is5onty wrote
Reply to comment by BAMFmartinFTW in [D] Simple Questions Thread by AutoModerator
I think the data would be rather noisy, and you'd have to collect a lot of it.
It would be nice if you could collect the data from the single sensor in the middle of the cargo as well as the camera data. That way you have a good prior (approximation) for the weight: instead of trying to predict the weight from camera data alone, the model only has to predict the difference between the sensor reading and the true weight.
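A sketch of that residual setup (everything here is a placeholder: the feature size, the head architecture, and the variable names):

```python
import torch
import torch.nn as nn

# Hypothetical pieces: `camera_features` from some image encoder,
# `sensor_weight` from the single load sensor, `true_weight` as the label.
residual_head = nn.Sequential(nn.Linear(512, 128), nn.ReLU(), nn.Linear(128, 1))

def residual_loss(camera_features, sensor_weight, true_weight):
    # The network only learns a correction on top of the sensor prior.
    predicted_residual = residual_head(camera_features).squeeze(-1)
    target_residual = true_weight - sensor_weight
    return nn.functional.mse_loss(predicted_residual, target_residual)

# At inference time: estimated_weight = sensor_weight + predicted_residual
```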
ThrowThisShitAway10 t1_is5n9pc wrote
Reply to comment by liljontz in [D] Simple Questions Thread by AutoModerator
- Have a dataset and a model with trainable weights (neural network)
- input data -> network -> prediction data
- loss = loss function(prediction, truth)
- Perform backpropagation with the loss to update the weights in the neural network. Over time this minimizes the loss and lets the model "learn" from the data and truth values you provide.
The input data could be images of animals and the truth might be a classification on what kind of animal ("dog", "cat", "pig").
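Put together, a minimal PyTorch sketch of that loop (toy random data standing in for real images and labels):

```python
import torch
import torch.nn as nn

# Toy stand-ins: 100 "images" flattened to 3072 features, 3 animal classes.
inputs = torch.randn(100, 3072)
truth = torch.randint(0, 3, (100,))

model = nn.Sequential(nn.Linear(3072, 64), nn.ReLU(), nn.Linear(64, 3))
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

for epoch in range(10):
    prediction = model(inputs)         # input data -> network -> prediction
    loss = loss_fn(prediction, truth)  # loss = loss_function(prediction, truth)
    optimizer.zero_grad()
    loss.backward()                    # backpropagation
    optimizer.step()                   # update the weights
```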
ThrowThisShitAway10 t1_is5jqcy wrote
Reply to [D] Are GAN(s) still relevant as a research topic? or is there any idea regarding research on generative modeling? by aozorahime
People have already picked all the low-hanging fruit for GANs. Right now people are doing the same thing with diffusion models. So as long as you're okay with that, yes, they're still a relevant research topic. You might just have a harder time succeeding.
ThrowThisShitAway10 t1_is0kgkx wrote
Reply to comment by mardabx in [D] Simple Questions Thread by AutoModerator
Could you rephrase your question? Do you mean something like characterizing a physical system by deep learning on input images?
ThrowThisShitAway10 t1_is0ka24 wrote
Reply to comment by Normal_Flan_1269 in [D] Simple Questions Thread by AutoModerator
Yes, lots of statistics departments participate in machine learning. They will have a slightly different approach from CS people, though.
ThrowThisShitAway10 t1_iqvtrz7 wrote
Reply to comment by Imaginary_Carrot4092 in [D] Model not learning data by Imaginary_Carrot4092
Oh... then I'm not sure what you're expecting to learn. There doesn't appear to be much (if any) correlation between your input and output values. If you provide a 0.0 as input to the network, how is it supposed to predict an output? There's no indication whether the value should be 3.0 or 4.0, so it will always just predict around the mean.
This one input feature is pretty useless. The ideal model is just y = 3.5 and doesn't include x at all. If you're able to provide more input features that actually correlate with the output, then you'll get a more useful model.
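A quick way to see the "predict the mean" behaviour with toy numbers (not your actual data):

```python
import numpy as np

# All inputs are identical (0.0); targets are split between 3.0 and 4.0.
y = np.array([3.0, 4.0, 3.0, 4.0])

# For a constant prediction c, the mean squared error is minimized at mean(y).
candidates = np.linspace(2.5, 4.5, 201)
best_mse, best_c = min((np.mean((y - c) ** 2), c) for c in candidates)
print(best_c, best_mse)  # 3.5 0.25 -- the model can only collapse to the mean
```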
ThrowThisShitAway10 t1_iqviiv2 wrote
Reply to [D] Model not learning data by Imaginary_Carrot4092
What loss are you using? It seems to be around 0.1, yet in your image the predictions are clearly worse than 0.1 MAE. I'm guessing there's a bug in your code somewhere.
ThrowThisShitAway10 t1_isyfpmq wrote
Reply to [D] Solving energy minimization problems using neural networks by joeggeli
Isn't this just considered self-supervised learning? In an autoencoder you also have a loss of the form L = G(x, y'). I don't see why it wouldn't work.
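A minimal sketch of the idea, treating the energy function itself as the training loss (the network, the energy G, and the data here are all placeholders):

```python
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(8, 64), nn.Tanh(), nn.Linear(64, 8))

def energy(x, y):
    # Placeholder for the problem-specific energy G(x, y) you want to minimize.
    return ((y - x) ** 2).sum(dim=-1).mean() + 0.1 * (y ** 2).sum(dim=-1).mean()

optimizer = torch.optim.Adam(net.parameters(), lr=1e-3)
x = torch.randn(256, 8)  # stand-in for the problem inputs

for step in range(100):
    y = net(x)           # the network proposes a configuration y for each x
    loss = energy(x, y)  # the energy itself is the (self-supervised) loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```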