Submitted by AutoModerator t3_xznpoh in MachineLearning
Narigah t1_isap81a wrote
Hello guys, I'm quite new to Machine Learning but I have a kind of challenge for an academic paper.
My data is a time series, and I have to make predictions about specific positions in it. For example, given an array of 350 floats, there is a pattern to certain positions that I need my model to find, based on their values and the surrounding values. My training examples would consist of the array of floats plus the correct marked positions (e.g. positions 35, 86, 150, 240, 349). It doesn't always need to hit the exact position, but it should get as close as possible.
Do you guys know of anything similar to this that I could study, or do you recommend an approach? I'm stuck on how to define the loss and the precision, since a prediction doesn't need to match the exact labeled position, just be as close to it as possible.
Thanks in advance for any help!
seiqooq t1_it443kx wrote
I think you’re just about there with an answer. Assuming each occurrence is weighted evenly you could approach this a few ways:
- Use binary labeling so that the output vector looks like [0, 0, 0, 1, 0, 0, …, 1] and is of length 350. You can think of this as representing the true goal of finding the exact positions. Then, during optimization, you can determine a threshold or other logic to handle the fuzzy predictions that will inevitably result from training.
- Assign fuzzy labels that scale inversely with the distance from the target point, e.g. [0, 0.1, 0.5, 1, 0.5, 0.1, 0, …]. The same thresholding can be applied here as well.
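A quick sketch of both labeling schemes above, plus the thresholding step (function names, the decay profile, and the 0-indexed example positions are my own illustrative choices, not anything standard):

```python
import numpy as np

def binary_labels(positions, length=350):
    """One-hot style target: 1 at each marked position, 0 elsewhere."""
    y = np.zeros(length, dtype=float)
    y[positions] = 1.0
    return y

def fuzzy_labels(positions, length=350, decay=(1.0, 0.5, 0.1)):
    """Soft targets that fall off with distance from each marked position,
    matching the [0, 0.1, 0.5, 1, 0.5, 0.1, 0] shape from the comment above."""
    y = np.zeros(length, dtype=float)
    for p in positions:
        for d, v in enumerate(decay):
            for q in (p - d, p + d):
                if 0 <= q < length:
                    y[q] = max(y[q], v)  # keep the strongest label if peaks overlap
    return y

def predicted_positions(scores, threshold=0.5):
    """Turn fuzzy model outputs back into discrete positions by thresholding."""
    return np.flatnonzero(scores >= threshold)

positions = [35, 86, 150, 240, 349]  # illustrative 0-indexed marked positions
y_bin = binary_labels(positions)
y_soft = fuzzy_labels(positions)
```

With fuzzy labels you can train against `y_soft` with a plain MSE or BCE loss, and the loss automatically rewards near-misses, which is exactly the "as close as possible" behavior the question asks for.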
Assuming locality is important for classification, I'd also consider using convolutions to extract useful information from neighboring data points.