Submitted by DreamyPen t3_zsbivc in MachineLearning
I have collected experimental data for various conditions. To ensure repeatability, each test is replicated 5 times, meaning the same input yields slightly different outputs due to experimental variability.
If you were to build a machine learning model, would you use all 5 data points for each given test, hoping the algorithm learns to converge towards the mean response? Or is it advisable to pre-compute the means and feed only those to the model (so that each input maps to exactly one output)?
I can see pros and cons to both approaches and would welcome feedback. Thank you.
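For concreteness, the two dataset constructions being compared can be sketched like this — a minimal example with made-up numbers and hypothetical feature tuples, not OP's actual data:

```python
from statistics import mean

# Hypothetical replicated data: each condition (input) measured 5 times.
# Keys are input feature tuples; values are the 5 observed outputs.
experiments = {
    (1.0, 0.5): [9.8, 10.1, 10.0, 9.9, 10.2],
    (2.0, 0.5): [19.5, 20.3, 20.1, 19.9, 20.2],
}

# Option A: keep every replicate as its own training example.
X_all, y_all = [], []
for features, outputs in experiments.items():
    for y in outputs:
        X_all.append(features)
        y_all.append(y)

# Option B: collapse replicates to their mean, one example per condition.
X_mean = list(experiments.keys())
y_mean = [mean(outputs) for outputs in experiments.values()]

print(len(X_all), len(X_mean))  # 10 vs. 2 training examples here
```

Option A gives the model information about the noise level around each condition; Option B throws that away but guarantees a single target per input.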
gBoostedMachinations t1_j17a22i wrote
Don’t put much weight on other people’s intuitions about these kinds of questions. Just test it. Your question is an empirical one, so do the experiment. I can’t tell you how many times I’ve had a colleague say that something I was trying wasn’t going to work, only to see that he was dead wrong when I tested it anyway. Oh man do I love it when that happens.
EDIT: it just occurred to me that validation will be somewhat tricky. Does OP allow (non-overlapping) duplicates to remain in the validation set, or does he average the targets there too? He can’t process the validation set differently for each model, yet one model will clearly be favored depending on which method he picks.
I think the answer depends on how data about future targets will be collected. Is OP going to perform repeated experiments in the future and take repeated measurements of the outcome, or only unique sets of experiments? Whatever the answer, the important thing is for OP to consider the future use-case and process his validation set in a way that most closely mimics that environment (e.g., repeated measurements vs. single measurements).
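One practical caveat if the replicates are kept: a random train/validation split can put some replicates of a condition in train and others in validation, which leaks information and inflates the score. A common fix is a group-aware split that holds out whole conditions. A minimal stdlib sketch (the data and group ids here are invented for illustration):

```python
import random

# Hypothetical dataset: 4 conditions x 5 replicates. Each sample carries
# the id of the condition it came from.
samples = [(cond, rep) for cond in range(4) for rep in range(5)]
groups = [cond for cond, _ in samples]

def group_split(samples, groups, test_fraction=0.25, seed=0):
    """Hold out entire groups so no condition's replicates straddle
    the train/validation boundary."""
    unique = sorted(set(groups))
    rng = random.Random(seed)
    rng.shuffle(unique)
    n_test = max(1, int(len(unique) * test_fraction))
    test_groups = set(unique[:n_test])
    train = [s for s, g in zip(samples, groups) if g not in test_groups]
    test = [s for s, g in zip(samples, groups) if g in test_groups]
    return train, test

train, test = group_split(samples, groups)
```

With scikit-learn, `GroupKFold` does the same job; the point is simply that all replicates of one experiment land on the same side of the split.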
Sorry if this isn’t very clear; I only had a few minutes to type it out.