Submitted by redditnit21 t3_y5qn9h in deeplearning
redditnit21 OP t1_isn2joq wrote
Reply to comment by DrXaos in Testing Accuracy higher than Training Accuracy by redditnit21
I am using a stratified test/train split:

```python
train_df, test_df = model_selection.train_test_split(
    df, test_size=0.2, random_state=42, stratify=df['Class']
)
```
All the classes are equally proportioned except one. I am using a dropout layer in the model during training. Is the dropout layer creating this issue?
DrXaos t1_isn3k9e wrote
It certainly could be dropout. In its usual form in packages, dropout is on during training, stochastically perturbing activations, and off during test.
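This on-during-training, off-during-test behavior can be seen directly. A minimal sketch using PyTorch's `nn.Dropout` (assuming a PyTorch model; the same train/eval switch exists in Keras and other frameworks):

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
drop = nn.Dropout(p=0.5)
x = torch.ones(1000)

drop.train()          # training mode: activations randomly zeroed (and rescaled)
train_out = drop(x)

drop.eval()           # eval mode: dropout is a no-op
eval_out = drop(x)

print((train_out == 0).float().mean().item())  # roughly 0.5 of activations zeroed
print(torch.equal(eval_out, x))                # unchanged at test time
```

Because the surviving activations are rescaled by 1/(1-p) during training, the training-time loss is computed on a perturbed network, while the test-time loss is not, which is one way test metrics can come out better than train metrics.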
Take out dropout, use other regularization, and report directly on your optimized loss function for both train and test: often the NLL, if you're using a conventional softmax + CE loss function, which is the most common choice for multinomial outcomes.
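Reporting the optimized loss on both splits might look like the following sketch. The logits and labels here are random placeholders standing in for real model outputs; `nn.CrossEntropyLoss` combines log-softmax and NLL in one step:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
criterion = nn.CrossEntropyLoss()  # softmax + NLL in one call

# Hypothetical stand-ins for model outputs on each split
train_logits = torch.randn(64, 5)            # 64 samples, 5 classes
train_labels = torch.randint(0, 5, (64,))
test_logits = torch.randn(32, 5)
test_labels = torch.randint(0, 5, (32,))

train_nll = criterion(train_logits, train_labels).item()
test_nll = criterion(test_logits, test_labels).item()
print(f"train NLL: {train_nll:.4f}  test NLL: {test_nll:.4f}")  # lower is better
```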
redditnit21 OP t1_isn465e wrote
Yeah, I am using the conventional softmax + CE loss function, which is the most common for multinomial outcomes. Which regularization method would you suggest, and what's the main reason test accuracy should be lower than train accuracy?
DrXaos t1_isn4fs5 wrote
Top-1 accuracy is a noisy measurement, particularly when it's a binary 0/1 outcome per sample.
A continuous performance statistic is more likely to show the expected behavior of train performance exceeding test. Note that for loss functions, lower is better.
There's lots of regularization possible, but start with L2 regularization (weight decay) and/or limiting the size of your network.