Submitted by International_Deer27 t3_10qhscf in deeplearning
BlacksmithNo4415 t1_j6tfpkg wrote
Reply to comment by International_Deer27 in Loss function fluctuating by International_Deer27
try using markdown:
plotter = DLPlotter()  # add this line
model = MyModel()
...
total_loss = 0
for epoch in range(5):
    for step, (x, y) in enumerate(loader):
        ...
        output = model(x)
        loss = loss_func(output, y)
        total_loss += loss.item()
...
config = dict(lr=0.001, batch_size=64, ...)
plotter.collect_parameter("exp001", config, total_loss / (5 * len(loader)))  # add this line
plotter.construct()  # add this line
International_Deer27 OP t1_j6usb0i wrote
I'm not sure about DLPlotter; which library is it from? I can't seem to find it. I'm using Python 3
BlacksmithNo4415 t1_j6usty4 wrote
no, that was just example code to show how much more readable code is when you use markdown..
DLPlotter is a library I'm building at the moment.. :)
International_Deer27 OP t1_j6uufti wrote
Ah alright, thanks, I’ll try and see how else I can modify the code and get it working. Good luck with the library!
International_Deer27 OP t1_j6x0tpy wrote
I've simplified my model a lot so that it only takes 2000x1 tensors as input for X, and the prediction is either 0 or 1 as before. I've built it with nn.Sequential using only a few layers to make it easier to follow:
import torch
import torch.nn as nn
from torch.utils.data import Dataset, DataLoader
import torch.nn.functional as F
from sklearn.model_selection import train_test_split
import numpy as np
import matplotlib.pyplot as plt

# Collect the first element of each record into a single array per sample
df_Y_MACE = np.array(df_Y_MACE)
df_X_MACE1 = []
for i in range(len(df_X_MACE)):
    df_X_MACE1.append(df_X_MACE[i][0])
df_X_MACE1 = np.array(df_X_MACE1)

X = torch.from_numpy(df_X_MACE1).float()
Y = torch.from_numpy(df_Y_MACE).float()

# Define the dataset
class ECGDataset(Dataset):
    def __init__(self, data, labels):
        self.data = data
        self.labels = labels

    def __len__(self):
        return len(self.data)

    def __getitem__(self, idx):
        return self.data[idx], self.labels[idx]

# Split the data into training and testing sets
# (note: test_size=0.8 leaves only 20% of the data for training)
train_data, test_data, train_labels, test_labels = train_test_split(X, Y, test_size=0.8)

# Create the dataset and data loader
train_dataset = ECGDataset(train_data, train_labels)
test_dataset = ECGDataset(test_data, test_labels)
train_loader = DataLoader(train_dataset, batch_size=32, shuffle=True)
test_loader = DataLoader(test_dataset, batch_size=32, shuffle=False)

# Define the CNN
class ECGClassifier(nn.Module):
    def __init__(self):
        super(ECGClassifier, self).__init__()
        # length 2000 -> conv(k=50, s=5) -> 391 -> maxpool(k=7, s=2) -> 193 -> linear -> 1
        self.ECG_seq = nn.Sequential(
            nn.Conv1d(1, 32, kernel_size=50, stride=5),
            nn.ReLU(),
            nn.MaxPool1d(7, 2),
            nn.Linear(193, 1),
        )
        self.fc = nn.Linear(32, 1)
        self.sigmoid = nn.Sigmoid()

    def forward(self, x):
        x = x.unsqueeze(1)            # (batch, 2000) -> (batch, 1, 2000)
        out = self.ECG_seq(x)         # (batch, 32, 1)
        out = self.fc(out.view(-1, 32))
        out = self.sigmoid(out)
        return out

# Define the model and move it to the device
device = torch.device('cpu')
model = ECGClassifier()
model = model.to(device)
model = model.float()

# Define the loss function and optimizer
criterion = nn.BCELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=0.001, weight_decay=0.01)

total_loss = []

# Train the model
for epoch in range(5):
    for i, (data, labels) in enumerate(train_loader):
        data, labels = data.to(device), labels.to(device)

        # Forward pass
        outputs = model(data)
        labels = labels.unsqueeze(1)
        loss = criterion(outputs, labels)
        total_loss.append(loss.item())  # .item() detaches the value from the graph

        # Backward and optimize
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

    print('Epoch [{}/{}], Loss: {:.4f}'.format(epoch + 1, 5, loss.item()))
International_Deer27 OP t1_j6x0yff wrote
For this new model the loss looks pretty much the same:
Epoch [1/5], Loss: 0.8073
Epoch [2/5], Loss: 0.8680
Epoch [3/5], Loss: 0.5826
Epoch [4/5], Loss: 0.7626
Epoch [5/5], Loss: 0.6099
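(Side note: that printed value is only the last batch's loss in each epoch, which exaggerates the apparent fluctuation. A minimal sketch of a steadier readout, assuming total_loss collects loss.item() per batch as in the code above:)

num_batches = len(train_loader)
for epoch in range(5):
    # average the per-batch losses collected during this epoch
    epoch_losses = total_loss[epoch * num_batches:(epoch + 1) * num_batches]
    print('Epoch {}: mean loss {:.4f}'.format(epoch + 1, sum(epoch_losses) / len(epoch_losses)))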
BlacksmithNo4415 t1_j6x2xia wrote
I've checked for papers that do exactly what you want.
So, as I assumed, this data is time-sensitive and therefore you need an additional temporal dimension.
The model needs to be more complex in order to solve this problem.
I suggest reading this:
https://bmcmedinformdecismak.biomedcentral.com/articles/10.1186/s12911-021-01736-y
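Something along these lines, for example (a rough sketch of a CNN + LSTM hybrid for 1-D signals; the layer sizes here are made up and are not the paper's exact architecture):

import torch
import torch.nn as nn

class CNNLSTMClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        # same conv front-end shape as your model: (batch, 1, 2000) -> (batch, 32, 193)
        self.conv = nn.Sequential(
            nn.Conv1d(1, 32, kernel_size=50, stride=5),
            nn.ReLU(),
            nn.MaxPool1d(7, 2),
        )
        # treat the 193 conv outputs as a sequence so the LSTM models the temporal dimension
        self.lstm = nn.LSTM(input_size=32, hidden_size=64, batch_first=True)
        self.fc = nn.Linear(64, 1)

    def forward(self, x):
        out = self.conv(x.unsqueeze(1))          # (batch, 32, 193)
        out = out.permute(0, 2, 1)               # (batch, 193, 32): time steps first
        _, (h_n, _) = self.lstm(out)             # h_n: (1, batch, 64)
        return torch.sigmoid(self.fc(h_n[-1]))   # (batch, 1)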
BTW: have you tried grid search for finding the right hyperparameters? (a small sketch of what that could look like is below)
oh and your model does improve..
have you increased the dataset size??
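For illustration, a minimal grid search over learning rate and batch size. The train() helper here is hypothetical; it stands for the training loop above, rebuilt with the given settings and returning the last epoch's mean loss:

import itertools

def train(lr, batch_size):
    # hypothetical: rebuild the DataLoader/optimizer with these settings,
    # run the training loop above, and return the last epoch's mean loss
    ...

grid = {'lr': [1e-2, 1e-3, 1e-4], 'batch_size': [16, 32, 64]}
best_loss, best_config = float('inf'), None
for lr, bs in itertools.product(grid['lr'], grid['batch_size']):
    loss = train(lr=lr, batch_size=bs)
    if loss < best_loss:
        best_loss, best_config = loss, {'lr': lr, 'batch_size': bs}
print('best config:', best_config, '-> loss:', best_loss)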