Academic project on molecule design


It is based on deep learning techniques. I am currently working on a CNN model that takes SMILES strings as input. I had to encode them, but the accuracy is always equal to 0. What should I do?
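For illustration only (this is not my exact preprocessing), a character-level integer encoding of SMILES padded to a fixed length of 29, matching the input_shape=(29, 1) in the model below, could look like this:

import numpy as np

# Illustrative sketch: map each SMILES character to an integer index
# and pad every string to a fixed length of 29.
smiles = ["CCO", "c1ccccc1", "CC(=O)O"]  # toy molecules
charset = sorted({ch for s in smiles for ch in s})
char_to_idx = {ch: i + 1 for i, ch in enumerate(charset)}  # 0 is reserved for padding

max_len = 29
X = np.zeros((len(smiles), max_len, 1), dtype=np.float32)
for i, s in enumerate(smiles):
    for j, ch in enumerate(s[:max_len]):
        X[i, j, 0] = char_to_idx[ch]

print(X.shape)  # (3, 29, 1), the shape the Conv1D layer expects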

This is the code I am talking about:

import tensorflow as tf
import keras
from keras.models import Sequential
from keras.layers import Conv1D, MaxPooling1D, Flatten, Dense, Dropout, Activation

print("TensorFlow version:", tf.__version__)
print("Keras version:", keras.__version__)

# create a Sequential model
model = Sequential()

# add a 1D convolutional layer with 32 filters, a kernel size of 3, and a ReLU activation
model.add(Conv1D(filters=29, kernel_size=3, activation='relu', input_shape=(29, 1)))

# add a max pooling layer with a pool size of 2
model.add(MaxPooling1D(pool_size=1))

# add a fully connected layer with 64 units and a ReLU activation
model.add(Dense(units=64, activation='linear'))

# add the output layer with a single unit and a sigmoid activation (for binary classification)
#model.add(Dense(units=1, activation='linear'))
model.add(Dense(1))
model.add(Activation('linear'))
# compile the model with binary crossentropy loss and the Adam optimizer

#model.compile(loss='mape', optimizer= 'rmsprop',metrics=['accuracy'])
model.compile(loss='mape', optimizer='Adam', metrics=['accuracy', percentage_difference])  # percentage_difference is a custom metric defined elsewhere

# train the model with input data and labels
# history = model.fit(X_train ,y_train, epochs=10, batch_size=64, validation_split=0.2,verbose=1)
model.fit(X_train, y_train, batch_size=128, epochs=5, validation_data=(X_val, y_val))

This is the result I got:

TensorFlow version: 2.11.0
Keras version: 2.11.0
Epoch 1/5
10501/10501 [==============================] - 72s 7ms/step - loss: 100.3061 - accuracy: 0.0000e+00 - percentage_difference: 100.3061 - val_loss: 100.0071 - val_accuracy: 0.0000e+00 - val_percentage_difference: 100.0071
Epoch 2/5
10501/10501 [==============================] - 85s 8ms/step - loss: 99.9942 - accuracy: 0.0000e+00 - percentage_difference: 99.9941 - val_loss: 100.4263 - val_accuracy: 0.0000e+00 - val_percentage_difference: 100.4262
Epoch 3/5
10501/10501 [==============================] - 80s 8ms/step - loss: 99.9968 - accuracy: 0.0000e+00 - percentage_difference: 99.9968 - val_loss: 100.1386 - val_accuracy: 0.0000e+00 - val_percentage_difference: 100.1386
Epoch 4/5
10501/10501 [==============================] - 76s 7ms/step - loss: 100.0059 - accuracy: 0.0000e+00 - percentage_difference: 100.0059 - val_loss: 99.9903 - val_accuracy: 0.0000e+00 - val_percentage_difference: 99.9903
Epoch 5/5
10501/10501 [==============================] - 71s 7ms/step - loss: 100.0111 - accuracy: 0.0000e+00 - percentage_difference: 100.0111 - val_loss: 100.0600 - val_accuracy: 0.0000e+00 - val_percentage_difference: 100.0599

There is 1 answer below.


Your comment says you are using cross-entropy, but your code uses MAPE.

Is there a reason for this?
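
For what it's worth, 'accuracy' only makes sense for classification targets; with a regression loss like MAPE and continuous labels, Keras effectively checks predictions for exact equality with the labels, so the metric stays at 0. A rough sketch of the two consistent pairings (assuming model is the network built in the question):

# Option A: binary classification -> sigmoid output, cross-entropy loss, accuracy is meaningful
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])

# Option B: regression -> linear output, MAPE/MAE loss, track error metrics instead of accuracy
model.compile(loss='mape', optimizer='adam', metrics=['mae'])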

Actually, there are quite a few of those conflicts!

e.g.

# add a 1D convolutional layer with 32 filters, a kernel size of 3, and a ReLU activation
model.add(Conv1D(filters=29, kernel_size=3, activation='relu', input_shape=(29, 1)))

and

# add a max pooling layer with a pool size of 2
model.add(MaxPooling1D(pool_size=1))

and

# add a fully connected layer with 64 units and a ReLU activation
model.add(Dense(units=64, activation='linear'))
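
Below is a minimal sketch of a version in which the comments and the code agree, assuming the task really is binary classification on inputs of shape (29, 1); it is meant as a starting point, not a drop-in replacement for your pipeline:

import numpy as np
from keras.models import Sequential
from keras.layers import Conv1D, MaxPooling1D, Flatten, Dense

model = Sequential([
    # 32 filters, kernel size 3, ReLU (now matching the comment)
    Conv1D(filters=32, kernel_size=3, activation='relu', input_shape=(29, 1)),
    # pool size 2 (now matching the comment)
    MaxPooling1D(pool_size=2),
    # flatten (steps, filters) into one vector per sample before the Dense layers
    Flatten(),
    # 64 units with ReLU (now matching the comment)
    Dense(64, activation='relu'),
    # single sigmoid unit for binary classification
    Dense(1, activation='sigmoid'),
])

# binary cross-entropy with accuracy: the pairing where accuracy is meaningful
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])

# tiny random stand-ins, just to show the expected shapes of X_train and y_train
X_demo = np.random.rand(8, 29, 1).astype('float32')
y_demo = np.random.randint(0, 2, size=(8, 1))
model.fit(X_demo, y_demo, epochs=1, batch_size=4, verbose=0)

If the targets are actually continuous (which the MAPE loss suggests), keep a linear output unit, use 'mae' or 'mape' as the loss, and drop 'accuracy' from the metrics, because it will always read 0 for regression targets.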