I am trying to build an LSTM model to predict whether a stock will go up or down the next day. It is a simple binary classification task, but it has had me stuck for a couple of days now. I am feeding only 3 selected features into my network. Below is my pre-processing:
# pre-processing, last column has values of either 1 or 0
len(df.columns)  # 32 columns
index_ = len(df.columns) - 1
x = df.iloc[:, :index_]
y = df.iloc[:, index_:].values.astype(int)
Removing any NaN values:

def clean_dataset(df):
    assert isinstance(df, pd.DataFrame), "df needs to be a pd.DataFrame"
    df.dropna(inplace=True)
    indices_to_keep = ~df.isin([np.nan, np.inf, -np.inf, 'NaN', 'nan']).any(axis=1)
    return df[indices_to_keep].astype(np.float64)

df = clean_dataset(df)
Then I take the 3 selected features and show the shapes of x and y:

selected_features = ['feature1', 'feature2', 'feature3']
x = x[selected_features].values.astype(float)
# x.shape: (44930, 3)
# y.shape: (44930, 1)
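(As a quick sanity check, not part of the original pipeline: judging by the sample shown further down, the three features live on very different scales, roughly 0.8, 49, and 1.2e8. A minimal sketch to print each feature's range:)

# Hypothetical diagnostic: inspect the value range of each selected feature
for name, col in zip(selected_features, x.T):
    print(f"{name}: min={col.min():.4g}, max={col.max():.4g}, mean={col.mean():.4g}")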
Then I split my dataset 80/20:

x_train, x_test, y_train, y_test = train_test_split(x, y,
                                                    test_size=0.20,
                                                    random_state=98)
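(A side note, not from the original post: train_test_split shuffles by default, so consecutive trading days end up in both sets. For time-ordered data, a chronological split is often preferred; a sketch:)

# Hypothetical alternative: keep samples in chronological order
x_train, x_test, y_train, y_test = train_test_split(
    x, y, test_size=0.20, shuffle=False)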
Here I reshape my data:

x_train = x_train.reshape(x_train.shape[0], x_train.shape[1], 1)
x_test = x_test.reshape(x_test.shape[0], x_test.shape[1], 1)
y_train = y_train.reshape(-1, 1)
y_test = y_test.reshape(-1, 1)
Here is the new shape of each one:

x_train.shape = (35944, 3, 1)
x_test.shape  = (8986, 3, 1)
y_train.shape = (35944, 1)
y_test.shape  = (8986, 1)
First sample of the x_train set before reshaping:

x_train[0] => array([8.05977145e-01, 4.92200000e+01, 1.23157152e+08])

First sample of the x_train set after reshaping:
x_train[0] => array([[8.05977145e-01],
                     [4.92200000e+01],
                     [1.23157152e+08]])
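(For reference, the same reshape can be written without the explicit shape arithmetic; this is an equivalent sketch, not code from the original post:)

# Equivalent to x.reshape(x.shape[0], x.shape[1], 1): append a trailing axis
x_train = x_train[..., np.newaxis]  # (35944, 3) -> (35944, 3, 1)
x_test = x_test[..., np.newaxis]    # (8986, 3)  -> (8986, 3, 1)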
Making sure there are no NaN values in my training set, both x_train and y_train:

for main_index, xx in enumerate(x_train):
    for i, val in enumerate(xx):  # renamed from `y` to avoid shadowing the labels
        if type(x_train[main_index][i][0]) != np.float64:
            print("Something wrong here:", main_index, i)
else:
    print("done")

# printed "done" once, got nothing wrong
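(An equivalent vectorized check, shown here only as a sketch, would be:)

# One-liner equivalents of the loop above: isfinite rejects both NaN and inf
assert np.isfinite(x_train).all(), "x_train contains NaN or inf"
assert np.isfinite(y_train).all(), "y_train contains NaN or inf"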
Finally, I train the LSTM:

def build_nn():
    model = Sequential()
    model.add(Bidirectional(LSTM(32, return_sequences=True,
                                 input_shape=(x_train.shape[1], 1),
                                 name="one")))
    # alternative tried: input_shape = (None, *x_train.shape)
    model.add(Dropout(0.20))
    model.add(Bidirectional(LSTM(32, return_sequences=False, name="three")))
    model.add(Dropout(0.10))
    model.add(Dense(32, activation='relu'))
    model.add(Dropout(0.10))
    model.add(Dense(1, activation='sigmoid'))
    opt = Adam(lr=0.0001, beta_1=0.9, beta_2=0.999, decay=0.01)
    model.compile(loss='binary_crossentropy', optimizer=opt, metrics=['accuracy'])
    return model

filepath = "bilstmv1.h5"
chkp = ModelCheckpoint(monitor='val_accuracy', mode='auto', filepath=filepath,
                       verbose=1, save_best_only=True)

model = build_nn()
model.fit(x_train, y_train,
          epochs=15,
          batch_size=32,
          validation_split=0.1,
          callbacks=[chkp])
Here is the CNN:

model.add(Conv1D(256, 3, input_shape=(x_train.shape[1], 1),
                 activation='relu', padding="same"))
model.add(BatchNormalization())
model.add(Dropout(0.15))
model.add(Conv1D(128, 3, activation='relu', padding="same"))
model.add(BatchNormalization())
model.add(Dropout(0.15))
model.add(Dense(32, activation='relu'))
model.add(Dropout(0.15))
model.add(Dense(1))
model.add(Activation("sigmoid"))

# opt = Adam(lr=0.01, beta_1=0.9, beta_2=0.999, decay=0.01)
# opt = SGD(lr=0.01)
model.compile(loss='binary_crossentropy', optimizer='adamax', metrics=['accuracy'])
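(One quick way to verify what each layer outputs, not shown in the original post, is to print the model summary after compiling; with no Flatten or pooling layer, the Conv1D stack keeps a (timesteps, channels) shaped output, so this is worth double-checking:)

# Inspect per-layer output shapes and parameter counts
model.summary()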
All seems good until I start training: both val_loss and val_accuracy do NOT change during training.

Epoch 1/15
1011/1011 [==============================] - 18s 10ms/step - loss: 0.6803 - accuracy: 0.5849 - val_loss: 0.6800 - val_accuracy: 0.5803
Epoch 00001: val_accuracy improved from -inf to 0.58025, saving model to bilstmv1.h5
Epoch 2/15
1011/1011 [==============================] - 9s 9ms/step - loss: 0.6782 - accuracy: 0.5877 - val_loss: 0.6799 - val_accuracy: 0.5803
Epoch 00002: val_accuracy did not improve from 0.58025
Epoch 3/15
1011/1011 [==============================] - 9s 8ms/step - loss: 0.6793 - accuracy: 0.5844 - val_loss: 0.6799 - val_accuracy: 0.5803
Epoch 00003: val_accuracy did not improve from 0.58025
Epoch 4/15
1011/1011 [==============================] - 9s 9ms/step - loss: 0.6784 - accuracy: 0.5861 - val_loss: 0.6799 - val_accuracy: 0.5803
Epoch 00004: val_accuracy did not improve from 0.58025
Epoch 5/15
1011/1011 [==============================] - 9s 9ms/step - loss: 0.6796 - accuracy: 0.5841 - val_loss: 0.6799 - val_accuracy: 0.5803
Epoch 00005: val_accuracy did not improve from 0.58025
Epoch 6/15
1011/1011 [==============================] - 8s 8ms/step - loss: 0.6792 - accuracy: 0.5842 - val_loss: 0.6798 - val_accuracy: 0.5803
Epoch 00006: val_accuracy did not improve from 0.58025
Epoch 7/15
1011/1011 [==============================] - 8s 8ms/step - loss: 0.6779 - accuracy: 0.5883 - val_loss: 0.6798 - val_accuracy: 0.5803
Epoch 00007: val_accuracy did not improve from 0.58025
Epoch 8/15
1011/1011 [==============================] - 8s 8ms/step - loss: 0.6797 - accuracy: 0.5830 - val_loss: 0.6798 - val_accuracy: 0.5803
Epoch 00008: val_accuracy did not improve from 0.58025

I have tried changing every single thing I could find, and nothing has worked. I am sure there are no NaN values in my data, since I removed them in the pre-processing steps. I ran the CNN to check whether the problem was specific to the LSTM, and got the same behavior (neither value changes). I also tried different optimizers; nothing changed. Any help is really appreciated.
Note: the model predicts the same value for every sample in the test set (x_test), which explains why val_accuracy is not changing.
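(To make that symptom concrete, a small diagnostic sketch, not from the original post, could compare the predictions against the class balance:)

# If the sigmoid output is (nearly) constant, the model is predicting the
# majority class everywhere and accuracy will sit at the base rate.
preds = model.predict(x_test)
print("unique predictions:", np.unique(preds.round(4)))

# Compare with the label distribution; a ~0.58 majority share would match
# the val_accuracy plateau of 0.5803 seen above.
labels, counts = np.unique(y_test, return_counts=True)
print("label balance:", dict(zip(labels.tolist(), (counts / counts.sum()).round(4))))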
Source: https://stackoverflow.com/questions/66719167/keras-val-loss-val-accuracy-are-not-changing (March 20, 2021)