Wednesday, March 3, 2021

How to change batch size while training with the fit() method and image_dataset_from_directory()?

I want to change the batch size during training. I tried the answers from this Stack Overflow answer and ended up writing the following code:

    from tensorflow.keras.callbacks import LearningRateScheduler

    LR = 0.01
    epoch = 1
    batch_size = 100

    for z, batch_size in zip(range(1, 11), range(100, 150, 5)):
        LR = batch_size / ((z + 1) * 1000)
        LR = LR / 3
        print("\n\nepoch {z}, Learning Rate {LR}, Batch Size {batch_size}"
              .format(z=z, LR=LR, batch_size=batch_size))

        # the scheduler callback returns the global LR computed above
        def set_LR(epoch, lr):
            global LR
            return LR

        call = [LearningRateScheduler(set_LR, verbose=1)]

        model.fit(
            train_ds,
            validation_data=val_ds,
            epochs=epoch,
            batch_size=batch_size,
            steps_per_epoch=10,
            callbacks=call
        )
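If the goal is for the new batch size to actually take effect on each pass through the loop, one option (a sketch of my own, not something from the original post) is to rebatch the dataset inside the loop, since train_ds and val_ds are already batched by image_dataset_from_directory() as shown below. unbatch() flattens the dataset back to individual samples and batch(bs) regroups them:

    from tensorflow.keras.callbacks import LearningRateScheduler

    # Sketch only: assumes model, train_ds and val_ds exist as in the question.
    for z, bs in zip(range(1, 11), range(100, 150, 5)):
        lr = bs / ((z + 1) * 1000) / 3
        # unbatch() yields individual samples; batch(bs) regroups them
        # with this iteration's batch size
        rebatched_train = train_ds.unbatch().batch(bs)
        rebatched_val = val_ds.unbatch().batch(bs)
        model.fit(
            rebatched_train,
            validation_data=rebatched_val,
            epochs=1,
            steps_per_epoch=10,
            # bind lr as a default argument so the callback returns
            # this iteration's value
            callbacks=[LearningRateScheduler(lambda epoch, prev, lr=lr: lr,
                                             verbose=1)],
        )

Note that the batch_size argument to fit() is dropped here, since a tf.data.Dataset already carries its own batching.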

Now, this would work only if the batch size were not used somewhere else:

    train_ds = tf.keras.preprocessing.image_dataset_from_directory(
        data_dir,
        validation_split=0.2,
        subset="training",
        seed=123,
        image_size=(img_height, img_width),
        batch_size=batch_size)
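One way around this (again a sketch under my own assumptions, not from the linked posts) is to keep the dataset unbatched at load time, so the batch size can be chosen later. Recent TensorFlow releases accept batch_size=None here, in which case the dataset yields individual samples; on older versions, loading with batch_size=1 and calling .unbatch() achieves the same thing:

    import tensorflow as tf

    # Assumes data_dir, img_height and img_width are defined as in the question.
    unbatched_ds = tf.keras.preprocessing.image_dataset_from_directory(
        data_dir,
        validation_split=0.2,
        subset="training",
        seed=123,
        image_size=(img_height, img_width),
        batch_size=None)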

In image_dataset_from_directory() we have to define a batch size, and once the dataset is batched there, the batch_size argument in the fit() method is irrelevant, as I learned from the following Stack Overflow post.
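Putting the two sketches together, the loop would then batch explicitly on each iteration and omit batch_size from fit() altogether, since the dataset itself now defines it:

    # Continuing the sketch above: unbatched_ds comes from the previous snippet.
    for z, bs in zip(range(1, 11), range(100, 150, 5)):
        model.fit(unbatched_ds.batch(bs), epochs=1, steps_per_epoch=10)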

What do you suggest? How should I solve this problem? Any help is appreciated.

https://stackoverflow.com/questions/66468418/how-to-change-batch-size-while-training-in-fit-method-and-image-dataset-from-d March 04, 2021 at 12:04PM
