I have built a simple text-classification model using DistilBERT. The problem is that I cannot figure out how to do cross-validation during training. My implementation is below.
Can anyone help me implement cross-validation for this setup?
Thank you in advance.
#Split into Train-Test-Validation
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.10, random_state = 0)
X_val, X_test, y_val, y_test = train_test_split(X_test,y_test, test_size=0.10, random_state=42)
#Encoding text for train data
train_encoded = tokenizer(X_train, truncation=True, padding=True, return_tensors="tf")
train_data = tf.data.Dataset.from_tensor_slices((dict(train_encoded), y_train))
#Encoding text for validation data
val_encoded = tokenizer(X_val, truncation=True, padding=True, return_tensors="tf")
val_data = tf.data.Dataset.from_tensor_slices((dict(val_encoded), y_val))
#Encoding text for testing data
test_encoded = tokenizer(X_test, truncation=True, padding=True, return_tensors="tf")
test_data = tf.data.Dataset.from_tensor_slices((dict(test_encoded), y_test))
#Load distil bert model
model = TFDistilBertForSequenceClassification.from_pretrained('distilbert-base-uncased', num_labels=2)
optimizer = tf.keras.optimizers.Adam(learning_rate=5e-5)
model.compile(optimizer=optimizer, loss=model.compute_loss, metrics=['accuracy'])
model.fit(train_data.batch(16), validation_data=val_data.batch(16), epochs=10)
I suggest K-fold cross-validation as your evaluation strategy.
Alternatively, you can wrap your model in a scikit-learn-compatible interface and then use cross-validation and the dozens of other utilities scikit-learn offers.