Issue with BERT preprocessor model in TF2 and Python


I am trying to use BERT for a text classification project. However, I keep running into this error:

ValueError                                Traceback (most recent call last)
Cell In[37], line 4
      2 text_input = tf.keras.Input(shape=(), dtype=tf.string, name='text')
      3 bert_preprocess = hub.KerasLayer(preprocess_url, name='preprocessing')
----> 4 preprocessed_text = bert_preprocess(text_input)
      5 bert_encoder = hub.KerasLayer(encoder_url, 
      6                               trainable=True, 
      7                               name='BERT_encoder')
      8 outputs = bert_encoder(preprocessed_text)
ValueError: Exception encountered when calling layer 'preprocessing' (type KerasLayer).
A KerasTensor is symbolic: it's a placeholder for a shape and a dtype. It doesn't have any actual numerical value. You cannot convert it to a NumPy array.

Call arguments received by layer 'preprocessing' (type KerasLayer):
  • inputs=<KerasTensor shape=(None,), dtype=string, sparse=None, name=text>
  • training=None

A KerasTensor is symbolic: it's a placeholder for a shape and a dtype. It doesn't have any actual numerical value. You cannot convert it to a NumPy array.


when building this model:


preprocess_url = 'https://www.kaggle.com/models/tensorflow/bert/frameworks/TensorFlow2/variations/en-uncased-preprocess/versions/3'
encoder_url = 'https://www.kaggle.com/models/tensorflow/bert/frameworks/TensorFlow2/variations/bert-en-uncased-l-12-h-768-a-12/versions/2'

# Bert Layers
text_input = tf.keras.Input(shape=(), dtype=tf.string, name='text')
bert_preprocess = hub.KerasLayer(preprocess_url, name='preprocessing')
preprocessed_text = bert_preprocess(text_input)
bert_encoder = hub.KerasLayer(encoder_url, 
                              trainable=True, 
                              name='BERT_encoder')
outputs = bert_encoder(preprocessed_text)

# Neural network layers
l = tf.keras.layers.Dropout(0.1)(outputs['pooled_output'])
l = tf.keras.layers.Dense(num_classes, activation='softmax', name='output')(l)

# Construct final model
model = tf.keras.Model(inputs=[text_input], outputs=[l])

I've watched countless tutorials and even used the ones in the TensorFlow docs, but even when I copy and paste them, they still don't work. I've tried different versions of tensorflow, tensorflow-text, and tensorflow-hub. I am using the tensorflow-gpu-jupyter Docker container for this project.

Here is how I'm installing the libraries:

!pip install "tensorflow-text"
!pip install "tf-models-official"
!pip install "tensorflow-hub"

The versions are: TensorFlow 2.16.1, tensorflow-text 2.16.1, tensorflow-hub 0.16.1.
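To double-check which versions actually ended up in the container, here is a small helper (a sketch using only the standard library; it reads package metadata without importing TensorFlow itself, and the package names are assumed to be the ones on PyPI):

```python
from importlib.metadata import version, PackageNotFoundError

def installed_versions(packages):
    """Return {package: version string or None} without importing the packages."""
    out = {}
    for pkg in packages:
        try:
            out[pkg] = version(pkg)
        except PackageNotFoundError:
            out[pkg] = None  # not installed in this environment
    return out

print(installed_versions(["tensorflow", "tensorflow-text", "tensorflow-hub"]))
```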

All the other forums I saw with this issue said to call tf.config.run_functions_eagerly(True), but this didn't work.

Any help would be appreciated. Please answer if you know how to solve this.

1 Answer

Answered by jo57:

I am definitely not an expert either, but I have had the same issue. I resolved it by pinning a slightly older version of TF:

!pip install -U "tensorflow-text==2.15.*"
!pip install -U "tf-models-official==2.15.*"

For reference, I am running scripts on Google Colab (which, I know, sometimes has its own little quirks).
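If staying on 2.16 is preferable, another workaround I've seen suggested (assuming the error comes from TF 2.16 defaulting to Keras 3, which hub.KerasLayer doesn't support for symbolic inputs) is switching tf.keras back to the legacy Keras 2 implementation via the tf-keras package:

```python
import os

# Assumed workaround, not verified on this exact setup: TF 2.16 ships Keras 3
# by default, and tensorflow_hub's KerasLayer needs the legacy Keras 2 build.
# Requires `pip install tf-keras`, and the variable must be set BEFORE
# TensorFlow is imported anywhere in the process.
os.environ["TF_USE_LEGACY_KERAS"] = "1"

# import tensorflow as tf        # only after the env var is set
# import tensorflow_hub as hub
```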