Flair Framework with PyTorch - OverflowError: int too big to convert


I'm trying to train a named entity recognition model with the Flair framework (https://github.com/flairNLP/flair), using this embedding: TransformerWordEmbeddings('emilyalsentzer/Bio_ClinicalBERT'). However, it always fails with OverflowError: int too big to convert. The same error also occurs with some other transformer word embeddings such as XLNet, whereas BERT and RoBERTa work fine.
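
For context, the training setup is essentially the standard Flair recipe, simplified below (the corpus format and hidden_size are representative rather than copied from train_medical_2.py; the hyperparameters match the log further down):

from flair.datasets import ColumnCorpus
from flair.embeddings import TransformerWordEmbeddings
from flair.models import SequenceTagger
from flair.trainers import ModelTrainer

# CoNLL-style column corpus with an 'ner' tag column
corpus = ColumnCorpus('data/', {0: 'text', 1: 'ner'})
tag_dictionary = corpus.make_tag_dictionary(tag_type='ner')

embedding = TransformerWordEmbeddings('emilyalsentzer/Bio_ClinicalBERT')

tagger = SequenceTagger(hidden_size=256,
                        embeddings=embedding,
                        tag_dictionary=tag_dictionary,
                        tag_type='ner',
                        use_crf=True)

trainer = ModelTrainer(tagger, corpus)
trainer.train('/home/xxx/data/xxx-clinical-bert',
              learning_rate=0.1,
              mini_batch_size=32,
              max_epochs=200,
              embeddings_storage_mode='gpu')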

Here is the full traceback of the error:

2021-04-15 09:34:48,106 ----------------------------------------------------------------------------------------------------
2021-04-15 09:34:48,106 Corpus: "Corpus: 778 train + 259 dev + 260 test sentences"
2021-04-15 09:34:48,106 ----------------------------------------------------------------------------------------------------
2021-04-15 09:34:48,106 Parameters:
2021-04-15 09:34:48,106  - learning_rate: "0.1"
2021-04-15 09:34:48,106  - mini_batch_size: "32"
2021-04-15 09:34:48,106  - patience: "3"
2021-04-15 09:34:48,106  - anneal_factor: "0.5"
2021-04-15 09:34:48,106  - max_epochs: "200"
2021-04-15 09:34:48,106  - shuffle: "True"
2021-04-15 09:34:48,106  - train_with_dev: "False"
2021-04-15 09:34:48,106  - batch_growth_annealing: "False"
2021-04-15 09:34:48,107 ----------------------------------------------------------------------------------------------------
2021-04-15 09:34:48,107 Model training base path: "/home/xxx/data/xxx-clinical-bert"
2021-04-15 09:34:48,107 ----------------------------------------------------------------------------------------------------
2021-04-15 09:34:48,107 Device: cuda:0
2021-04-15 09:34:48,107 ----------------------------------------------------------------------------------------------------
2021-04-15 09:34:48,107 Embeddings storage mode: gpu
2021-04-15 09:34:48,116 ----------------------------------------------------------------------------------------------------
Traceback (most recent call last):
  File "train_medical_2.py", line 144, in <module>
    train_ner(d + '-base-ent',corpus_base)
  File "train_medical_2.py", line 136, in train_ner
    max_epochs=200)
  File "/home/d111199102201607101/flair/lib/python3.6/site-packages/flair/trainers/trainer.py", line 381, in train
    loss = self.model.forward_loss(batch_step)
  File "/home/d111199102201607101/flair/lib/python3.6/site-packages/flair/models/sequence_tagger_model.py", line 637, in forward_loss
    features = self.forward(data_points)
  File "/home/d111199102201607101/flair/lib/python3.6/site-packages/flair/models/sequence_tagger_model.py", line 642, in forward
    self.embeddings.embed(sentences)
  File "/home/d111199102201607101/flair/lib/python3.6/site-packages/flair/embeddings/token.py", line 81, in embed
    embedding.embed(sentences)
  File "/home/d111199102201607101/flair/lib/python3.6/site-packages/flair/embeddings/base.py", line 60, in embed
    self._add_embeddings_internal(sentences)
  File "/home/d111199102201607101/flair/lib/python3.6/site-packages/flair/embeddings/token.py", line 923, in _add_embeddings_internal
    self._add_embeddings_to_sentence(sentence)
  File "/home/d111199102201607101/flair/lib/python3.6/site-packages/flair/embeddings/token.py", line 999, in _add_embeddings_to_sentence
    truncation=True,
  File "/home/d111199102201607101/flair/lib/python3.6/site-packages/transformers/tokenization_utils_base.py", line 2438, in encode_plus
    **kwargs,
  File "/home/d111199102201607101/flair/lib/python3.6/site-packages/transformers/tokenization_utils_fast.py", line 472, in _encode_plus
    **kwargs,
  File "/home/d111199102201607101/flair/lib/python3.6/site-packages/transformers/tokenization_utils_fast.py", line 379, in _batch_encode_plus
    pad_to_multiple_of=pad_to_multiple_of,
  File "/home/d111199102201607101/flair/lib/python3.6/site-packages/transformers/tokenization_utils_fast.py", line 330, in set_truncation_and_padding
    self._tokenizer.enable_truncation(max_length, stride=stride, strategy=truncation_strategy.value)
OverflowError: int too big to convert

I have tried changing the embedding_storage_mode, hidden_size, and mini_batch_size, but none of these fixed the issue.

Does anyone have the same issue? Is there any way to resolve this?

Thanks

1 Answer

You can set the following attributes to limit the length of the token sequence passed to the transformer:

from flair.embeddings import TransformerWordEmbeddings

embedding = TransformerWordEmbeddings('emilyalsentzer/Bio_ClinicalBERT')
# cap each chunk at 512 subtokens and overlap consecutive chunks by half
embedding.max_subtokens_sequence_length = 512
embedding.stride = 512 // 2
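
Setting max_subtokens_sequence_length to 512 caps each chunk at BERT's maximum input length, and a stride of 256 makes consecutive chunks overlap by half, so subtokens near a chunk boundary still get surrounding context. The overflow itself most likely happens because the Bio_ClinicalBERT (and XLNet) tokenizer config does not declare a model_max_length, so transformers substitutes a very large sentinel integer that Flair then passes to the Rust-backed enable_truncation; setting the limits explicitly supplies a finite max_length and avoids the error.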