After training a `distilbert-base-uncased` model with `spacy[transformers]` on a custom NER task, I would like to use the model in the `transformers` pipeline. After training, however, no `config.json` file is stored, for example.
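For concreteness, this is roughly what I am trying to do (the model path below is made up, pointing at spaCy's training output); it fails precisely because the spaCy output directory contains no `config.json`:

```python
from transformers import pipeline

# "training/model-best" stands in for the directory spaCy training writes
# (hypothetical path). Loading it directly raises an error because the
# directory has no config.json.
ner = pipeline("token-classification", model="training/model-best")
```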
From what I'm hearing, you fine-tuned a transformer on a specific task using spaCy 3.0+ and would like to use it in the HuggingFace `transformers` pipeline?
I crafted the best solution I could, though it isn't perfect. The spaCy model doesn't have the position-embedding tensor that the Hugging Face model expects, and the Hugging Face model has a pooler layer that the spaCy model lacks. To work around this, I had to exclude the pooler layer and splice a position-embedding tensor in on the Hugging Face side. As a result, cosine-similarity (c/s) scores between the two models' outputs will be lower. And that's not to mention the headache of converting the tokenizer.
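Here is a minimal sketch of that weight transfer. Everything below rests on assumptions: the path `training/model-best` is made up, and the attribute path to the wrapped Hugging Face encoder (`trf.model.shims[0]._model`) reflects `spacy-transformers` v1.x internals, which differ between versions, so inspect your own pipe rather than trusting it blindly.

```python
import spacy
from transformers import AutoModel, AutoTokenizer

# Load the spaCy pipeline produced by training (hypothetical path).
nlp = spacy.load("training/model-best")
trf = nlp.get_pipe("transformer")

# The fine-tuned HF encoder that spaCy wraps. This attribute path is an
# assumption about spacy-transformers v1.x internals (a Thinc PyTorch
# shim); check trf.model in your own version.
spacy_encoder = trf.model.shims[0]._model
spacy_state = spacy_encoder.state_dict()

# A fresh HF model of the same architecture receives the weights and
# supplies the tensors spaCy lacks (position embeddings, pooler, ...).
hf_model = AutoModel.from_pretrained("distilbert-base-uncased")
hf_state = hf_model.state_dict()

# Copy only the keys both models share; everything else (e.g. the pooler)
# stays at the fresh model's initialization.
shared = {k: v for k, v in spacy_state.items() if k in hf_state}
result = hf_model.load_state_dict(shared, strict=False)
print("kept from HF init:", result.missing_keys)

# save_pretrained writes the config.json that spaCy never produced.
hf_model.save_pretrained("hf-export")
AutoTokenizer.from_pretrained("distilbert-base-uncased").save_pretrained("hf-export")
```

Note that this exports only the encoder: the NER head you trained lives in spaCy's own layers and is not transferred, so a `transformers` token-classification pipeline built on `hf-export` would still need its own head. That, together with the tensors kept from the fresh initialization, is why the converted model won't match the spaCy one exactly.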