Training a multilingual NER model

Hello,
I’m trying to train a multilingual NER model with my own data. I took the data used for ner_ontonotes_bert_mult and added some of my own data (English texts). When I trained the model and tested it on Czech texts, it performed much worse than the pretrained ner_ontonotes_bert_mult. I’m not sure how you trained the model: the dataset downloaded for ner_ontonotes_bert_mult seems to contain only English texts, yet the documentation says the model was trained on Ontonotes data and evaluated on Russian data. Does that mean the training data was the Ontonotes data (English texts only) and the validation data was the Russian texts? Or what data was in the training and validation sets, please?
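For reference, my training setup is roughly the sketch below; the paths and config keys are illustrative rather than my exact files:

```python
from deeppavlov import configs, train_model
from deeppavlov.core.common.file import read_json

# Load the stock multilingual Ontonotes NER config.
config = read_json(configs.ner.ner_ontonotes_bert_mult)

# Point the dataset reader at a folder containing the downloaded
# Ontonotes files plus my own English texts in the same CoNLL format.
# (The exact reader key may differ between DeepPavlov versions.)
config["dataset_reader"]["data_path"] = "~/my_data/ontonotes_plus_own/"

# download=True fetches the pretrained multilingual BERT before training.
ner_model = train_model(config, download=True)
```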

I see a freeze_embeddings parameter in BertSequenceTagger. Should I set it to True if I don’t want to retrain BERT, e.g. if my training data is English only and I want to keep the model multilingual?

Thank you
Lubos

Hi!

ner_ontonotes_bert_mult was trained on English-only texts from the Ontonotes dataset. During training, the validation set was also from Ontonotes. The performance of zero-shot transfer to Russian was measured after full training on Ontonotes.
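If you want to reproduce that zero-shot check yourself, you can load the pretrained model and simply run it on text in another language. A minimal sketch (the example sentence is only an illustration):

```python
from deeppavlov import configs, build_model

# Load the pretrained multilingual model (download=True fetches the weights).
ner = build_model(configs.ner.ner_ontonotes_bert_mult, download=True)

# The model was never trained on Russian, so this is zero-shot transfer
# through multilingual BERT. ("Moscow is the capital of Russia")
tokens, tags = ner(["Москва является столицей России"])
print(list(zip(tokens[0], tags[0])))
```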

The freeze_embeddings parameter can be used to keep the embeddings matrix of the BERT model from being trained (all other BERT parameters are still trained). It might help with zero-shot transfer from one language to another.
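You can set it directly in the config before training. A rough sketch, assuming the tagger component is found by its class name (double-check the key names for your DeepPavlov version):

```python
from deeppavlov import configs, train_model
from deeppavlov.core.common.file import read_json

config = read_json(configs.ner.ner_ontonotes_bert_mult)

# Freeze the BERT embeddings matrix so English-only fine-tuning does not
# overwrite the multilingual token embeddings.
for component in config["chainer"]["pipe"]:
    if component.get("class_name") == "bert_sequence_tagger":
        component["freeze_embeddings"] = True

ner_model = train_model(config, download=True)
```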

Some questions that might help us solve your problem:
- Could you provide more details on how you define the failure?
- Did you train the model from scratch? Did you use only your own data, or did you add your data to the Ontonotes dataset?
- Could you re-train ner_ontonotes_bert_mult from scratch to check whether it matches the performance reported on the NER doc page, to rule out a reproducibility problem? (See the sketch below.)
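A rough sketch of such a reproducibility check, assuming the stock config is left untouched:

```python
from deeppavlov import configs, train_model, evaluate_model

# Re-train the stock config. download=True fetches the dataset and the
# multilingual BERT weights; if a previously trained NER checkpoint is
# already in the model directory, remove it first so training really
# starts from scratch rather than resuming.
train_model(configs.ner.ner_ontonotes_bert_mult, download=True)

# Evaluate on the config's own test split and compare the F1 with the
# number reported on the NER documentation page.
metrics = evaluate_model(configs.ner.ner_ontonotes_bert_mult)
print(metrics)
```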