BERT model not able to resolve updated information

Context: The CEO of Xylo in 2016 was Kiran. After the acquisition, the new CEO is Satya.

Question: Who is the CEO of Xylo?

Expected Answer: Satya

BERT Predicted Answer: Kiran

Hi Team,

Do you know why BERT is not able to answer such questions correctly? Any advice on how to solve this? Thanks much!


In DeepPavlov, the BERT model for question answering is trained on the SQuAD v1.1 dataset. The model's performance is about 88 F1, so it is not perfect and can make mistakes.

The SQuAD v1.1 dataset has been criticised for a high level of word overlap between questions and contexts, which may be part of the problem in your example: the question shares more words with the first (outdated) sentence than with the second. Some other problems with this dataset are covered in this paper:
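To see how overlap alone can produce exactly this mistake, here is a toy sketch (not DeepPavlov's actual model, and not what BERT literally computes): a purely lexical "answerer" that picks the context sentence sharing the most words with the question. On your example it reproduces the same error.

```python
import re

def tokens(text):
    # Lowercased word types, punctuation stripped.
    return set(re.findall(r"\w+", text.lower()))

def best_sentence(question, sentences):
    # Score each sentence by how many word types it shares with the question.
    return max(sentences, key=lambda s: len(tokens(question) & tokens(s)))

context = [
    "The CEO of Xylo in 2016 was Kiran.",           # shares: the, ceo, of, xylo
    "After the acquisition, the new CEO is Satya.",  # shares: the, ceo, is
]
print(best_sentence("Who is the CEO of Xylo?", context))
# Prints the first (outdated) sentence, mirroring the "Kiran" answer.
```

A model that leans heavily on surface overlap will prefer the first sentence (4 shared words) over the correct one (3 shared words), ignoring the temporal cue in "After the acquisition".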

You can try training the model on the SQuAD 2.0 dataset (which includes more sophisticated examples, including unanswerable questions) and on the adversarial examples from the paper mentioned above, and/or train BERT-large.

Also, you might look into multi-hop reasoning datasets for question answering, such as HotpotQA:
