Working with squad_noans

When I use the demo Text QA at https://demo.deeppavlov.ai/#/en/textqa and ask a question that has no answer in the context, it replies with “I don’t know”. However, when I implement the same thing in a Python notebook (like https://colab.research.google.com/drive/1BdpV9jkODPND3vibi6UFychO--SaGk8S?usp=sharing), it gives me answers that should not be given.
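
What the notebook does is essentially the following (a minimal sketch using the standard DeepPavlov quick-start calls; the context and question strings here are just illustrative):

```python
from deeppavlov import build_model, configs

# Build the default SQuAD reading-comprehension model
# (download=True fetches the pretrained weights on first run).
model = build_model(configs.squad.squad, download=True)

context = ["DeepPavlov is an open-source library for building dialogue systems."]
question = ["What year was the library released?"]  # not answerable from this context

# The model returns three parallel lists: answer text, answer start position, and a logit score.
answers, start_positions, logits = model(context, question)
print(answers)  # prints a spurious span instead of rejecting the question
```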

Hi, @anirbans403!

The demo Text QA uses the squad_bert_infer configuration file. This model returns an empty string when it can’t answer the question, and the demo frontend replaces empty strings with I don't know. If you replace configs.squad.squad with configs.squad.squad_bert_infer and then replace empty strings with I don't know yourself, you will get the same result as with the demo. I also advise you to pay attention to the third value that the model returns, the logit. The logit is directly proportional to the model’s confidence in the answer.
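
Something like the following sketch should reproduce the demo behaviour (assuming configs.squad.squad_bert_infer is available in your DeepPavlov version; the context and question strings are just examples):

```python
from deeppavlov import build_model, configs

# squad_bert_infer returns an empty string for unanswerable questions;
# the demo frontend then maps empty strings to "I don't know".
model = build_model(configs.squad.squad_bert_infer, download=True)

context = ["DeepPavlov is an open-source library for building dialogue systems."]
question = ["What year was the library released?"]

answers, start_positions, logits = model(context, question)

# Mimic the demo frontend: replace empty answers with "I don't know".
final_answers = [a if a else "I don't know" for a in answers]
print(final_answers, logits)
```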

Hi @Ignatov, thanks for the solution!
I worked with the squad_bert_infer configuration file. The logit score, as you mentioned, is really low for bad answers (an answer that satisfied me had a score of 2110620.25, while the answer I want the model to not print had 0.14358282089233398); however, the model still printed an answer.
1) How do I interpret this logit score?
2) What is a good logit score to use as a threshold?
3) How do I add the “replace empty string with I don't know” step?
4) Is there a way to make sure the model itself rejects answers with such low logit scores? (See the sketch after this list for the kind of manual filtering I have in mind.)
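
To make question 4 concrete, this is the kind of filtering I could do on my side (the threshold value here is just a placeholder, not a recommendation); I’m asking whether the model can do this internally instead:

```python
from deeppavlov import build_model, configs

model = build_model(configs.squad.squad_bert_infer, download=True)

# Hypothetical threshold picked by eye from my own examples;
# I don't know what a principled value would be (hence question 2).
LOGIT_THRESHOLD = 1000.0

context = ["DeepPavlov is an open-source library for building dialogue systems."]
question = ["What year was the library released?"]

answers, start_positions, logits = model(context, question)

# Reject both empty answers and answers with a low logit score.
filtered = [
    answer if answer and logit >= LOGIT_THRESHOLD else "I don't know"
    for answer, logit in zip(answers, logits)
]
print(filtered)
```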

For reference, please see the Colab notebook here

I’ve given you editing privileges, so feel free to make changes if you want to! Thanks in advance, again!