How to evaluate a model?

Hi!

I created several FAQ models following the instructions at http://docs.deeppavlov.ai/en/master/features/skills/faq.html. I can interact with these models using the ‘interact’ command, but I get no output when I run the ‘evaluate’ command.

Do you mean when you run something like this?

python3 -m deeppavlov evaluate tfidf_logreg_en_faq

(according to the README at https://github.com/deepmipt/DeepPavlov)

I also get no output apart from various INFO and WARNING log messages (about fitting from scratch).

I am also curious to know how this works :slight_smile:

Hi @Inguna, @michaelwechner,
Sorry for the late reply, we had long holidays :slightly_smiling_face:
The FAQ model configurations have no validation or test data and no metrics configured, so there is nothing for ‘evaluate’ to report.
faq_reader doesn’t even support reading from multiple data sources; it always reads from a single file and uses all of it as training data.
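For reference, the dataset_reader block in the stock tfidf_logreg_en_faq config looks roughly like this (the column names and data URL below reflect the shipped config but may differ between library versions):

"dataset_reader": {
  "class_name": "faq_reader",
  "x_col_name": "Question",
  "y_col_name": "Answer",
  "data_url": "http://files.deeppavlov.ai/faq/school/faq_school_en.csv"
}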
You can edit the configuration file so that its train block looks something like this:

{
  "metrics": [
    {
      "name": "accuracy",
      "inputs": ["y", "y_pred_answers"]
    }
  ],
  "evaluation_targets": [
    "train"
  ],
  "class_name": "fit_trainer"
}

Evaluation will then show accuracy on the train data.
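If you would rather not edit the JSON on disk, you can patch the config from Python and evaluate it directly. This is a minimal sketch using DeepPavlov’s evaluate_model and configs helpers; the attribute path configs.faq.tfidf_logreg_en_faq and the read_json location may vary with your installed version:

from deeppavlov import configs, evaluate_model
from deeppavlov.core.common.file import read_json

# Load the stock FAQ config shipped with the library
config = read_json(configs.faq.tfidf_logreg_en_faq)

# Patch the train block: add an accuracy metric and evaluate on the
# training data, since faq_reader provides no validation/test splits
config['train'] = {
    'metrics': [{'name': 'accuracy', 'inputs': ['y', 'y_pred_answers']}],
    'evaluation_targets': ['train'],
    'class_name': 'fit_trainer',
}

# Fits the model from scratch and prints accuracy on the train set
evaluate_model(config)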

Hi @yoptar

Thank you very much for your feedback!

Thank you! Perhaps this should be explained in the ‘quick start’ section, where typical actions are listed.

Thank you,

Inguna