Inference with fine-tuned classification model on my data

Hello! I have fine-tuned multilingual BERT on my Russian texts for classification and saved the model. Now I have the following files: checkpoint, model.meta, valid_log, classes.dict, model.index, train_log.

But I don't know what I should change inside my config file in order to use the fine-tuned model. I tried:

from deeppavlov.core.common.file import read_json

bert_config = read_json('my_config.json')
bert_config['chainer']['pipe'][3]['load_path'] = './finetuned_model/model'
bert_config['metadata']['variables']['MODEL_PATH'] = './finetuned_model'

But I keep getting the error "attempt to get argmax of an empty sequence".

How do I point the config file at my trained weights so I can run inference?


First, make sure that your model is saved to ./finetuned_model and not to the ~/.deeppavlov/models/classifiers/finetuned_model folder.

You don't need to change load_path manually, because in the config it should look like "load_path": "{MODEL_PATH}/model". So if you change MODEL_PATH in the config, load_path will be updated automatically.
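The idea is that MODEL_PATH is just a variable that gets substituted into every path containing the "{MODEL_PATH}" placeholder. A minimal sketch of that substitution mechanism (the component layout and paths here are illustrative assumptions, not the actual rusentiment config):

```python
# Sketch of how a DeepPavlov-style config resolves "{MODEL_PATH}" placeholders.
# Component names and paths are illustrative assumptions.
config = {
    "chainer": {
        "pipe": [
            {"class_name": "bert_classifier", "load_path": "{MODEL_PATH}/model"},
        ]
    },
    "metadata": {"variables": {"MODEL_PATH": "./finetuned_model"}},
}

def resolve(path_template, variables):
    # Substitute every "{VAR}" placeholder with its value from metadata variables.
    for name, value in variables.items():
        path_template = path_template.replace("{" + name + "}", value)
    return path_template

variables = config["metadata"]["variables"]
component = config["chainer"]["pipe"][0]
print(resolve(component["load_path"], variables))  # → ./finetuned_model/model
```

This is why editing only MODEL_PATH is enough: overriding one load_path directly can leave other placeholder-based paths (e.g. a save_path) still pointing at the old location.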

I moved the model from ~/.deeppavlov/models/classifiers/finetuned_model to my working directory. No matter what I change in the config, the model still restores the weights from multilingual_bert: INFO:tensorflow:Restoring parameters from /root/.deeppavlov/downloads/bert_models/multi_cased_L-12_H-768_A-12/bert_model.ckpt

I used rusentiment.json as a starting point. Could you show, using that config as an example, exactly where to put the path to the fine-tuned model so that I can run inference?