Question about build_model

When I execute ner_modelformulti = build_model('ner_ontonotes_bert', download=True, install=True),
I get the error "AttributeError: 'BertForTokenClassification' object has no attribute 'module'" with transformers version 4.13.0 and torch version 1.13.0.



Hi, @Crabbit-F!
Thank you for informing us about this issue. The fix is already in PR, we plan to add it to the master branch soon.


Thanks a lot! And how can I use it now? :grin:

You can either wait for the release of a new version, or you can replace the file path_to_installed_library/deeppavlov/models/torch_bert/torch_transformers_classifier.py from the library with the file from the link.
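If you're not sure what path_to_installed_library is on your machine, you can find a package's install directory programmatically. A generic sketch using only the standard library (demonstrated on the stdlib json package; substitute "deeppavlov" for your installation):

```python
# Locate a package's install directory so you can replace a file inside it.
# Shown with the standard-library "json" package; pass "deeppavlov" instead
# to find the DeepPavlov installation.
import importlib.util
import os

spec = importlib.util.find_spec("json")
package_dir = os.path.dirname(spec.origin)
print(package_dir)
```

The printed directory is the root of the installed package; the file to replace lives under models/torch_bert/ relative to it.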


Thanks a lot! Let me have a try. :+1: :+1: :+1:

I replaced the file, but it doesn't work :smiling_face_with_tear:

I apologize for the delay in responding. I checked the proposed solution again, and it really doesn't work :thinking:. For now, I advise you to either use a single GPU (CUDA_VISIBLE_DEVICES=0), or change the file again, this time replacing path_to_installed_library/deeppavlov/models/torch_bert/torch_transformers_sequence_tagger.py with the file from the link. Hope this helps now :slight_smile:
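A quick sketch of the single-GPU workaround: the CUDA_VISIBLE_DEVICES environment variable must be set before torch initializes CUDA, so either export it in the shell before launching Python, or set it at the very top of your script:

```python
# Restrict the process to GPU 0 only. This must run before any torch/CUDA
# initialization, so keep it at the very top of the script, before importing
# deeppavlov or torch.
import os

os.environ["CUDA_VISIBLE_DEVICES"] = "0"
```

Equivalently, from the shell: CUDA_VISIBLE_DEVICES=0 python your_script.py (your_script.py is a placeholder name).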

Emmm, I'm very sorry it took me so long to get back to this work. Thanks for your help. But the link seems to be broken now. :worried: And there is another problem: I don't know how to correctly load the model downloaded from 'https://files.deeppavlov.ai/v1/ner/ner_ontonotes_bert_mult_torch_crf.tar.gz'; the program still accesses Hugging Face automatically. In the downloaded files, I can't find a config.
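For what it's worth, the downloaded archive itself can be unpacked with the standard library. A sketch, demonstrated on a throwaway archive so it is self-contained; for the real model you would open the downloaded ner_ontonotes_bert_mult_torch_crf.tar.gz the same way (note this only extracts the files, it does not by itself stop the library from contacting Hugging Face):

```python
# Unpacking a .tar.gz archive with the standard library.
import os
import tarfile
import tempfile

workdir = tempfile.mkdtemp()

# Build a tiny stand-in archive (in practice, this is the downloaded model file).
member = os.path.join(workdir, "model.bin")
with open(member, "w") as f:
    f.write("weights")
archive = os.path.join(workdir, "model.tar.gz")
with tarfile.open(archive, "w:gz") as tar:
    tar.add(member, arcname="model.bin")

# Extract the archive and inspect its contents.
target = os.path.join(workdir, "extracted")
with tarfile.open(archive, "r:gz") as tar:
    tar.extractall(target)

print(os.listdir(target))  # ['model.bin']
```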

We recently released a new version of the library where we fixed this bug. Now there is no need to change the script file.

Let me try. Thanks a lot! :grin: :grin: :grin:



Thanks for your help. But it seems odd that every word gets a NER tag. Is there something wrong with how I'm calling the model?

Try setting the download parameter to True: ner_modelformulti = build_model('ner_ontonotes_bert_mult', download=True). This way, the pre-trained model will be downloaded from our servers.

The first time I executed the code I used ner_modelformulti = build_model('ner_ontonotes_bert_mult', install=True, download=True). The pre-trained model was downloaded, but if the install and download arguments are still included during the second execution, the download is repeated. So for subsequent use I removed these two parameters.

I agree with you: when you run build_model again you can skip install (it installs model requirements), but I still recommend setting download=True. The pre-trained model will be downloaded again only if it has been modified locally. If you download the pre-trained model from the server and don't retrain it on other data, then just run:

from deeppavlov import build_model

ner_modelformulti = build_model('ner_ontonotes_bert_mult', download=True)
print(ner_modelformulti(['Your example 1', 'Your example 2']))

Thank you very much! I found the issue in the config and used the new version of DeepPavlov. It works! :grin: