Integrate a custom model into DeepPavlov for inference

Hello, I hope you’re having a great day!

I’m not sure I’m in the right part of the forum, so don’t hesitate to redirect me.
I have some questions about DeepPavlov and I couldn’t find answers in the documentation (maybe I missed them; I only found Register your model — DeepPavlov 1.1.0 documentation, which explains how to register a model):
I would like to integrate an already-trained custom model into a pipeline, for inference only. It’s a model that performs intent and entity recognition, saved as a PyTorch model (files for the config, the model, and the training args). Is there any way to do that? If so, where can I find the corresponding documentation, with examples or tips on how to achieve it?

Thanks for any help you can provide :slight_smile:

Hey @Alana , Thank you very much for your interest!

It’s possible to integrate pre-trained models into DeepPavlov; however, it depends on the model itself.

Please let us know: is the model based on a Hugging Face transformer? Is it already on the Hugging Face hub? If not, are you able to upload it there?

Thank you @Vasily for your answer!

No, it’s not on the Hugging Face hub, but it is based on a Hugging Face Transformer. Classification layers were built on top of it to perform joint detection of intents and entities.
I can’t upload it to the hub. Is there a way to load it from the PyTorch model source files?

Thanks again

Then it might be a problem. First, we would have to implement the configuration of the classification layers in DeepPavlov. Can you provide us with the exact configuration of the layers? We could potentially implement it and add it to our next release; however, it might take some time.

How do you intend to use your model within DeepPavlov? Only for inference via the riseapi REST server?

Thank you!

It’s a model with a joint BERT architecture (GitHub - monologg/JointBERT: PyTorch implementation of JointBERT: "BERT for Joint Intent Classification and Slot Filling"; see [1902.10909] BERT for Joint Intent Classification and Slot Filling for the paper). So basically, it uses a Transformer and exploits the output for the CLS token to detect the intent through a linear layer with softmax. For entity/slot identification, the outputs for all the other tokens are passed through a similar layer.
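For reference, the two classification heads can be sketched like this (a minimal sketch with made-up dimensions; the Transformer encoder itself is omitted and replaced by a random tensor of the right shape):

```python
import torch
import torch.nn as nn

class JointHeads(nn.Module):
    """Sketch of the two JointBERT heads on top of a Transformer encoder."""

    def __init__(self, hidden_size, num_intents, num_slots):
        super().__init__()
        self.intent_classifier = nn.Linear(hidden_size, num_intents)
        self.slot_classifier = nn.Linear(hidden_size, num_slots)

    def forward(self, hidden_states):
        # CLS token (position 0) -> intent logits
        intent_logits = self.intent_classifier(hidden_states[:, 0, :])
        # every token -> slot/entity logits
        slot_logits = self.slot_classifier(hidden_states)
        return intent_logits, slot_logits

heads = JointHeads(hidden_size=768, num_intents=5, num_slots=9)
hidden = torch.randn(2, 16, 768)  # fake encoder output: (batch, seq_len, hidden)
intent_logits, slot_logits = heads(hidden)
print(intent_logits.shape)  # torch.Size([2, 5])
print(slot_logits.shape)    # torch.Size([2, 16, 9])
```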

I’ve never used DeepPavlov before, but indeed, the main idea is to use it for the inference API. Is there any way to make DeepPavlov call a custom script for inference when a custom model is present in the pipeline?

Thanks again :slight_smile:

I don’t believe there is an easy way to integrate JointBERT into DeepPavlov.

We will consider adding JointBERT in our next release, but it will take some time. Meanwhile, I can offer separate models for intent classification and slot filling based on any Transformer-based encoder.

Thanks

Thanks again for your answer! That would be an awesome feature.

In the meantime, is there any documentation or are there examples somewhere on how to integrate a custom model for inference? For example, let’s say I write two scripts using the same model: one for intent detection, the other for entities. How can I integrate them into a DeepPavlov pipeline for inference? Is there an example somewhere of how to proceed?
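From the "Register your model" page, I imagine the wrapper would look something like this (purely a guess on my part: the component name `custom_joint_model`, the path, and the return format are all hypothetical, and the DeepPavlov import is stubbed so the sketch runs even without the library installed):

```python
# Hypothetical sketch of a registered DeepPavlov inference component.
try:
    from deeppavlov.core.common.registry import register
except ImportError:
    # Stub so the sketch runs standalone, without DeepPavlov.
    def register(name):
        def decorator(cls):
            return cls
        return decorator

@register('custom_joint_model')
class CustomJointModel:
    """Wraps the pre-trained PyTorch model; __call__ runs a batch through it."""

    def __init__(self, model_path, **kwargs):
        self.model_path = model_path  # in practice, load the checkpoint here

    def __call__(self, utterances):
        # Placeholder inference: one (intent, slot tags) pair per utterance.
        return [('dummy_intent', ['O'] * len(u.split())) for u in utterances]

model = CustomJointModel(model_path='model/')
print(model(['book a flight to paris']))
# [('dummy_intent', ['O', 'O', 'O', 'O', 'O'])]
```

If I understand the docs correctly, the pipeline config would then reference the registered name (e.g. `"class_name": "custom_joint_model"` in the `pipe` list), but please correct me if that’s not how it works.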

Many thanks

Hi!

Just asking again: is there any documentation or are there examples on how to integrate custom scripts/models for inference into DeepPavlov?

Thanks!