Named Entity demo - what version?

Hello,

Is it possible to know which exact model this is using? https://7002.deeppavlov.ai/model

I’m not getting the same responses when I use the latest version of ner_mult_long_demo from New classification models by Kolpnick · Pull Request #1657 · deeppavlov/DeepPavlov · GitHub

But even if I use ner_bert_base, I get the exact same results as with ner_mult_long_demo (still different from what I get when I use the demo on the website).

The demo on the website seems to be giving more accurate responses than my deployed API… so that’s why I’d like to replicate what you have deployed in your demo :slight_smile:

Thanks!


I added a few examples here: Named Entity demo - what version? · Issue #41 · deeppavlov/demo2 · GitHub

Hello @ghnp5! Thank you for your interest in DeepPavlov!

You can check out the exact configuration currently running in the demo here.

Hello,

Thank you very much for the reply!

I’ve just tested with that one now.

However, it appears that the format of the JSON response is different from what https://7002.deeppavlov.ai/model returns.

Besides that, while some of the responses now match a bit better (in terms of the tags/entities detected), others still don’t match what https://7002.deeppavlov.ai/model returns.

So, it doesn’t appear that the demo on the website is using ner_demo_mdeberta_address.

I’ve added the responses by ner_demo_mdeberta_address to my examples here: Named Entity demo - what version? · Issue #41 · deeppavlov/demo2 · GitHub

Thanks again.

I was able to replicate the demo behavior.

To set up the environment, run the following commands:

pip install -q deeppavlov
python -m deeppavlov install ner_demo_mdeberta_address
pip install sentencepiece protobuf==3.20

The last line ensures compatibility with the mDeBERTa model.
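
To double-check that the pins actually took effect, here’s a quick sanity check (just a sketch):

# Verify that the pinned versions are the ones actually in use
# (the mDeBERTa tokenizer needs protobuf 3.20.x plus sentencepiece).
import google.protobuf
import sentencepiece

print("protobuf:", google.protobuf.__version__)
print("sentencepiece:", sentencepiece.__version__)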

I achieved the same output as in the demo with the code below:

from deeppavlov import build_model

# Build the demo config; download=True fetches the pretrained weights
# into ~/.deeppavlov on the first run.
ner_model = build_model("ner_demo_mdeberta_address", download=True)

text = ["so I sent in my final text I tried to make a post on Twitter just for it to tag him when I was having a private text with @youtube officials"]
print(ner_model(text))

Hi. Thank you.

The last line ensures compatibility with the mDeBERTa model.

Yeah, that’s what I had done; otherwise the model would crash.

This is my docker-compose.yml:

  ner-demo:
    container_name: ner-demo
    image: deeppavlov/deeppavlov
    environment:
      - CONFIG=ner_demo_mdeberta_address
    ports:
      - "127.0.0.1:5000:5000"
    volumes:
      - ./deeppavlov/ner_demo_mdeberta_address.json:/usr/local/lib/python3.10/site-packages/deeppavlov/configs/classifiers/ner_demo_mdeberta_address.json
      - ./data:/root/.deeppavlov
      - ./venv:/venv
    entrypoint:
      - /bin/sh
      - -c
      - |
        /usr/local/bin/python3.10 -m pip install sentencepiece==0.2.0 protobuf==3.20
        python -m deeppavlov riseapi ner_demo_mdeberta_address -p 5000 -d

I pinned 0.2.0 because that’s what the requirements for that model specify. Without the version pinned, it installs the same one anyway.

Even if I make the entrypoint match your commands:

    volumes:
      - ./deeppavlov/ner_demo_mdeberta_address.json:/usr/local/lib/python3.10/site-packages/deeppavlov/configs/classifiers/ner_demo_mdeberta_address.json
      - ./data:/root/.deeppavlov
      - ./venv:/venv
    entrypoint:
      - /bin/sh
      - -c
      - |
        python -m deeppavlov install ner_demo_mdeberta_address
        python -m pip install sentencepiece protobuf==3.20
        python -m deeppavlov riseapi ner_demo_mdeberta_address -p 5000 -d

I get the same results.
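
For reference, this is roughly how I query the deployed API (a minimal sketch; riseapi serves POST /model by default, and I’m assuming the "x" payload key matches the config’s input name):

import requests

# Query the locally deployed riseapi endpoint.
# The "x" key is assumed to match the config's declared input name.
resp = requests.post("http://127.0.0.1:5000/model", json={"x": ["Test"]})
print(resp.json())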

Could these warnings give a hint as to why I’m getting different results?

Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
/usr/local/lib/python3.10/site-packages/transformers/convert_slow_tokenizer.py:454: UserWarning: The sentencepiece tokenizer that you are converting to a fast tokenizer uses the byte fallback option which is not implemented in the fast tokenizers. In practice this means that the fast version of the tokenizer can produce unknown tokens whereas the sentencepiece version would have converted these unknown tokens into a sequence of byte tokens matching the original piece of text.
warnings.warn(
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.

(takes a long time, and then…)

Some weights of the model checkpoint at microsoft/mdeberta-v3-base were not used when initializing DebertaV2ForTokenClassification: ['deberta.embeddings.word_embeddings._weight', 'lm_predictions.lm_head.LayerNorm.weight', 'lm_predictions.lm_head.LayerNorm.bias', 'mask_predictions.classifier.bias', 'mask_predictions.dense.bias', 'deberta.embeddings.position_embeddings.weight', 'mask_predictions.dense.weight', 'lm_predictions.lm_head.bias', 'mask_predictions.LayerNorm.weight', 'lm_predictions.lm_head.dense.weight', 'mask_predictions.LayerNorm.bias', 'mask_predictions.classifier.weight', 'deberta.embeddings.position_embeddings._weight', 'lm_predictions.lm_head.dense.bias']
- This IS expected if you are initializing DebertaV2ForTokenClassification from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing DebertaV2ForTokenClassification from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
Some weights of DebertaV2ForTokenClassification were not initialized from the model checkpoint at microsoft/mdeberta-v3-base and are newly initialized: ['classifier.bias', 'classifier.weight']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.

Many thanks!

Another reason for me to believe there are differences between what’s in the new classifiers branch/PR and what’s in the Demo is that the output format is different.

When I use the JSON in the branch/PR, I get:

[[["Test"]],[["O"]]]

But the Demo returns:

[[["Test"],["O"]]]

I believe it might have to do with this:

"out": ["x_tokens", "y_pred"]

which in ner_mult_long_demo.json is:

"out": ["output"] (as the output from sentence_concatenator)

Is it possible to share the actual model JSON that runs in the Demo, please?

Thank you very much!