An example of a DeepPavlov Agent-based project is needed

Hi! Has anyone built any projects with DP_Agent? Could someone please share an example here?
I tried to find examples but found nothing; a project to explore would be great.

Hi Sergulli, and welcome aboard!

Excellent question. Have you read the docs?

The biggest (and most exciting!) test of a DP Agent-based socialbot was recently carried out by DeepPavlov’s very own Alexa Prize team. Our agent, with more than 20 different goal-oriented, open-chat, and retrieval skills, was up and running for US Alexa users between October 2019 and mid-May 2020.

Our team is looking forward to open-sourcing this implementation in the coming months, following the end of the Alexa Prize 2020 Challenge.

However, I encourage you to build the simplest version of your Agent by following these docs.

Feel free to ask questions in this topic. We’ll be happy to help!

Yes, I have read the docs, and they do not contain an example of an Agent implementation.
Attributes like “Pipeline Config Description” are non-obvious, because their parameters are not specified in enough detail in the docs.
There is no “example” in those docs, only an “abstract template”. And I’d really appreciate an “example”: an implementation of that “abstract template”, ya feel me?
Thank you for your understanding and help in advance!


Hi Sergulli!

Sorry for the late reply. You are right, there is a need for a simple pipeline config for a primitive AI assistant built on top of the DeepPavlov Agent.

The good news is that, while there’s bigger work underway focused on publishing the one built on top of the Alexa Prize Challenge’s DREAM Socialbot, we’re also working on a more primitive example right now. Stay tuned!

Hi Sergulli!

We’ve made a very primitive example of dialog system config for DeepPavlov Agent that you can find here:

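To give a feel for it before you open the link, here’s a rough sketch of what a minimal pipeline config with two HTTP-based skills can look like. Note that the field names below (`connectors`, `services`, `dialog_formatter`, and so on) are my best-effort assumptions based on the docs; treat this as illustrative rather than canonical and check the linked repository for the actual schema:

```json
{
  "connectors": {
    "chitchat": {
      "protocol": "http",
      "url": "http://chitchat-skill:8010/respond"
    },
    "faq": {
      "protocol": "http",
      "url": "http://faq-skill:8020/respond"
    }
  },
  "services": {
    "chitchat_skill": {
      "connector": "#chitchat",
      "dialog_formatter": "formatters:chitchat_formatter",
      "state_manager_method": "add_hypothesis"
    },
    "faq_skill": {
      "connector": "#faq",
      "dialog_formatter": "formatters:faq_formatter",
      "state_manager_method": "add_hypothesis"
    }
  }
}
```

The idea is simply that each skill is registered as a service pointing at a connector (here, an HTTP endpoint), and each service declares how dialogs are formatted for it and how its output is stored as a hypothesis.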
Bear in mind that you need to understand the logic:

  1. Once an utterance comes to the Agent, it either goes to a Skill Selector or, if there is no Skill Selector, to all defined skills.
  2. Once the skills give their replies, a Response Selector is used to automatically pick the best response. In this example, we use the default Response Selector (hence we use Python as the connection method and specify a class name instead of an HTTP endpoint), which uses confidence to pick the best answer. In reality, you will need to implement your own Response Selector to better control the outcome.
  3. You can use the config as a starting point and enhance it by adding annotators to the original utterance, as well as by keeping previous utterances in the history of the current dialog, so that skills can use this history to provide better answers.
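The confidence-based selection in step 2 can be sketched in a few lines of Python. This is an illustration of the idea only, not the Agent’s actual Response Selector interface; the function and field names are my own:

```python
# Sketch of confidence-based response selection (illustrative only;
# the real DeepPavlov Agent Response Selector has its own interface).

def select_response(candidates):
    """Pick the candidate hypothesis with the highest confidence.

    `candidates` is a list of dicts shaped like
    {"skill_name": str, "text": str, "confidence": float}.
    Returns None if no skill produced a hypothesis.
    """
    if not candidates:
        return None
    return max(candidates, key=lambda c: c["confidence"])


candidates = [
    {"skill_name": "chitchat", "text": "Hi there!", "confidence": 0.6},
    {"skill_name": "faq", "text": "See our docs for details.", "confidence": 0.9},
]
best = select_response(candidates)
print(best["text"])  # the FAQ skill wins with confidence 0.9
```

A custom Response Selector would replace the plain `max` with your own logic, for example preferring a goal-oriented skill whenever a matching intent is detected, regardless of raw confidence.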

We’re still working on a broader story, but would love to help with this small demo, too!

Hi. I tried to set up the example. Everything seemed to install and start, but the Docker container doesn’t respond on the API. Following what’s described in the docs produces an error. What should I do about this?