
Rasa is an open source machine learning framework for building AI text- and voice-based assistants. It provides a set of tools to build a complete chatbot on your local machine, completely free.

Why Rasa?

Rasa lets you build a contextual AI assistant: one that captures the context of what the user is talking about, understands and responds to different and unexpected inputs, and gracefully handles unexpected dialogue turns.

Rasa consists of two main components:

* Rasa NLU

* Rasa Core

Rasa NLU:

Rasa NLU is something like the ear of your assistant: it enables the assistant to understand what the user has said.

It takes the user's unstructured input and extracts structured data in the form of intents and entities (labels).

Intent:

An intent represents the purpose of a user’s input. You define an intent for each type of user request you want your application to support.

Example:

Intent: searching restaurants

·  What are the non-veg restaurants present in Hyderabad?

·  I am looking for veg restaurants in Pune.

·  Are there any vegetarian restaurants in Chennai?

The above examples all come under the intent "searching restaurants"; if a user sends any similar message, the assistant will classify the intent as "searching restaurants". The more data you provide, the better the bot gets trained.

 Entities:

Entities are the specific pieces of information the assistant needs from the user's text; the process of extracting them is called entity recognition.

From the above example, "Are there any vegetarian restaurants in Chennai?", the entities extracted would be:

Chennai = location, vegetarian = facility type.

By using these intents and entities, the assistant can understand what the user is talking about.

 Rasa NLU file:
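
As a minimal sketch, an NLU training file in Rasa's Markdown training-data format (the same format the stories section below describes) might look like the following. The intent names greet and searching_restaurants and the entity names location and facility_type are illustrative choices for this example, not names Rasa prescribes; entities are annotated inline as [value](entity_name).

```
## intent:greet
- hi
- hello there

## intent:searching_restaurants
- What are the non-veg restaurants present in [Hyderabad](location)?
- I am looking for [veg](facility_type) restaurants in [Pune](location)
- Are there any [vegetarian](facility_type) restaurants in [Chennai](location)?
```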

Rasa Core:

Rasa Core is also called dialogue management. It is something like the brain of the system.

Instead of creating rules, Rasa uses machine learning to learn conversational patterns from example conversation data and predicts how the assistant should respond based on the context, the history of the conversation, and other details.

·        The training data for dialogue management is called stories.

·        A story starts with a double hashtag (##), which marks the name of the story.

·        Messages sent by the user are shown as lines starting with an asterisk (*).

The responses of the assistant are expressed as action names. There are two types of actions in Rasa: "utterance actions" and "custom actions".

Utterance actions are hardcoded messages that the bot can respond with. Custom actions, on the other hand, involve custom code being executed.

The custom code can be anything: some kind of back-end integration such as making an API call or connecting to a database and extracting the required information.

All actions (both utterance actions and custom actions) executed by the assistant are shown as lines starting with a dash (-) followed by the name of the action.

Stories file:  
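
Following the conventions above, a minimal stories file might look like this sketch. The story name and the action names utter_greet and action_search_restaurants are illustrative and match the NLU example above; entity values extracted from the user message appear after the intent in curly braces.

```
## search restaurants happy path
* greet
  - utter_greet
* searching_restaurants{"location": "Chennai", "facility_type": "vegetarian"}
  - action_search_restaurants
```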

Domain:

·  A domain is a very important part of building a dialogue management model.

·  The domain file needs to contain all the intents, entities, and actions that are mentioned in the NLU and stories files.

·  The domain file also contains "responses" (see the example below).
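
As a sketch, a minimal domain file matching the illustrative NLU and stories examples above might look like this. Depending on the Rasa version, the response section is keyed templates: (older releases) or responses: (newer releases); the greeting texts and the image URL are placeholders.

```
# domain.yml (minimal sketch; names match the examples above)
intents:
  - greet
  - searching_restaurants

entities:
  - location
  - facility_type

actions:
  - utter_greet
  - action_search_restaurants   # custom action, implemented in code

templates:                      # called "responses" in newer Rasa versions
  utter_greet:
    - text: "Hello! How can I help you?"
    - text: "Hi there! What can I do for you?"
      image: "https://example.com/welcome.png"   # placeholder URL
```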

Responses:

·  This is where you define the actual responses the assistant will use when specific utterance actions are predicted.

·  Each utterance action can have more than one response template and can include things like images.

·  The custom action code should be written in an actions.py file, as sketched below.
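
As a rough sketch of such a file using the Rasa SDK, the custom action referenced in the stories example above might look like this; the restaurant lookup itself is left as a placeholder where an API call or database query would go.

```python
# actions.py - a minimal custom action sketch using the Rasa SDK
from typing import Any, Dict, List, Text

from rasa_sdk import Action, Tracker
from rasa_sdk.executor import CollectingDispatcher


class ActionSearchRestaurants(Action):
    """Custom action that would look up restaurants for the extracted entities."""

    def name(self) -> Text:
        # Must match the action name used in the stories and domain files
        return "action_search_restaurants"

    def run(
        self,
        dispatcher: CollectingDispatcher,
        tracker: Tracker,
        domain: Dict[Text, Any],
    ) -> List[Dict[Text, Any]]:
        # Read the entities extracted by Rasa NLU from the latest user message
        location = next(tracker.get_latest_entity_values("location"), None)
        facility_type = next(tracker.get_latest_entity_values("facility_type"), None)

        # Placeholder: a real assistant would call an API or query a database here
        dispatcher.utter_message(
            text=f"Searching for {facility_type or 'all'} restaurants in {location or 'your city'}..."
        )
        return []
```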

Rasa workflow:

 Choosing a Pipeline:

To enable our assistant to understand the intents and extract the entities we defined in the NLU file, we have to build a model, and that is done by a processing pipeline (configured in Rasa's config.yml file).

There are two pre-configured pipelines in Rasa:

·        Pre-trained embeddings spacy

·        Supervised embeddings

Pre-trained embeddings spacy:

·        It can perform well with a small amount of training data.

·        It is not available for all languages.

·        If the chatbot is domain-specific, the pre-trained embeddings spacy pipeline is not a good choice.

Supervised embeddings:

·        The models will pick up domain-specific vocabulary.

·        It can build assistants in any language.

·        It has the advantage of handling messages with multiple intents.

·        It needs more training data.
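
Selecting one of these pre-configured pipelines is a single line in the config.yml file. For example, for a domain-specific English bot one might choose supervised embeddings (the language code shown is just an illustrative choice):

```
# config.yml: pick one of the two pre-configured pipelines
language: en
pipeline: supervised_embeddings
# pipeline: pretrained_embeddings_spacy   # the other pre-configured option
```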

Training the Model:

As our NLU, stories, and domain files and the pipeline are ready, we are good to go and train our model by running a command in the terminal. Once training is done, the model will be saved in the models folder.
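
With the standard Rasa command-line interface, that training step is a single command:

```
# trains both the NLU and dialogue models; the result is saved under models/
rasa train
```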

After training is done, chat with the assistant and check whether it correctly predicts the intents and entities from the user input. Take different dialogue turns to see whether it can handle them; if it cannot, re-train the model after making the necessary changes in the required files.
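
For example, you can talk to the trained assistant directly from the terminal:

```
# chat with the trained assistant in the terminal
rasa shell

# if the bot uses custom actions, start the action server in a separate terminal
rasa run actions
```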

There are some more important things such as slots, trackers, rasa interactive, Rasa X, fallback actions, etc. These will be covered in the next part of the article.