Blog | 17 Aug 2021

Why you need good ‘intents’ for an effective chatbot

In this blog post Vincent Lenngren, an AI trainer, explains how a chatbot works in relation to its intentions or ‘intents’. It is these intents that enable the chatbot to understand and move the dialogue forward. Vincent’s simple explanation model is based on each ‘intent’ being its own separate box that contains one answer that corresponds to the user’s question or statement.

A chatbot works with intentions or ‘intents’. You can imagine each ‘intent’ as a separate box containing an answer, which will move the conversation forward.

Every box is filled with sentences that carry the meaning of the intent, but expressed in different ways. This is what ensures that you end up in the right box. Some chatbot platforms nest smaller ‘shoe boxes’ inside these boxes and then stack them on top of one another to create a hierarchy.

No matter which platform you use, the challenge is to name these intents and arrange them in a sensible way. The boxes must be distinguishable from one another somehow. In the systems I have worked with, we have used probability to match the words a user types in, so that we can predict which box will give the best answer.

Let me give you an example:

The sentence “How long is Silverstone” contains four words. If these words appear in the sentences that make up the ‘Silverstone track length’ box, that box will receive a higher match percentage than the ‘Hockenheim track length’ box. Three of the words appear in both boxes, but it is the word ‘Silverstone’ that separates them.
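To make the box metaphor concrete, here is a minimal sketch of that kind of word-overlap scoring. The intent names and example sentences are invented for illustration, and real platforms use far more sophisticated models, but the principle is the same: the box whose sentences share the most words with the question wins.

```python
# Each intent is a "box" of example sentences. We score a box by the
# fraction of the user's words that appear anywhere in its examples.
intents = {
    "silverstone_track_length": [
        "how long is silverstone",
        "what is the length of the silverstone track",
    ],
    "hockenheim_track_length": [
        "how long is hockenheim",
        "what is the length of the hockenheim track",
    ],
}

def score(question, examples):
    """Fraction of the question's words found in the box's example sentences."""
    words = question.lower().split()
    vocab = {w for sentence in examples for w in sentence.split()}
    return sum(w in vocab for w in words) / len(words)

def best_intent(question):
    scores = {name: score(question, ex) for name, ex in intents.items()}
    return max(scores, key=scores.get), scores

intent, scores = best_intent("How long is Silverstone")
print(intent)   # silverstone_track_length — all four words match that box
print(scores)   # the Hockenheim box only matches "how", "long" and "is"
```

All four words land in the Silverstone box, while only three land in the Hockenheim box, so the word ‘Silverstone’ is what tips the decision.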

On a daily basis, I see examples of users and stakeholders who do not train their bots and who do not understand how these principles work. If the user writes a sentence like ‘erm, moped’, the system will not know which of the moped boxes the user wants. It will search for the words ‘moped’ and ‘erm’, and the user will end up in whichever box happens to have the highest probability.

Although a person might know from experience how to decode this to “Dad, I want a moped”, this is something a bot cannot do.

“When I was a kid, I had a Zündapp, and now my boy wants to have an EU moped” contains a lot of irrelevant information. Does the user want to know something about Zündapp? Or do they want to know, for example, the prices of EU mopeds? I hope this gives you a slightly more realistic expectation of a bot.
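The same toy scoring illustrates why ‘erm, moped’ is a dead end. The moped intents and their sentences below are invented for illustration: filler words like ‘erm’ match nothing, and ‘moped’ appears in every moped box, so the scores come out identical and the bot has no basis for choosing.

```python
# Two hypothetical moped "boxes". Both contain the word "moped", and the
# filler word "erm" appears in neither, so their scores end up the same.
intents = {
    "moped_price": ["what does a moped cost", "moped price list"],
    "moped_license": ["do i need a license for a moped", "moped license rules"],
}

def score(question, examples):
    words = question.lower().replace(",", "").split()
    vocab = {w for sentence in examples for w in sentence.split()}
    return sum(w in vocab for w in words) / len(words)

for name, examples in intents.items():
    print(name, score("erm, moped", examples))
# Both boxes score 0.5 — only "moped" matched — so the bot cannot tell
# which answer the user actually wants.
```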

When you decide to start a chatbot project, you will have expectations and a lot of ambitions. For example, you might want 70% of all dialogues to give a correct answer. Another ambition may be to continuously expand the chatbot’s ability to answer more intents. It will not take you long to realise that you need your own staff, who understand your operations in detail and how your users express themselves.

A bot cannot come close to the level of complexity that a human being can deal with over the phone. What it can do is take the pressure off by answering the most common, repetitive questions. If users insist on double negatives and unnecessary information, the result will be frustration and phone queues. If they are prepared to break their specific question down into simple, open questions, they will get smart answers that take them a long way.

My conclusion is that you need training and experience to design a chatbot that understands all of its users’ intents. It also requires a lot of commitment from the business to help produce and analyse all the data. We have several AI trainers who are used to doing this work, and we will work with you, the customer, to make sure your chatbot is of the highest quality and delivers value to your users in a quick and friendly way.

Blog post written by Vincent Lenngren for Softronic AB.