As a company grows, more and more information is generated. This information comprises HR policies, administrative and technical processes, company benefits, and IT know-how. As the resources within the company increase, the need to place them in an easily accessible repository becomes critical.
In response to this issue, a few of us at Gorilla Labs decided to create a Slack-integrated chatbot. The chatbot was implemented using Amazon Lex, and the final solution incorporated only AWS- and Slack-provided services. We kept our use of servers to a minimum, as we wanted to keep the cost of the project as low as possible. We decided to name our assistant chatbot “Winston.”
While Amazon Lex provides both speech recognition and natural language understanding, our scope for this project was to create a text-based chatbot using only Lex’s natural language understanding. This functionality makes Winston a useful Slack-integrated application without limiting its ability to one day be integrated into a speech interface such as Amazon Alexa.
Introducing Amazon Lex
“Lex” is a key technological component of our project. In April 2017, Amazon Lex was introduced to the developer community; it provides the advanced deep learning functionality of automatic speech recognition (ASR) to convert speech into text, and natural language understanding (NLU) to recognize the intent of the text. This allows developers to create conversational interactions using the same deep learning technologies that power Amazon Alexa.
Amazon Lex Jargon
Intent: Think of this as a conversation; it represents the goal of the chatbot user. In our case, most intents are simple questions with direct answers.
Utterances: Speech or text phrases that trigger the intent. In the case of Winston, text phrases are used to recognize the user’s intention and trigger the proper intent.
Slots: These represent the pieces of data the chatbot needs to fulfill the user’s intent. Think of them as required user input. The slot types are the valid values a user can respond with, which can be either custom-defined or one of Amazon’s pre-built types.
Prompt: This is a question the chatbot will ask the user when requesting user input to match a given slot. When the bot cannot match a slot from the triggered utterance, it will use the associated prompt to ask the user for input.
Fulfillment: When the chatbot has all the slot values (if any), it proceeds with the logic in the fulfillment section. This is where an AWS Lambda function can be used if you need some business logic. If that is not the case, then a simple text response will suffice.
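Putting those pieces together, a minimal Lambda fulfillment handler for Lex (V1) might look like the sketch below. The `currentIntent`/`slots` event fields and the `dialogAction` response shape follow the Lex Lambda input/output format; the intent name `CompanyBenefits` and slot name `BenefitTopic` are illustrative, not Winston’s actual configuration:

```python
def close(message):
    """Build a Lex 'Close' response that ends the conversation with a message."""
    return {
        "dialogAction": {
            "type": "Close",
            "fulfillmentState": "Fulfilled",
            "message": {"contentType": "PlainText", "content": message},
        }
    }

def lambda_handler(event, context):
    """Fulfillment hook: Lex invokes this once all required slots are filled."""
    intent = event["currentIntent"]["name"]
    slots = event["currentIntent"]["slots"]

    if intent == "CompanyBenefits":        # illustrative intent name
        topic = slots.get("BenefitTopic")  # illustrative slot name
        if topic:
            return close(f"Here is what I know about {topic}...")
        return close("We offer health insurance, paid time off, and more.")

    return close("Sorry, I can't help with that yet.")
```

Lex calls the function with the matched intent and slot values, and the `Close` dialog action tells it the conversation is fulfilled and the message should be relayed back to the user.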
The following steps describe the overall roadmap for the creation of our HR virtual assistant:
- A new chatbot is created on the Amazon Lex console.
- The conversations are designed based on requirements determined by our administrative staff.
- AWS Lambda functions are created when needed (e.g., when interacting with the HR system’s API).
- A new app is created in the Slack workspace.
- The Lex chatbot is then integrated with the Slack workspace by associating the Slack app with the bot, as described in the AWS documentation.
- The chatbot is constantly tested and tuned according to its interaction with users.
How does it work?
This diagram represents the high-level architecture of the solution:
BambooHR is our human resources software. When a Gorilla Logic employee makes a request for information — for example, how many days off they have accumulated — we instruct Amazon Lex to fulfill the intent using an AWS Lambda function that, in turn, interacts with the BambooHR API. A similar approach will work for other third-party APIs that we may eventually include, but those are currently outside the scope of this project. We will discuss the possibility of adding other APIs more thoroughly in a future blog post.
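A Lambda function along these lines could bridge Lex and BambooHR. This is a sketch, not our production code: the time-off endpoint path, the `balance` response field, the `VacationDays` intent, and the `EmployeeId` slot are all assumptions for illustration — the real contract is in the BambooHR API documentation. BambooHR does authenticate with HTTP basic auth using the API key as the username:

```python
import base64
import json
import os
import urllib.request

def fetch_time_off_balance(employee_id):
    """Query BambooHR for an employee's time-off balance (endpoint path assumed)."""
    subdomain = os.environ["BAMBOO_SUBDOMAIN"]
    api_key = os.environ["BAMBOO_API_KEY"]
    # BambooHR uses HTTP basic auth with the API key as the username.
    token = base64.b64encode(f"{api_key}:x".encode()).decode()
    url = (f"https://api.bamboohr.com/api/gateway.php/{subdomain}"
           f"/v1/employees/{employee_id}/time_off/calculator")  # assumed path
    req = urllib.request.Request(url, headers={
        "Accept": "application/json",
        "Authorization": f"Basic {token}",
    })
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def balance_message(balances):
    """Turn the (assumed) API payload into a one-line chat reply."""
    days = balances.get("balance", 0)
    return f"You have {days} vacation day(s) available."

def lambda_handler(event, context):
    """Fulfill a hypothetical 'VacationDays' intent via the BambooHR API."""
    employee_id = event["currentIntent"]["slots"]["EmployeeId"]
    message = balance_message(fetch_time_off_balance(employee_id))
    return {"dialogAction": {"type": "Close",
                             "fulfillmentState": "Fulfilled",
                             "message": {"contentType": "PlainText",
                                         "content": message}}}
```

Keeping the HTTP call and the message formatting in separate functions makes the formatting easy to test without touching the network.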
Let’s illustrate the basic usage with a simple “Hey Winston” message:
First, the user sends a direct message to the Slack app. This app then sends the text to the Lex chatbot. The bot proceeds to look for a match; if it can’t find a match, it will respond with the clarification prompt configured on the Lex dashboard:
The Lex chatbot will try to match the input with one of the provided utterances for the defined intents:
In this particular case, our input will exactly match the “Hey Winston” utterance, so it will trigger the “DirectWinston” intent and immediately respond with the defined answer:
The above image shows how we configured Winston to respond with two lines. It will randomly select a greeting and then a question from those provided in each list.
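For readers who prefer code to screenshots, the same two-line behavior can be mimicked in a few lines — for instance, if a fulfillment function had to build the reply itself. The greeting and question lists here are illustrative, not Winston’s exact configuration:

```python
import random

# Illustrative lists; Winston's actual responses are configured in the Lex console.
GREETINGS = ["Hi there!", "Hello!", "Hey!"]
QUESTIONS = ["How can I help you?", "What can I do for you today?"]

def greeting_reply():
    """Pick one random greeting and one random follow-up question."""
    return f"{random.choice(GREETINGS)}\n{random.choice(QUESTIONS)}"
```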
This illustrates the flow of this particular interaction:
Amazon Lex’s Current Limitations
Although it is remarkably easy to create a basic chatbot using Amazon Lex, there are some limitations on what it can currently do.
It would be very useful to have a context for the bot that was global across all of its intents. In our particular case, it would be great if we could define a “global synonym” so the bot could identify contextual elements such as the company name:
- What are the company benefits?
If we could define a global synonym for “the company” in order to associate this idea with “Gorilla Logic” or “GL,” it could immediately match phrases like:
- What are the benefits of working for GL?
- Please tell me the benefits of working for Gorilla Logic.
We could also match completely different intents like:
- How does the parking lot work at the Gorilla Logic office?
- I would like to know what the parking lot policies are at GL.
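Lex does not provide this today, but one workaround is to normalize known aliases in the message text before it ever reaches the bot — for example, in the service that relays Slack messages to Lex. The alias list below is our own illustration:

```python
import re

# Aliases we would like to fold into one canonical phrase before calling Lex.
COMPANY_ALIASES = ["Gorilla Logic", "GL"]

def normalize(text, canonical="the company"):
    """Replace whole-word company aliases with a single canonical phrase."""
    for alias in COMPANY_ALIASES:
        text = re.sub(rf"\b{re.escape(alias)}\b", canonical, text,
                      flags=re.IGNORECASE)
    return text
```

With this pre-processing step, all of the phrasings above collapse into one form (e.g., “the benefits of working for the company”), so a single set of utterances can match them.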
Another limitation is the lack of support for different languages. In our particular case, it would be very useful if Amazon Lex supported Latin American Spanish. To really be a global tool, support for different written and spoken languages is a must.
Amazon Lex is a really powerful platform for creating pseudo-intelligent chatbots, a fact we hope to have demonstrated by creating our simple FAQ chatbot, Winston. But the technology that Amazon Lex provides goes beyond simple FAQ applications; it could be used to interact with third-party services to fulfill more complex user requests such as, “How many vacation days do I have?” We will address how to add more complex capabilities in a future blog post; stay tuned!
References and Resources: