Our journey together, so far, has us clearly identifying the business problem we are trying to solve and meticulously mapping our use case(s) to the value proposition we defined. We have also validated that a conversational user interface (CUI) is (or is not) the right choice for the implementation. Full speed ahead, right? Well, let’s pump the brakes for just a moment and examine the form, fit, and function of a chatbot against the type and size of the organization you are planning to implement it in.
I think most of you would agree that large Fortune 500 enterprises should not run their businesses on Access databases or ‘that spreadsheet on Bill’s laptop’. Likewise, you probably wouldn’t recommend that your child’s neighborhood lemonade stand purchase SAP to manage its supply chain of lemons! These are clear (and purposefully ridiculous) examples of a maturity mismatch. The size, complexity, and needs of the organization should define how complex the solution must be to meet those needs.
Conversational AI, and by extension the use of a conversational user interface, is no different. It is broad in capability and can serve a wide range of organizations, which is both a blessing and a curse. Without careful consideration, you could end up with a subpar product, or an unwieldy implementation that is “too much bot for your biz”.
This blog installment is about getting to the crux of product selection and how important it is to buy the right amount of conversational AI functionality for your needs. If you have done any shopping for these types of platforms, you may have noticed that they are usually separated into tiers with associated functionality and cost. I have done quite a bit of competitive window shopping myself, and have found that the functionality described in those tiers is not always clear or easily translatable to a set of features that you can align with your organization’s needs. In many cases, those tiers may be additionally defined by parameters like volume consumption (e.g. messages per month), which can also be unintuitive. Many of these features are going to be binary choices and might force you into a particular tier. Here are a few common ones:
- Integration with specific platforms
- Cloud vs. on-premises deployment
- Security compliance
Once the features are considered, you should move to functions, which are not as clear-cut. Functions are what the bot is actually capable of doing and how advanced it is. For example, say your business problem is to provide an interactive form of Q&A on your company website. The derived use case is a chatbot icon, always present in the bottom corner of the webpage, that pulls from a vast compilation of FAQs in a knowledge base. In this case, the simplest of chatbots can serve that need, and almost any platform out there is capable of that functionality.
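To make that baseline concrete, here is a minimal sketch of a flat FAQ bot, assuming a tiny hand-built knowledge base. Real platforms use trained NLP models for intent matching; naive word overlap stands in for that here, and the questions and answers are invented for illustration.

```python
# Toy FAQ bot: match a user question to the closest FAQ entry by word overlap.
# The FAQ content below is hypothetical example data.
FAQ = {
    "what are your hours": "We are open 9am-5pm, Monday through Friday.",
    "where are you located": "Our office is at 123 Example Street.",
    "how do i reset my password": "Use the 'Forgot password' link on the sign-in page.",
}

def answer(question: str) -> str:
    """Return the FAQ answer whose wording best overlaps the question."""
    words = set(question.lower().split())
    best, score = None, 0
    for key, reply in FAQ.items():
        overlap = len(words & set(key.split()))
        if overlap > score:
            best, score = reply, overlap
    return best or "Sorry, I don't have an answer for that."
```

Every question gets a one-shot answer or a fallback; there is no dialog state, which is exactly why this tier of bot is so widely available.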
Let’s take it one step further. What if the answers are not linear, as in the FAQ, but require the bot to ask questions to arrive at them? Now the bot needs to provide a guided Q&A, which is more advanced than the previous example and requires a more mature chatbot technology with a conversational design element and some workflow capability. Finally, what if the result of that guided experience is to schedule an appointment or create a service ticket on the user’s behalf? You can see that the need for a more complex conversational AI platform can grow very quickly, but is your staff ready to handle the result of that complexity?
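The guided escalation described above can be sketched as a small decision tree that ends in an action. The node names, dialog content, and `create_ticket` helper below are purely hypothetical stand-ins for a vendor’s workflow engine and CRM/ITSM integration.

```python
# Guided Q&A as a tiny decision tree: interior nodes ask a question and
# branch on the reply; leaves either give advice or trigger an action.
DIALOG = {
    "start":    {"question": "Is the issue with hardware or software?",
                 "hardware": "hw_power", "software": "sw_restart"},
    "hw_power": {"question": "Is the device powering on? (yes/no)",
                 "yes": "create_ticket", "no": "check_cable"},
}

LEAVES = {
    "sw_restart":  "Try restarting the application first.",
    "check_cable": "Check the power cable, then chat with us again.",
}

def create_ticket(transcript):
    """Stand-in for a real service-desk integration."""
    return f"TICKET-{len(transcript):03d}"

def run_dialog(answers):
    """Walk the tree using scripted user answers; return the outcome."""
    node, transcript = "start", []
    while node in DIALOG:
        step = DIALOG[node]
        reply = answers.pop(0).lower()   # assumes a valid scripted reply
        transcript.append((step["question"], reply))
        node = step[reply]
    if node == "create_ticket":
        return f"Created {create_ticket(transcript)} on your behalf."
    return LEAVES[node]
```

Even this toy version shows the jump in maturity: the bot now carries state between turns and can take an action with real downstream consequences, which is precisely where staffing and process readiness start to matter.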
I worked with a customer some time back who employed a chatbot for lead generation on their external website. The thought was that the bot would take the place of the common contact form that is prevalent on most commercial sites. The use case had the bot creating leads in the customer’s CRM system for the sales executives to follow up on. Sounds like a great implementation, right? I recommended against the company moving forward with this at that time, because upon closer examination, the maturity of the website the bot was deployed on was grossly misaligned with the capability and data quality of the CRM system. Beyond that, they did not have the sales staff to follow up on the poor-quality leads being generated. Long story short, they went with a different vendor and did it anyway. I followed up with them a year or so later, and they were very unhappy with the results. They reported that many potential customers were not being contacted even a month after their site visit! This was a classic case of maturity mismatch. The business problem was real. The use case sounded good, but the result was poor, not because of the technology they chose, but purely because of the business characteristics of their organization and the technology they already had in place.
So the burning question is “How do we avoid the same fate?” An obvious answer, and a semi-shameless plug, would be to choose the right vendor! You should be looking for one that is more interested in the outcome of the implementation rather than force-fitting their product into your organization. Beware of the statement: “Well, we haven’t seen that use case before, but I am sure we can make it work”. Beyond that, there are several things you can do before even selecting a vendor:
- Be willing to concede that you don’t need certain functionality or features if the use case doesn’t explicitly warrant it.
- Understand your future-proofing strategy. I often justify too large of a purchase under the guise that I will use the functionality someday, and that someday never comes. Case in point is my insanely expensive commercial-grade drone, but that is a conversation for another day!
- Don’t sacrifice solid base performance for fringe feature sets. It does you no good to have a chatbot that can interface with an IoT device if the NLP is junk and it can’t recognize intent. You can tell the maturity of a vendor and their product by where they invest their technology dollars. If they have a solid, high-performing bot, that generally means they have many (potentially large) customers using it who demand that level of consistency. The ones with lots of fringe features are probably trying to differentiate themselves with thin layers of ‘glitter’.
I do want to take a moment to point out some emerging technology functions that are quite impactful to most implementations regardless of the use case. Some of these you may be aware of, and some you may not:
- Self-learning – This refers to the ability of a chatbot to generate or associate ‘utterance examples’ with an intent without an administrator having to add them, allowing the bot to continually improve at recognizing what the user is trying to say.
- Intent discovery – This is the ability of the chatbot to continuously monitor a body of information (think knowledge bases, FAQs, websites, etc.) to unearth new topics users might ask about, and to derive the intent definitions and seed utterances needed to service those questions. This is incredibly useful and dramatically cuts down on maintenance, keeping your bot from going ‘stale’ relative to the information in the source.
- Episodic memory for contextual conversation – This refers to the chatbot’s ability to carry on a continual conversation about a subject with context, and to remember previous interactions with the same user. If you are testing a chatbot and it feels like a sequel to 50 First Dates, it does not have this functionality!
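Episodic memory, the last item in the list above, can be illustrated with a toy per-user store. The class and storage shape below are assumptions made for illustration, not any product’s API; production systems use durable stores and far richer context models.

```python
from collections import defaultdict

class EpisodicMemory:
    """Toy per-user memory: remember past utterances tagged by topic."""

    def __init__(self):
        self.episodes = defaultdict(list)  # user_id -> list of (utterance, topic)

    def remember(self, user_id, utterance, topic):
        self.episodes[user_id].append((utterance, topic))

    def recall(self, user_id, topic=None):
        """Return a user's past utterances, optionally filtered to one topic."""
        history = self.episodes[user_id]
        if topic is not None:
            history = [(u, t) for u, t in history if t == topic]
        return history

memory = EpisodicMemory()
memory.remember("alice", "My order #123 never arrived", "orders")
memory.remember("alice", "What are your hours?", "general")

# On Alice's next visit, the bot can reopen the unresolved order thread
# instead of starting from scratch, 50-First-Dates style.
previous = memory.recall("alice", topic="orders")
```

The point is simply that returning-user context has to live somewhere across sessions; a bot without some equivalent of this store will greet every visitor as a stranger.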
I know this is a particularly long installment already, but I would like to cover one more subject that may seem unrelated, but in reality is quite relevant to the topic of maturity: ethical AI. I had the distinct privilege of serving, by invitation only, on a vendor steering committee for Microsoft partners leading the AI segment. The goal was to establish guidelines for partners to follow ethical AI practices with their customers. We shared many real-world use cases where it was questionable whether they could be ethically implemented. I am not referring to anything particularly nefarious, but more along the lines of: should a chatbot really be handling this task? Or, can it do the task securely, without privacy concerns? Ironically, most of the time the answer was simply no, this is not a good use case for a chatbot. One example that stood out to me was a 24/7 anonymous chatbot for suicidal or abused teens. The intricacy and delicacy of dealing with that type of situation, and the potential impact an AI interaction can have on such a user, make this a crucial consideration. I suppose the takeaway here is that no matter how mature your organization might be, the technology for an ethically responsible implementation of a use case might not be that mature, and to be completely honest, might never be.
I sincerely hope there were some useful tidbits of information for you here. Stay tuned for the next installment where we will discuss the artificial intelligence DNA of conversational AI and how it fits into an intelligent workplace.