AI at EC19: Sharing Tips & Tricks from the Experts

Artificial intelligence is keeping our industry relevant and challenging, and it will engage everyone.

Brent Kelly

March 26, 2019


As mentioned by others here on No Jitter, artificial intelligence (AI) was the hot topic at Enterprise Connect 2019. From Microsoft Graph to Amazon Alexa for Business to Google Dialogflow to Cisco Cognitive Collaboration, AI-infused products were promoted for their ability to improve our business collaboration and business operations. AI was so popular that even random drivers in Kissimmee, Fla., got into the act by promoting their favorite AI solutions with vehicle wraps, as shown below.


A car on International Drive in Kissimmee, Fla., sporting a wrap promoting cognitive AI

Nowhere was the topic of AI more prominent than in some of the Contact Center & Customer Experience sessions. I attended as many AI-related breakout sessions as possible and had numerous discussions about the key ideas that emerged from them. Here are some of the insights I learned throughout the week.

 

The Transformative Impacts of Artificial Intelligence on UC

In this tutorial, industry analyst Kevin Kieller, of enableUC, discussed AI technologies and how they are intersecting with and enhancing unified communications. I took away these key points:

  1. The importance of gathering and storing data. Data drives AI, which means you should begin capturing as much data as possible. You never know when it will be useful as a factor in a future AI project.

  2. Make sure the data you capture is normalized, or at least labeled. If you capture numbers, record what they represent -- dollars versus euros, for example. Label the data insofar as possible.

  3. Don't rush down the path to AI if you don't need it. Applying AI without delivering results isn’t useful.

  4. AI is still in its infancy. Some of the interesting things in UC right now involve intelligent or automatic meetings joins, video background blur, counting participants in a meeting, note taking, automated action items, documenting decisions, and adjusting infrastructure in real time.

  5. In a multiparty meeting in which automated speech transcription is in use, speech-to-text works better if people join the meeting from home or a location outside of the conference room. If everyone is in the same room, distinguishing the different speakers is often difficult for the speech-to-text algorithm.

  6. AI comes with some legal implications and risks -- Kevin mentioned that 55 companies noted AI as a risk factor in their 2018 annual reports. One of the key risks is that AI algorithms may be flawed or biased in some way, meaning that they treat people differently in commercial situations that demand equal treatment under the law.
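Point 2 above -- store labeled, normalized data rather than bare numbers -- can be sketched as a simple pattern. This is an illustrative sketch only; the field names and exchange rates are hypothetical, not drawn from any particular product.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class LabeledMetric:
    """A captured data point with an explicit label and unit, so a
    future AI project can interpret it without guesswork."""
    name: str    # what the number represents, e.g. "deal_value"
    value: float
    unit: str    # e.g. "USD", "EUR", "seconds"

def normalize_to_usd(metric: LabeledMetric, fx_rates: dict) -> LabeledMetric:
    """Normalize a monetary metric to USD using a table of exchange rates.
    Metrics in units without a listed rate are returned unchanged."""
    if metric.unit in fx_rates and metric.unit != "USD":
        return LabeledMetric(metric.name, metric.value * fx_rates[metric.unit], "USD")
    return metric

# Example: two deals captured in different currencies, normalized for storage.
rates = {"EUR": 1.12}  # hypothetical EUR->USD rate
deals = [LabeledMetric("deal_value", 1000.0, "EUR"),
         LabeledMetric("deal_value", 500.0, "USD")]
normalized = [normalize_to_usd(d, rates) for d in deals]
```

The point of the pattern is that a dataset captured this way remains usable years later as an AI training input, because the meaning of each number travels with it.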

 

Legal Notification Required for Transcription?

Following Kevin’s session, I had an opportunity to speak with Martha Buyer, an attorney specializing in telecommunications law, in the speaker-ready room. Because speech-to-text and transcription are so important to communications generally, I pitched the following question to Martha, “Do we need to inform another party in a voice or video call if we are transcribing the audio, as opposed to just recording it?” Meaning, if we just transcribe the call but don’t record the audio or video, do we need to let the other person or people in the meeting know?

Natural language processing (NLP) tools work with text, and the implication is that as these AI tools become part of our normal toolkit, we’re likely to keep them “always on” in any type of meeting we participate in. Do we need to tell someone that the meeting is being transcribed even if the audio isn’t being recorded? And, if we’re using a cloud-based service located in another state, does that affect our legal obligations?

Martha quickly found the legal code and responded as follows:

This is a great question and it’s been interesting finding the answer. Here is the federal statutory language... The magic word is intercept. It’s not even the “holding” of the information but rather its interception.

The state rules may be more specific, but I don’t think that you need to go much further than this to see that it’s the participation of a third party, regardless of how passive it is, that triggers the potential problem.

“It shall not be unlawful under this chapter for a person not acting under color of law to intercept a wire, oral, or electronic communication where such person is a party to the communication or where one of the parties to the communication has given prior consent to such interception unless such communication is intercepted to commit any criminal or tortious act in violation of the Constitution or laws of the U.S. or of any State.”

18 USC § 2511(2)(d)

Martha and I discussed these paragraphs and reached some conclusions. You can draw your own conclusions, or better yet, contact Martha directly for a great legal answer. The implication is that we may need to exercise some care as we use cloud-based transcription. The legal requirements may even vary from state to state. For example, in Utah, where I live, only one party needs to be aware that a voice communication is being recorded. But if I record it using a service based in another state, or the voice stream crosses state lines, do both parties then need notification? What are the rules for conversations that are automatically transcribed by an AI machine but that aren’t recorded? These are questions that will undoubtedly come before a court somewhere. Per Kevin’s comment above about legal risks, AI and cloud-based services open new risk vectors that need exploring.

 


Automation Vs. AI-Assisted Humans: Where to Draw the Line

This session, part of the one-day Communications & Collaboration 2022 program, was moderated by contact center experts Sheila McGee-Smith, of McGee-Smith Analytics, and Nicolas de Kouchkovsky, of CaCube Consulting, with panelists Patrick Nguyen, of [24]7.ai; Abinash Tripathy, of Helpshift; Adi Toppol, of Jacada; and Lonnie Johnston, of NICE. Here are the AI-centric takeaways from this session:

  1. What AI technologies are ready for prime time? Which AI technologies to try or use depends on your organization’s maturity. If you’re already very digital and work with APIs, then a starting point could be bots, and using them should be relatively straightforward for you. That said, when you do use bots, think of them as interfaces to self-service applications. For those organizations that don’t yet have good API programming skills, robotic process automation (RPA) will be easier when used in conjunction with a contact center to provide some framework. Regardless, be selective with what you pilot. Carefully consider which AI technologies make sense for your company, given where it is.

  2. What’s left for the humans? Is there a long-term role for people? Humans will fill in the gaps AI leaves, and there will always be complex situations requiring the human touch. The 80-20 rule applies here: According to one of the panelists, 80% of contact center work can be automated, but the remaining 20% will be human-to-human interactions. What's left over after the AI does its work are the complex cases, and in recognition of this, some contact centers are removing the handle-time constraints for these interactions. As one panelist noted, "It is not a stopwatch exercise at that point." Also remember that humans are generating the data from which the AI learns. Today, the human handles the customer logs/recordings used as training sets for the AI.

  3. What are the best practices for automation anxiety? First off, you have to choose the right use cases and make sure you’re delivering value. Many companies force bots on their customers; applying design thinking will result in more careful designs that don’t annoy the customer. Generating and tagging data is critical, but efforts to automate the tagging of data fed into the AI learning set have been abysmal. Consider solutions that leverage conversational models that don’t require so much effort to keep up to date.

  4. Identify priority projects and keep them simple. Wrap an AI project into a straightforward pilot or proof of concept – and make it data-driven. Look at your existing agent chats and conversations to determine what to automate, for example.

  5. How to address trust with consumers? Don’t pass a bot off as a human interaction – customers will notice because bots and AI in general are very poor at empathy. You also have to be aware of relevant laws. In some states, you must tell people they’re speaking with a bot. Likewise, you need to let people know when they’ve transitioned from a bot to a human.

 

What It Really Takes to Put AI to Work in Your Contact Center

I billed this session, which I moderated, as technically intermediate, and the panelists didn’t disappoint. Callan Schebella, of Inference Solutions; Jeff Gallino, of CallMiner; and Adam Champy, of Google, shared deep insights into how organizations can successfully put AI to work in their contact centers. Attendees learned:

  1. The barriers to using AI are falling. For example, speech translation and NLP used to be so difficult that only very large organizations could afford them. Now, with modern tools and platforms, even small and medium-sized businesses can use them. At the same time, deployment has shrunk from months to hours or days.

  2. Bots don’t work very well in proprietary domains or when customers get emotional. The reason they don’t work well in proprietary domains is that vendors have trained off-the-shelf speech-to-text and NLP engines using “everyone’s data,” which means the engines won’t reliably recognize specific terms, phrases, product names, or concepts. For proprietary domains, plan on training the speech and NLP platforms you use. As noted above, bots and empathy don’t mix very well. In fact, having a bot say, “I understand you’re unhappy,” can make people even angrier or more frustrated than they were when beginning the interaction. They recognize the company is being disingenuous; they know a bot is a program that can’t really understand emotion. Bear in mind, too, that anger detection doesn’t equate to understanding; an intelligent agent should simply pass the transcript of the conversation and the anger sentiment to a live agent.

  3. Bots have become simple to build, but that has led to many poor designs and deployment challenges. The user interface, including voice cadence and inflection, is very important, yet bot developers tend to be myopic with respect to their own products or services. They need to get real-world data on how customers will speak and act, and use this data to train their bots accordingly. They must also remember that bots aren’t IVR: Deployment isn’t the end. Bots need continuous updating, and the organization must be prepared to evaluate performance continuously and make tweaks or adjustments to ensure adequate bot performance.

  4. When you use bots, you must be sure the customer can make progress toward problem resolution. The last thing you want is the “perfect voice” interface but a frustrated user. In other words, don’t lock your users inside a bot. Make an easy escape route for them. And here’s another tip: Avoid creeping users out by giving too many hints as to how much the bot knows about them.

  5. Avoid tools that don’t have sufficient experience and depth. It seems that some new AI tool or company appears almost weekly, but you’ll usually find many lack some core component you’ll ultimately need. Consider the CallMiner graphic below; as it shows, we’re in generation seven of speech analytics tools, but the foundational capabilities of generations five and six remain essential -- and without them an AI project will be difficult or impossible to accomplish successfully.

  6. Integration in the workflow is critical. Building a standalone bot isn’t likely going to generate much success. To be powerful, bots must be accurate in determining “intent” and must integrate well with the tools in the workflow.

  7. To get started, identify how AI might solve a well-scoped and meaningful problem, and then put your team in place. You’ll need to connect to critical backends and build for scalability. And, lastly, enable the organization for your new, automated processes.
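Points 2 and 4 above suggest a simple implementation pattern: track consecutive unrecognized turns and any detected anger, and hand the conversation -- transcript included -- to a live agent rather than trapping the user inside the bot. The sketch below is illustrative only; in practice the intent and sentiment inputs would come from an NLP engine, and the thresholds are arbitrary assumptions.

```python
from typing import Optional

class BotSession:
    """Minimal escalation logic: after too many unrecognized turns, or any
    detected anger, stop the bot and route the transcript to a live agent."""
    MAX_MISSES = 2  # assumed threshold; tune from real interaction data

    def __init__(self) -> None:
        self.transcript: list[str] = []
        self.misses = 0
        self.escalated = False

    def handle(self, utterance: str, intent: Optional[str], angry: bool) -> str:
        """`intent` and `angry` are hypothetical outputs of an NLP engine."""
        self.transcript.append(utterance)
        if angry:
            # Don't have the bot feign empathy; pass transcript + sentiment on.
            self.escalated = True
            return "transfer_to_agent"
        if intent is None:
            self.misses += 1
            if self.misses >= self.MAX_MISSES:
                # Easy escape route: stop reprompting, get a human involved.
                self.escalated = True
                return "transfer_to_agent"
            return "reprompt"
        self.misses = 0  # progress made; reset the miss counter
        return f"fulfill:{intent}"
```

The design choice worth noting is that escalation carries the full transcript forward, so the customer never has to repeat themselves to the live agent.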

CallMiner_speech-graphic.png

 


Building One Chatbot Across All Your Channels

In this session, Daniel Hong of Forrester Research and panelist Taj Singh, with Uber, discussed a number of interesting topics, including:

  1. Why chatbots sometimes don’t work. They attributed chatbot failure to three causes: siloed teams organized by channel, which can drive up costs and lead to inconsistent user experiences; lack of experience and training in how customers might say the same things in different ways; and platform issues, such as an inability to scale and support multiple channels.

  2. Standardize on a good NLP engine like Google’s Dialogflow or one of the other reputable platforms.

  3. Consider how you’re going to do authentication across different channels. How will this vary depending upon which channel the customer uses for engagement?
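The session’s core idea -- one set of bot logic shared across every channel -- can be sketched as a single intent handler sitting behind thin channel adapters. This is a hypothetical sketch: the `detect_intent` stub stands in for a shared NLP engine such as Dialogflow, and the channel names and formatting rules are assumptions for illustration.

```python
from typing import Callable

def detect_intent(text: str) -> str:
    """Stand-in for one shared NLP engine used by every channel.
    In production this would be a single API call (e.g., to Dialogflow)."""
    return "check_order" if "order" in text.lower() else "fallback"

RESPONSES = {
    "check_order": "Your order ships tomorrow.",
    "fallback": "Sorry, I didn't catch that.",
}

# Channel adapters only handle presentation; intent logic stays shared.
FORMATTERS: dict[str, Callable[[str], str]] = {
    "sms": lambda r: r[:160],         # SMS length limit
    "web": lambda r: f"<p>{r}</p>",   # web chat renders HTML
    "voice": lambda r: r,             # TTS reads plain text
}

def handle(channel: str, text: str) -> str:
    """One bot, many channels: shared NLP, per-channel formatting only."""
    return FORMATTERS[channel](RESPONSES[detect_intent(text)])
```

Keeping the NLP layer singular avoids the siloed-team failure mode described above: new training data improves every channel at once, and each new channel is just another formatter entry.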

 

Companies at Enterprise Connect with an AI Component

Beyond the conference program, AI found a home on the Expo floor, too. It was interesting to walk the show floor and see how many exhibitors had some element of AI as part of their product offerings. The list below includes the AI-focused companies I personally saw or encountered on the show floor or in session presentations (my apologies to any that I missed).

AIatEC.PNG

 

Conclusion

As evidenced in this EC19 recap, AI is all around us. It is only going to get hotter as more and more organizations try to use it in their contact centers and as they encounter it in the UC products they deploy. AI is keeping our industry relevant, fun and challenging; everyone will engage AI and be engaged by it.

About the Author

Brent Kelly

Brent Kelly is a principal analyst for unified communication and collaboration within Omdia’s Digital Workplace team.

Since 1998, Brent has been the principal analyst at KelCor, Inc., where he provided strategy and counsel to CxOs, investment analysts, VCs, technology policy executives, sell-side firms, and technology buyers. He also provided full-time consultancy to Wainhouse Research and Constellation Research. With a PhD in chemical engineering, Brent has a strong data background in numerical methods and applied artificial intelligence with significant experience developing IoT and AI solutions.

Brent has a Ph.D. in chemical engineering from Texas A&M University and a B.S. in chemical engineering from Brigham Young University. He has served two terms as a city councilman in his Utah community. He is an avid outdoorsman participating in cycling, backpacking, hiking, fishing, and skiing. He and his wife own and operate a gourmet chocolates manufacturing company.