
Enterprise Connect AI: Practitioners Advise Approaching AI With Caution

AI’s impact ranges from the contact center to customer experience, but it comes with risks that need to be acknowledged and mitigated.

Matt Vartabedian

October 3, 2024

7 Min Read
Image: Tierney - stock.adobe.com

During day two of Enterprise Connect AI, presenters with Deloitte shared their perspectives on how their clients are approaching the integration of AI into existing technology and systems. Later that morning, another pair of presenters spoke about the potential impact of AI on contact center employment and provided some thoughts about how to possibly mitigate the risk associated with implementing Gen AI.

The Deloitte Keynote: Profitability and Performance Are Not Always Straightforward

Two speakers from Deloitte, Vinita Kumar, Business Development, and Dexter Zavalza, Conversational and Generative AI UX Design Lead, shared some of what they had learned from customer deployments.

Zavalza said that many of his clients find themselves wrestling with finding the integration points among older, on-premises solutions and new CCaaS and/or standalone AI solutions. “We often start the conversation by going back to the fundamental business case and asking: How will an AI enabled solution drive value for what they're trying to do as an enterprise?” he said.

The answer to that question can end up meaning that a given company may not really need, for example, a new LLM-based intent-classification system for an IVR at the scale of a contact center that has 50,000 agents waiting on the other side. “If we put the resources and the people in the right places, on a good old fashioned NLU model we can classify intents at a 90 to 95% [success rate],” Zavalza said. “That may or may not make sense longer term, but today, given cost constraints [of both legacy and new approaches], we want to make sure we're having those bigger conversations.”
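Zavalza's "good old fashioned NLU model" point can be illustrated with a minimal sketch. The intents and keyword lists below are hypothetical examples, and the keyword scoring is a deliberately simple stand-in for a trained NLU classifier; it is not how any production system or Deloitte deployment actually works.

```python
# Minimal sketch: keyword scoring standing in for a trained NLU intent
# classifier. Intents and phrases are hypothetical examples.
KEYWORDS = {
    "change_address": ["address", "move", "relocate"],
    "pay_bill": ["pay", "bill", "payment"],
    "add_card_user": ["add user", "authorized user"],
}

def classify_intent(utterance: str) -> str:
    text = utterance.lower()
    # Score each intent by how many of its keywords appear in the utterance.
    scores = {
        intent: sum(1 for kw in words if kw in text)
        for intent, words in KEYWORDS.items()
    }
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "unknown"

print(classify_intent("I need to pay my bill"))  # pay_bill
```

A real NLU model replaces the keyword scoring with a trained classifier, but the interface is the same: utterance in, intent label (or "unknown") out, which is the piece an enterprise weighs against an LLM-based alternative.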

Kumar noted that there’s a nuanced approach to profitability, the cost of deployment and the experience ultimately delivered to customers – which is the whole point. “You can use AI to drive down cost, but you're constantly balancing that [deployment cost] with either maintaining the [customer] experience – or maybe even making it better,” Kumar said.

One new capability enabled by AI and large language models is the ability to collect all of a customer’s intents (e.g., change address, pay a bill, add users to a credit card), park those intents and then come back to them. Zavalza likened conventional DTMF or branch-based systems, and even many ‘conversational AI’ bot systems with programmed paths, to putting customers on ‘rails,’ where they cannot necessarily accomplish multiple tasks within a single session without bouncing back to the main menu or, worse, calling in again.

“Now that we're able to use the integrations of traditional NLP models with LLMs, we can get all that intent input. So we can take the second intent, park it, and come back to it,” Zavalza said. “We're starting to move away from the ‘train on the tracks,’ to the drone, where we can go up and come down in a different place. That's really interesting from a design perspective.”
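The "park and return" pattern Zavalza describes can be sketched in a few lines. Here, `extract_intents` is a hypothetical stand-in for the NLP/LLM step that pulls multiple intents out of one utterance; the queue is what replaces bouncing the caller back to the main menu.

```python
from collections import deque

def extract_intents(utterance: str) -> list[str]:
    # Hypothetical stand-in: a real system would use an NLU model or LLM
    # to find every intent in the caller's utterance.
    phrases = {"address": "change_address", "bill": "pay_bill", "card": "add_card_user"}
    text = utterance.lower()
    return [intent for kw, intent in phrases.items() if kw in text]

def handle_session(utterance: str) -> list[str]:
    # Park every detected intent up front, then come back to each in turn,
    # all within a single session.
    parked = deque(extract_intents(utterance))
    handled = []
    while parked:
        handled.append(parked.popleft())  # handle the next parked intent
    return handled

print(handle_session("I want to pay my bill and also update my address"))
```

The design point is the separation: capturing intents once, up front, means the dialogue flow no longer has to be a fixed path (the "rails"), because any unhandled intent is still waiting in the queue.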

Multimodal interactions between customers and agents – voice, text, chat, video, etc. – can make those interactions not only better but also more effective. For example, say the agent must verify the customer’s identity. The agent can text the customer a link to a secure, temporary session that authorizes the customer’s phone and lets them take a photo of their driver’s license and upload it; on the back end, OCR and/or AI can automatically enter that information into the correct fields. That can be far more effective than asking the customer to read the license number aloud, as well as a huge time saver.

“That self-service aspect often has a direct impact on the agent experiences,” Kumar said. “And, more broadly, if AI handles those higher volume, less complex interactions, then often [what we see] is that the average handle time for the agents – who are now handling more complicated interactions – increases. So our clients are thinking about agent-facing generative AI tools – summarization, avoiding the need to re-authenticate callers, etc. – they can deploy to help maintain or decrease the average handle time.”

Insights from Self-Service Strategies: Get Rid of Agents—Or Chatbots?

Raj Gajwani, Managing Partner at Day 0, and Neil Shah, Technologist in Residence, Generative AI, US Digital Response, spoke about the confluence of LLM-powered chatbots, contact center agents and good user experience design and outcomes.

Since the debut of ChatGPT, and the integration of various LLM-powered solutions into CCaaS and UCaaS platforms, one of the driving debates has been around the loss of jobs. For example, because Gen AI solutions exist, and are getting better at doing what humans can do, human customer service agents will lose their jobs.

“It's our belief that you won’t have human customer service go away because of these use cases. We've already replaced people with FAQs and wikis, and there's more call center reps than ever,” Gajwani said. “In terms of number of customer touch points with a company, there's going to be more customer touch points on self-service, and a larger fraction of those will be automated. But, we think that the volume of human customer service will go up.”

Gajwani also expects that as customer service costs come down and the tools get better, consumers will expect to have more and better customer service from the companies they interact with. Moreover, departments and divisions that don’t have ‘call centers’ now may end up having call centers if, for example, one person can handle many calls a day. This goes to a point raised by Omdia analyst Mila D’Antonio who wrote last year about connecting a customer directly to an internal knowledge worker who has the answer to their question.

“The implications for those who run customer service organizations will be that you may have fewer agents, but you’ll definitely have better, trained customer service reps,” Gajwani said. “You'll go from Tier 1-2-3, to maybe Tier 2-3-4 support, and tier one becomes even more automated, and those better trained CX reps have essentially robot exoskeletons. They’ll have more power.”

One reason almost every speaker at EC AI said that ‘everyone’ is deploying this internally is that Gen AI doesn't work perfectly. That’s because, at least in part, it is probabilistic, not deterministic. People are used to computers working in a deterministic way, where an application always produces the same output from the same data set. “Thinking about a probabilistic application has major differences in how you think about deploying computer applications. Frankly, I think it's a lot like managing an organization of people,” Gajwani said. “People are probabilistic too. Mostly, they do the right thing. So our management techniques for people may be related to the management techniques we have for AI tools.”
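The deterministic-versus-probabilistic distinction is easy to show with a toy stand-in: a conventional application maps the same input to the same output every time, while an LLM-style component samples from a distribution, so repeated calls on identical input can differ. Both "applications" below are invented for illustration.

```python
import random

def deterministic_app(x: int) -> int:
    # Conventional software: same input, same output, every time.
    return x * 2

def probabilistic_app(x: int, rng: random.Random) -> int:
    # LLM-style component: the output is sampled, so identical inputs
    # can yield different results from call to call.
    return x * 2 + rng.choice([-1, 0, 1])

rng = random.Random()
assert all(deterministic_app(21) == 42 for _ in range(100))
samples = {probabilistic_app(21, rng) for _ in range(100)}
print(samples)  # some subset of {41, 42, 43}; varies across runs
```

This is why the speakers frame Gen AI deployment as managing a distribution of behavior – testing, monitoring and guardrails – rather than verifying a single fixed output.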

Gajwani then shared a story about a company (a Google Moonshot) that worked for about four years to develop an agriculture-focused AI that could identify which plants in a field needed water, pesticide, etc. This now-defunct company spent hundreds of millions of dollars on that custom AI, only to find, in the 2022 timeframe, that when it fed the exact same agricultural dataset into Google Cloud’s then-latest generic model available in Vertex AI, the generic model performed better than the custom one.

“The state of the art is accelerating so fast that the generic solution in two generations may be better than the model you've so carefully customized and built,” Gajwani said. “It's a real risk and you should be careful about where you spend money.”

And given that pace of change, Gajwani and Shah suggested that the vendors who are building AI into their products – Microsoft with Copilot, Google with Gemini, Salesforce with their AI, etc. – may be the safest bet. “There's hundreds of billions of dollars being used to subsidize the introduction of AI into other people's products, and right now, they're all giving it away below cost,” Gajwani said.

However, that approach of perhaps relying on those big cloud and/or LLM providers comes with its own risk, as Shah elucidated. “The underlying models are constantly being changed. They are just shipping changes without necessarily informing you. To some extent, these LLMs are a big black box, and we don’t really know how they work. So, if you're just trusting this production model, something that you've built use cases around, [your results] may all of a sudden deviate. That's just the nature of it right now.”

About the Author

Matt Vartabedian

Matt Vartabedian is the Senior Editor of No Jitter. Matt is an accomplished cellular industry analyst, researcher, writer, and content creator with more than 25 years’ experience. His work includes authoring market reports, articles, presentations, and opinion pieces grounded in significant research, data analysis, and accumulated expertise for clients involved in various roles from business unit to C-suite executives. He can be found on LinkedIn here.