Why Recent AI-linked Lawsuits Will Impact Your Contact Center
As a recent class action lawsuit against Patagonia shows, solution providers and organizations face a growing number of challenges in deploying AI for customer service.
July 24, 2024
A proposed class action lawsuit against retailer Patagonia filed July 11th caught the attention of many of us who work in the contact center market – because it directly references how AI is used in the contact center. The plaintiff alleges that a call she made to Patagonia’s customer service line in January 2024 was “intercepted, listened to, recorded and used by an undisclosed third party, Talkdesk.” (Talkdesk, while it is referenced in the filing, was not named as a proposed defendant in the lawsuit.)
A simple internet search shows that similar lawsuits have been filed in 2024:
In May, a proposed class action lawsuit was filed against Navy Federal Credit Union. It alleges that, in addition to “wiretapping” and recording customer calls, a third party, Verint, uses the intercepted communications to enhance its products and train its AI models.
In March, a California resident filed a class action complaint against both Home Depot and Google, specifically Google’s Cloud Contact Center AI (“CCAI”) service. It alleges that anyone who called Home Depot customer service had their privacy violated when Home Depot allowed Google to access, record, read, and learn the contents of their calls.
According to global legal firm White & Case, there is currently no comprehensive federal legislation or regulation in the US that governs the development of AI or specifically prohibits or restricts its use. However, existing state and federal laws can be applied to the use of AI in the suits listed above. For example, all three cases invoke the California Invasion of Privacy Act (CIPA), which prohibits third-party monitoring and recording of telephone calls without prior consent. (Emphasis mine.)
Legacy Call Recording
Contact centers have been recording calls for decades and most have implemented an announcement in order to adhere to existing regulations. We have all heard, "This call may be monitored or recorded for quality assurance and training purposes."
In the past, when I heard this warning message before a call, I imagined a few calls being reviewed each month for each agent. I did not think of it as wiretapping or an invasion of my privacy. But I also did not imagine my voice being transcribed in real time, introducing the possibility of influencing agent behavior during an active call. For example, if I am calling to express dissatisfaction with a product, that is not the time for an agent to segue into a sales pitch instead of addressing my issue.
Automated Quality Management and Beyond
Automated quality management (AQM), where every contact center call is transcribed and scored automatically using artificial intelligence, has become accurate and cost-effective enough for general deployment only in the past few years. Generative AI has further improved the business case by making AQM easier and less expensive to deploy.
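To make the mechanics concrete, here is a minimal, purely illustrative sketch of automated call scoring in Python. The rubric, phrases, and weights are invented for this example and are not tied to any vendor's product; real AQM systems rely on trained models rather than keyword checks.

```python
# Minimal, illustrative sketch of automated quality scoring.
# The rubric and rules here are invented for illustration only;
# real AQM products use trained models, not keyword checks.

from dataclasses import dataclass

@dataclass
class RubricItem:
    name: str
    required_phrases: tuple[str, ...]
    weight: float

RUBRIC = (
    RubricItem("greeting", ("thank you for calling", "how can I help"), 0.2),
    RubricItem("verification", ("verify your account", "date of birth"), 0.3),
    RubricItem("resolution", ("I've resolved", "issue is fixed"), 0.5),
)

def score_call(transcript: str) -> float:
    """Return a 0-100 quality score based on which rubric items appear."""
    text = transcript.lower()
    earned = sum(
        item.weight
        for item in RUBRIC
        if any(phrase.lower() in text for phrase in item.required_phrases)
    )
    return round(100 * earned, 1)

if __name__ == "__main__":
    sample = "Thank you for calling. Can you verify your account? ... The issue is fixed."
    print(score_call(sample))  # prints 100.0 for this sample transcript
```

The point of the sketch is simply that once every call is transcribed, scoring all of them becomes a batch computation rather than a manual sampling exercise.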
Beyond quality management, real-time transcriptions can now be used for far more than the traditional notion of agent improvement. AI models can predict, moment by moment, the sentiment of the caller: is he or she angry, frustrated, being abusive to the agent? Armed with this information, the agent or supervisor can adjust their verbal behavior to steer the caller’s sentiment in a more positive direction.
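As a rough sketch of how moment-by-moment scoring might work, the toy example below tracks a rolling sentiment score over streamed transcript chunks. The word lists and alert threshold are invented for illustration and bear no resemblance to a production sentiment model.

```python
# Toy rolling-sentiment tracker over streamed transcript chunks.
# Word lists and the alert threshold are invented for illustration;
# production systems use trained sentiment/emotion models.

from collections import deque
from typing import Iterable

NEGATIVE = {"angry", "terrible", "refund", "broken", "unacceptable"}
POSITIVE = {"thanks", "great", "perfect", "resolved", "appreciate"}

def rolling_sentiment(chunks: Iterable[str], window: int = 3):
    """Yield (chunk, score): the net sentiment of the last `window` chunks."""
    recent = deque(maxlen=window)
    for chunk in chunks:
        words = chunk.lower().split()
        recent.append(
            sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
        )
        yield chunk, sum(recent)

if __name__ == "__main__":
    stream = [
        "my new jacket arrived broken",
        "this is unacceptable I want a refund",
        "okay thanks that actually resolved it",
    ]
    for chunk, score in rolling_sentiment(stream):
        flag = "ALERT supervisor" if score <= -2 else "ok"
        print(f"{score:+d}  {flag}  {chunk}")
```

Even this crude version illustrates why real-time analysis changes the privacy calculus: the recording is no longer a passive archive but an input that shapes the live conversation.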
From an even broader perspective, transcriptions collected from potentially millions of calls can be used to train the AI models that predict sentiment. The transcriptions can also be used to build AI-based models for a number of additional use cases, for example:
Next best action (NBA). An analysis of many past interactions that, when delivered to an agent during a call, recommends the best action to take for a specific customer. For example, if a customer calls complaining about poor battery life on their mobile device, the agent could be prompted to suggest an upgrade to a newer phone.
Intent mining. An AI-based capability that identifies customers’ intent by analyzing historical data, including agent-customer call recordings. Intent mining can further be used to identify the types of interactions that might be served using self-service bots (a rough sketch of the idea follows this list).
Journey mapping. Maps out customer interactions and the relationship with a brand in order to optimize marketing, sales, and customer service efforts. AI can also be used to identify patterns and trends in large datasets, such as the transcriptions from a month’s worth of caller data, to understand common customer behaviors and preferences for future marketing and product development efforts.
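To give the intent mining item above a concrete shape, here is a minimal sketch that clusters a handful of invented transcript snippets by topic using scikit-learn. The snippets, cluster count, and library choice are all assumptions made for illustration; commercial intent mining products work on real call transcripts with far richer models.

```python
# Toy intent-mining sketch: cluster transcript snippets by topic.
# Snippets and cluster count are invented for illustration only;
# commercial intent mining uses far richer models and real call data.

from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

transcripts = [
    "I want to return this jacket and get a refund",
    "my refund still has not arrived",
    "how do I reset my password for the website",
    "I am locked out of my online account",
    "where is my order it has not shipped yet",
    "tracking says my package is delayed",
]

# Represent each snippet as TF-IDF features, then group into three assumed
# intents: refunds, account access, and order status.
vectorizer = TfidfVectorizer(stop_words="english")
X = vectorizer.fit_transform(transcripts)
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)

for label, text in sorted(zip(kmeans.labels_, transcripts)):
    print(label, text)
```

The clusters that emerge from a month of real transcripts are exactly the kind of derived value the lawsuits question: the caller consented, at most, to a recording, not to becoming training data.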
It remains to be seen how the aforementioned AI lawsuits will be resolved. Over time, I do expect a need for regulation that specifically addresses the use of AI in the contact center. It seems unlikely that “this call may be recorded,” with no detail of how the recording will be used, will or should suffice.
Change Required
Prior consent is a common theme in the lawsuits described above. No one will dispute that the message played to callers today does not adequately cover the various use cases created by AI technology applied to contact center call transcripts.
In cooperation with customer advisory boards and their legal teams, contact center solution providers should be proactively working on new language for call recording announcements to suggest to their customers. To my knowledge, there is no standard practice followed when customers opt out of having an interaction recorded; one may become necessary in the future as the meaning of informed consent is debated in the courts.
I believe this topic is an excellent one for the ICMI Strategic Advisory Board to take up, and it also warrants exploration at conferences such as ICMI Expo and Enterprise Connect.
The ethical use of AI in customer experience goes beyond gaining more meaningful prior consent for the use of call transcripts. Given the attention consent is getting in recent suits, however, it seems a good place to start making changes.
Solution providers should also be thinking about what to do when customers opt out of recording: what does that customer experience look like, and what legal obligations come with providing consistent customer service?