AI, Racism, and Bias: The Impact on Employees and CX

While AI bias is a real issue, AI also can be a tool to combat racism and abuse in the contact center and the larger enterprise.

Conrad Liburd

November 16, 2020


Artificial Intelligence (AI) is no longer just a buzzword being discussed in theory — we’ve reached a point where AI is delivering real, measurable impact across a wide range of industries. The AI solutions being used today are automatically capturing and making decisions based on enormous amounts of data — from supporting healthcare diagnoses and detecting fraud in financial services to improving how organizations engage with customers.

The application of AI to improve employee and customer experiences is one of its most powerful use cases. Many organizations already have a trove of customer interaction data at their fingertips — whether from call center interactions, chatbots, or other channels — and AI can turn that data into actionable intelligence that boosts customer retention, deepens loyalty, and keeps employees engaged.

But when AI is fed a flood of data with no human oversight, it has the potential to go in directions we don’t expect. Some of those instances are easy to brush off, like robots trying to escape their labs to reach freedom, but not all unintended consequences of AI are so benign. Remember Tay, the Microsoft Twitter bot that learned to spew racist remarks within a day of its launch?

Particularly in the customer service segment, it’s inevitable that AI will ingest data reflective of the ugliest parts of society, like bias, racism, and sexism. Companies know this — and many decide the best solution is to put guardrails in place to mitigate and eliminate data bias.

I believe there is a better way. This data reflects what is, but it doesn’t have to be. Is AI giving us the opportunity to be more human? What if, instead of editing it out, we kept it (for now), operationalized it, and actually learned something from it?

Taking a Different Approach to Biased Data

Most organizations — at least those with the capability to spot abuse, bias, and racism in real time — default to forgoing analysis of these instances. This means they aren’t factored into go-forward decisions about customer service and the employee experience. On some levels, this makes sense. But what about the employee who was on the receiving end of the abuse or racism? What about the bottom-line impact on the entire organization and its brand reputation?
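
To make the alternative concrete, here is a minimal Python sketch of keeping flagged interactions in the dataset and aggregating them, rather than dropping them from analysis. The names (`Interaction`, `summarize_abuse`, the flag labels) are my own illustration, not any vendor's actual data model:

```python
from collections import Counter
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class Interaction:
    agent_id: str
    started_at: datetime
    transcript: str
    abuse_flags: list[str] = field(default_factory=list)  # e.g. ["racism", "profanity"]

def summarize_abuse(interactions: list[Interaction]) -> Counter:
    """Count flagged interactions by abuse type instead of discarding them,
    so the data can feed go-forward decisions about agents and customers."""
    counts: Counter = Counter()
    for interaction in interactions:
        counts.update(interaction.abuse_flags)
    return counts
```

The point isn’t the code itself; it’s that flagged interactions stay in the data with enough metadata to answer questions like which agents are facing abuse, how often, and from whom.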

Imagine you’re a customer service representative. You take a call, and the words you hear from the consumer are, “You sound Black. I want to speak to a white person.” If that sounds hypothetical to you, it’s not. It actually happened to me when I was a call center agent. And I’m definitely not alone.

Interactions like this occur more often than we like to admit, and they have a significant impact on employees. In my experience, a racist interaction stays with an agent for 30-45 minutes after the call has concluded, yet during that time the agent is still being graded and scored on performance. Resentment festers, and trust that the next caller will be more human erodes. Many think, “Did no one hear what this customer just said?” They stay on emotional guard, able to give only the bare minimum to each call, and they receive the bare minimum in performance scores as a result. Talk about a one-two punch.
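
One operational fix this suggests is to stop grading agents during that recovery window. Here is a rough sketch, assuming a 45-minute window and simple (start time, score) call records; both the window constant and the function are my own invention, not an existing product feature:

```python
from datetime import datetime, timedelta

RECOVERY_WINDOW = timedelta(minutes=45)  # upper end of the 30-45 minute range cited above

def adjusted_average(calls: list[tuple[datetime, float]],
                     abusive_call_ends: list[datetime]) -> float | None:
    """Average an agent's QA scores, excluding any call that started
    within the recovery window after an abusive call ended."""
    kept = [
        score
        for start, score in calls
        if not any(end <= start <= end + RECOVERY_WINDOW for end in abusive_call_ends)
    ]
    return sum(kept) / len(kept) if kept else None
```

A scoring adjustment like this only works, of course, if the abusive calls are being detected and recorded in the first place.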

When harmful interactions occur, an organization’s top priority must be to protect and defend frontline employees. Leaders must recognize if, when, and why their agents are facing these situations so they can take real action. These insights can help brands make important decisions, like banning abusive customers, equipping employees with real-time, situational guidance for responding to abuse, flagging when a supervisor needs to intervene, and giving the employee a break and support after difficult engagements. None of this is possible unless a brand is monitoring 100% of its customer interactions in real time. The unfortunate reality: Most organizations still analyze only 3–10% of their customer interactions. That is a mistake from a business perspective and dangerous (potentially even negligent) from an employee experience perspective.
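
What might 100% real-time monitoring look like in practice? A deliberately simplified sketch follows: in production, the pattern list would be a trained classifier and `notify_supervisor` would hook into a real alerting system; both are placeholders of my own here:

```python
ABUSE_PATTERNS = ["i want to speak to a white", "you people"]  # stand-in for a trained classifier

def notify_supervisor(agent_id: str, utterance: str) -> None:
    # Placeholder: in production this might push an alert to a supervisor
    # dashboard, queue a break for the agent, or trigger real-time guidance.
    print(f"ALERT: agent {agent_id} may need support. Heard: {utterance!r}")

def screen_utterance(agent_id: str, utterance: str) -> bool:
    """Screen every customer utterance as it is transcribed, not a 3-10% sample."""
    if any(pattern in utterance.lower() for pattern in ABUSE_PATTERNS):
        notify_supervisor(agent_id, utterance)
        return True
    return False
```

Even a sketch like this makes the coverage point: the check runs on every utterance of every call, which is only feasible when transcription and analysis happen in real time.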

AI for Good: How Organizations Can Drive Change

Your customer engagement and interaction platforms — the call center, chat, social media, and more — are the epicenter of any business. The interactions that occur between your brand, employees, bots, and customers have far-reaching implications for customer experience, brand reputation, employee retention, and the entire organization. Unfortunately, they are also a platform for ugliness.

Being transparent about bias, racism, and abuse leads to greater awareness of existing problems and, eventually, to solutions. While many experts have rightly pointed out the potential for bias in some AI algorithms, AI can also be an enabler in our journey to spark change — when deployed correctly and thoughtfully. That means not blinding yourself to what’s bad and ugly; it means generating deep understanding to drive meaningful action, from an improved employee experience to an organization built on ethics and fairness.

When humans learn how and why these problems occur, we gain a better avenue for instigating change, improving not only the customer and employee experience but also tackling systemic issues at large.

About the Author

Conrad Liburd

Conrad is currently a feature engineer on the CallMiner Research Team, where he draws on more than 10 years of call center experience, along with his programming skills, to help move research projects from concept to reality. He is the creator of the CallMiner Chrome extensions and has a strong technology background, specializing in business process optimization through web application extensions and various technology suites and frameworks.