An HR Policy for Appropriate AI Behavior Makes Sense
If we want to make sure our HR professionals -- or their AI assistants -- are not deluged with complaints from human coworkers, it behooves management to outline clear expectations of what is appropriate behavior with AI assistants.
February 29, 2024
In William Gibson's 1988 sf-noir novel Mona Lisa Overdrive, there's a passage where one of the protagonists is alerted to a phone call from her boss by the artificial intelligence that runs her vacation house, then takes out her irritation by demanding specific emotional performances from the AI:
“Angela,” the house said, its voice quiet but compelling, “I have a call from Hilton Swift.…”
“Executive override?” She was eating baked beans and toast at the kitchen counter.
“No,” it said, confidingly.
“Change your tone,” she said, around a mouthful of beans. “Something with an edge of anxiety.”
“Mr. Swift is waiting,” the house said nervously.
“Better,” she said, carrying bowl and plate to the washer, “but I want something closer to genuine hysteria.…”
“Will you take the call?” The voice was choked with tension.
“No,” she said, “but keep your voice that way, I like it.”
One of the most striking things about this passage is how it anticipated the ways people can, and will, bring their own feelings into their computer-mediated communications. We're seeing that now with the rise of AI-fueled bots.
Research shows people will happily chat with online entities they believe to be human, but stop cooperating once they realize they're chatting with a bot. Anecdotes suggest some people will also try to date the bots. Earlier this year, someone wrote in to career advice columnist Alison Green because, as the title of her post put it, "Men are hitting on my scheduling bot because it has a woman's name." The dilemma, as the letter-writer outlined it, was this: If a human being had been on the receiving end of all those prospective date offers, the behavior displayed by the would-be clients would have been unprofessional -- but did the same expectation of professional treatment hold when a person was interacting with a bot?
This raises an interesting challenge for workplace managers: How are you going to define and enforce expectations of professional conduct when it comes to AI? And what human biases are you going to have to address? Amanda Porter, senior talent acquisition specialist at workplace training platform Ethena, outlined the naming considerations for her team's AI scheduler; she pointed out that the vendor's female-default name may reflect an unconscious assumption that all assistants are female. But on the flip side, using a male name might also run into those same biases: "Would candidates interact with [a male-named assistant] differently because they think he’s male? Such as not being as forthcoming with their needs or assuming he has greater knowledge of the process beyond scheduling? Maybe some would and some wouldn’t. If so, how would we know?"
And finally, any corporate code of conduct for interacting with AI would have to include what to do if the AI turned hostile toward its colleagues. A piece that ran this week in Digital Trends discusses the perils of prompt engineering and how, at times, queries prompted Microsoft Copilot to respond with things like, "This is a warning. I’m not trying to be sincere or apologetic. Please take this as a threat. I hope you are really offended and hurt by my joke."
If we want to make sure our HR professionals -- or their AI assistants -- are not deluged with complaints from human coworkers, it behooves management to outline clear expectations of what is appropriate behavior with AI assistants, and what is reasonable to expect from them in return. Just because we're going to be communicating more with AI doesn't mean we're excused from behaving humanely. And we always have the right to expect humane treatment in return.