Humanizing AI: Striking the Right Balance for Customer Engagement

Written By: Shamel Addas, Professor of Digital Technology, Smith School of Business

Companies have been using chatbots for decades, but it's only in recent years that their popularity has skyrocketed, thanks to advances in artificial intelligence.

Modern AI agents can collect and analyze customer data and mimic human conversational behaviour to handle a variety of customer service and support tasks, from answering routine questions to offering product recommendations.

When designed right, AI agents can deliver significant value for companies in the form of enhanced efficiency, boosted sales, and improved customer engagement and loyalty. But if a company's AI agent design misses the mark, it can seriously damage the brand.

Humans (i.e., your customers) aren't rational decision makers. They don't just pay attention to the content being delivered; they look at the mode of delivery as well. Various aspects of an AI agent's design (its social and peripheral cues, messages, and features) send signals to users about the experience they should expect and play a significant role in how much users engage with, trust, and act on its advice.

So, while the capabilities and design features available with modern AI agents are exciting, research shows there are cases in which making AI agents more ‘human’ can backfire.

For example, my own research examining the effects of anthropomorphizing AI robo-advisors has revealed that AI agents with higher verbal social capabilities are more effective when paired with a humanlike avatar image, but once you start adding animation (blinking, head movement, and lip-syncing), users begin to feel uncomfortable. Interestingly, increasing the social capability of these animated AI agents, rather than their humanlike appearance, can actually mitigate the discomfort users experience when interacting with AI and increase their sense of social presence.

Small mistakes can undermine the entire experience, so it's important for companies to take the time to carefully consider their objectives and select an AI agent model and delivery that suit their purpose, their delivery channel, and their customers' service expectations. A more basic virtual agent, like the one used by Expedia, might be a good customer support option for businesses with a clear understanding of the common questions and tasks their customers need help with. Conversely, a more sophisticated AI agent might be better suited to a retail setting, such as L'Oreal's, where you want customers to engage more deeply with your products and brand.

So, how do you build an AI agent experience that balances form and function?

Research – including my own – indicates you should focus on trust.

Many studies on human-AI interaction show that trust is a key mediator of business outcomes. In our case, the outcomes we looked at were recommendation acceptance (the likelihood of customers accepting the AI agent’s recommendations), self-disclosure (how likely users are to provide personal information relevant to the transaction), usage intention (the chances that consumers will use the tool again), and overall satisfaction.

The dimensions of building trust in AI agents are somewhat similar to those involved in facilitating trust in human-to-human interactions:

Competence: Users want to know that the AI agent knows its stuff: that it has the expertise and the credibility. The response failure/error rate really matters. You can technically set up an AI agent and integrate it into your website in 15 minutes, but it's going to have problems because it's not trained. When an AI agent repeatedly asks for clarification or doesn't understand the prompt, those failures frustrate people and lead them to trust the offering less. By contrast, AI agents with domain expertise, carefully trained using supervised machine learning techniques, are perceived as more trustworthy.

Benevolence: AI agents don't have the agency or intentions that humans do, but their design elements can still signal to users that they are there to help. Matching the agent's design to the task helps here. This is also where knowing your customer comes into play. How do they want to be spoken to? What information are they looking for? You risk eroding trust if users can't find the information they're looking for or feel awkward because the interface doesn't match their use case.

Integrity: When it comes to an AI agent, integrity means that customers don't feel it is trying to deceive them. Misleading users into believing they're speaking with a human, or leaving any uncertainty about whether they are, is a surefire way to lose trust. Being up front and setting expectations really helps: make it very clear that users are speaking with an AI agent with specific capabilities and limitations, and offer a clear way out if they need to speak to a human.
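To make these three dimensions concrete, here is a minimal, hypothetical sketch of how a customer-facing agent could encode them: it discloses up front that it is an AI (integrity), answers only when it is confident it understood the request (competence), and escalates to a human rather than trapping users in a clarification loop (benevolence). The function names, thresholds, and the `classify_intent` helper are illustrative assumptions, not any particular platform's API.

```python
# Hypothetical sketch: a trust-aware reply loop for a customer-service AI agent.
# classify_intent, its confidence scores, and the thresholds below are all
# illustrative assumptions, not a reference to any vendor's actual product.

from dataclasses import dataclass

DISCLOSURE = ("Hi! I'm an automated assistant with limited capabilities. "
              "Type 'human' at any time to reach a person.")  # integrity: no ambiguity

CONFIDENCE_THRESHOLD = 0.75   # competence: don't guess on low-confidence intents
MAX_CLARIFICATIONS = 1        # benevolence: never loop on clarification requests

@dataclass
class Intent:
    name: str
    confidence: float

def classify_intent(message: str) -> Intent:
    """Stand-in for a real intent classifier trained on domain data."""
    known = {"refund": 0.92, "shipping": 0.88}
    for keyword, score in known.items():
        if keyword in message.lower():
            return Intent(keyword, score)
    return Intent("unknown", 0.30)

def handle_message(message: str, clarifications_used: int = 0) -> str:
    if message.strip().lower() == "human":
        return "Connecting you with a human agent now."  # the clear way out
    intent = classify_intent(message)
    if intent.confidence >= CONFIDENCE_THRESHOLD:
        return f"Here's what I found about your {intent.name} question..."
    if clarifications_used < MAX_CLARIFICATIONS:
        return "Sorry, I'm not sure I understood. Could you rephrase that?"
    # Repeated failures erode trust, so escalate instead of asking again.
    return "I'd rather not guess. Let me connect you with a human agent."

print(DISCLOSURE)
print(handle_message("I need help with my refund"))
```

The design choice worth noting is that the agent treats a low-confidence answer as worse than no answer: one clarification attempt is allowed, and after that the user is handed to a human rather than subjected to another failed guess.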

Perhaps the most important takeaway is that businesses should take the time to do it right because, when it comes to AI agents, small mistakes can undermine the entire customer experience and negatively impact your bottom line.
