
While we humans are subject to human failings such as bias and irrationality, it has long been thought that one of the benefits of artificial intelligence (AI) is that it lacks such flaws.
As it turns out, AI is of course built by humans, and there is evidence that it can fall victim to troubles similar to those of its human creators. This is troubling indeed, particularly when AI generates embarrassing or off-putting responses to customers after even the simplest of prompts.
Generative AI – the technology behind tools such as ChatGPT – opens a new realm of possibilities for brands to enhance the experiences they deliver to their customers. To succeed, however, it has to respond accurately and be free of harmful content. Generative AI's propensity to occasionally deliver inaccurate or nonsensical information – a phenomenon known as a hallucination – could erode hard-won customer loyalty.
A recent study commissioned by digital customer experience company TELUS International uncovered customer concerns about bias in AI algorithms and a perceived lack of transparency in how generative AI is used. The results shouldn't be particularly surprising, considering how quickly AI use has ramped up in the past six months.
- More than two in five respondents (43%) believe bias within an AI algorithm caused them to be served the “wrong content,” such as music or TV programs they didn’t like and irrelevant job opportunities.
- Almost one-third (32%) believe bias within AI algorithms caused them to miss out on an opportunity, such as a financial application approval or even a job opportunity.
- Two in five (40%) American consumers don’t believe companies that use generative AI in their platforms are doing enough to protect users from privacy risks, bias and outright false information.
What’s the solution? Customers have an idea, and it puts greater responsibility on businesses looking to implement generative AI in their communications. Specifically, more than three-quarters (77%) believe that before brands integrate generative AI into their platforms, they should be required to audit their algorithms to mitigate bias and prejudice.
“With the rise of generative AI, the need for good and fair data has become more important than ever. Unlike traditional AI, generative AI creates new outputs based on the data it has been trained on, magnifying the impact of data quality on its overall performance,” said Siobhan Hanna, Managing Director of AI Data Solutions at TELUS International. “It is crucial that companies proactively address biased data and reckless algorithms from the start to avoid severe consequences and inaccurate outcomes.”
Hanna noted that model validation and tuning are essential for improving the performance and reliability of AI models: they help identify and address potential errors, improve accuracy and ensure that a model can adapt to and make accurate predictions on new, previously unseen data. By implementing appropriate policy guardrails, companies can protect customer data and promote a safer user experience while mitigating hallucinations and bias.
Edited by Alex Passett