The hype around ChatGPT and other generative-artificial-intelligence technology is highlighting a continuing challenge for businesses: how to keep bias out of their own AI algorithms.
Businesses are putting huge amounts of time and money into reducing bias in the algorithms they have deployed. Tech leaders say it is easier and cheaper to address bias from the beginning than to try to remove it later, but many companies lack the systems, processes and tools to do so.
“It’s more of a reactive mode than a proactive one,” said Neil Sahota, who advises the United Nations on AI, referring to the way organizations approach limiting AI bias. Mr. Sahota said this reactive mode comes with costs, given that retroactively limiting bias is such a difficult and expensive process.
“Companies aren’t going to pump an extra $10 million to strip out an extra bias or two that might impact 100 or 200 people,” he added.
Bias is an age-old problem for AI algorithms, in part because they are often trained on data sets that are skewed or not fully representative of the groups they serve, and in part because they are built by humans who have their own natural biases, Mr. Sahota said.
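The first cause, skewed training data, is easy to demonstrate. Below is a minimal sketch with synthetic data and scikit-learn (the groups, numbers and variable names are invented for illustration, not drawn from any real system): a model trained on a sample in which one group is badly underrepresented learns the majority's pattern and performs far worse for the minority group.

```python
# Hypothetical sketch: a skewed training set produces group-dependent accuracy.
# All data here is synthetic, invented purely for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)

def sample(n, flip):
    # One informative feature; in group B ("flip") the feature-label
    # relationship is inverted, standing in for any group whose patterns
    # differ from the majority's.
    X = rng.normal(size=(n, 1))
    y = (X[:, 0] > 0).astype(int)
    return X, 1 - y if flip else y

# Training set: 95% group A, 5% group B (skewed and unrepresentative).
Xa, ya = sample(950, flip=False)
Xb, yb = sample(50, flip=True)
model = LogisticRegression().fit(np.vstack([Xa, Xb]), np.concatenate([ya, yb]))

# Held-out accuracy, measured separately per group.
Xta, yta = sample(1000, flip=False)
Xtb, ytb = sample(1000, flip=True)
print("accuracy, group A:", model.score(Xta, yta))  # high
print("accuracy, group B:", model.score(Xtb, ytb))  # poor: the model learned A's pattern
```

In this toy case the remedy is representative sampling before the model is built, which mirrors Mr. Sahota's point that bias is cheapest to address at the start.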
Issues with AI were highlighted in February when Microsoft Corp. said it would put new limits on its Bing search engine, which uses the technology behind ChatGPT, after users who pushed the app to its limits reported inaccurate and sometimes unhinged responses.
AI systems have been found to be less accurate at identifying the faces of dark-skinned people, particularly women; to give women lower credit-card limits than their husbands; and to wrongly flag Black defendants as likely to commit future crimes more often than white defendants.
Part of the problem is that companies haven’t built controls for AI bias into their software-development life cycles, the same way they have started to do with cybersecurity, said Todd Lohr, U.S. technology consulting leader at audit, tax and advisory services company KPMG LLP.
Flavio Villanustre, global chief information security officer at data and analytics company LexisNexis Risk Solutions, said bias problems would be more limited if companies addressed them upfront rather than deploying algorithms and then assessing the damage.
Mr. Villanustre said that once a model exists and shows bias, it can be hard to understand why it generated a particular answer, especially with more complex deep-learning models. “It is absolutely difficult, and in some cases impossible—unless you can go back to square one and redesign it correctly with the right training data and the right architecture behind it,” he said.
While it might be straightforward to remove an explicit variable like gender, which seems likely to produce gender-biased responses, a less obvious variable like height can function as a proxy for gender, since women tend to be shorter than men, he said.
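That proxy effect can be shown in a few lines of code. The sketch below is a hypothetical illustration with synthetic data and scikit-learn, not any company's actual model: a classifier that is never given gender still reproduces a gender-linked pattern by learning from height.

```python
# Hypothetical sketch: height acts as a proxy for gender.
# All data and names here are synthetic, invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
gender = rng.integers(0, 2, n)                # 0 = women, 1 = men (never shown to the model)
height = rng.normal(162 + 14 * gender, 7, n)  # cm; overlapping but shifted distributions

# A biased historical outcome that secretly depends on gender.
approved = (rng.random(n) < np.where(gender == 1, 0.7, 0.3)).astype(int)

# The model sees only height, yet recovers much of the gender signal through it.
model = LogisticRegression().fit(height.reshape(-1, 1), approved)
for g, label in [(0, "women"), (1, "men")]:
    rate = model.predict_proba(height[gender == g].reshape(-1, 1))[:, 1].mean()
    print(f"mean predicted approval for {label}: {rate:.2f}")
```

Dropping the gender column does nothing here; the bias survives through the correlated feature, which is why practitioners audit for proxies rather than simply deleting sensitive attributes.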
It is important for companies to address bias from the beginning, said Rajat Taneja, president of technology at card network Visa Inc.
“The responsible ethical use of AI and then the governance and the oversight you need in that is incredibly important,” he said. “And companies that are going through the journey have to be very aware of that and embrace it at the very get-go because adding it later on makes it much harder to do.”
Mr. Taneja said that before any model is deployed at Visa, it is assessed by a model risk management organization and a team that tests for potential unintended impacts and ensures the model adheres to Visa’s principles of responsible and ethical AI.
Better guardrails and standardized frameworks could be part of the solution, said PepsiCo Inc.’s chief strategy and transformation officer, Athina Kanioura. She said PepsiCo has been working with other large companies to establish this type of industry framework, which would include a governance layer meant to ensure transparency and visibility and reduce algorithmic bias.
Dr. Kanioura also said PepsiCo chooses not to use AI for certain tasks, including hiring decisions, because the risk of bias is so high.
Better tool sets for tracking and assessing bias in algorithms could also help. According to KPMG’s Mr. Lohr, more startups are offering AI-management solutions that could address this: “I think the market is just at the tipping point where they’re all getting their Series A funding and you’re going to start seeing them live within the next six months.”