The Problem With Biased AIs (and How To Make AI Better)

AI has the potential to create tremendous business value for organizations, and its adoption has been accelerated by the data-related challenges of the pandemic. Forrester estimates that by 2025, nearly 100% of companies will be using AI and that the artificial intelligence software market will reach $37 billion that same year.

But there are growing concerns about AI bias – situations where AI makes decisions that are systematically unfair to certain groups of people. Researchers have found that AI bias has the potential to cause real harm.

I recently had the opportunity to speak with Ted Kwartler, VP of Trusted AI at DataRobot, to get his thoughts on how AI bias occurs and what companies can do to ensure their models are fair.

Why AI Bias Happens

AI bias occurs because humans choose the data that algorithms learn from and also decide how the results of those algorithms are applied. Without extensive testing and diverse teams, unconscious biases can easily permeate machine learning models, and AI systems then automate and perpetuate those biases at scale.

For example, a study by the US Department of Commerce found that facial recognition AI often misidentifies people of color. If law enforcement uses facial recognition tools, this bias could lead to wrongful arrests of people of color.

According to a UC Berkeley study, several mortgage algorithms at financial services firms have consistently charged higher interest rates to Latino and Black borrowers.

According to Kwartler, the business impact of biased AI can be significant, especially in regulated industries, where missteps can result in fines or damage a company’s reputation. Companies that want to attract and retain customers must find thoughtful ways to get AI models into production and must test those models to identify potential biases.
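To make that concrete, a bias test can be as simple as comparing outcome rates across groups. Below is a minimal Python sketch of such a check; the data, group names, and the 0.8 “four-fifths” threshold are illustrative assumptions, not DataRobot’s methodology.

    # Minimal sketch of a pre-deployment bias check: compare a model's
    # approval rates across demographic groups and flag any group whose
    # rate falls below 80% of the most-favored group's rate (the
    # "four-fifths" rule of thumb). Data and threshold are illustrative.
    from collections import defaultdict

    def disparate_impact(records):
        """records: list of (group, approved) pairs from model output."""
        totals, approvals = defaultdict(int), defaultdict(int)
        for group, approved in records:
            totals[group] += 1
            approvals[group] += int(approved)
        rates = {g: approvals[g] / totals[g] for g in totals}
        best = max(rates.values())
        # Ratio of each group's approval rate to the most-favored group's.
        return {g: rate / best for g, rate in rates.items()}

    predictions = [("group_a", True), ("group_a", True), ("group_a", False),
                   ("group_b", True), ("group_b", False), ("group_b", False)]
    for group, ratio in disparate_impact(predictions).items():
        flag = " - needs review" if ratio < 0.8 else ""
        print(f"{group}: impact ratio {ratio:.2f}{flag}")

A check like this won’t catch every form of bias, but it is a cheap first gate before a model goes into production.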

What Better AI Looks Like

Kwartler says that “good AI” is a multidimensional effort across four distinct personas:

AI innovators: Executives and business leaders who understand the business and recognize that machine learning can help solve problems for their organization

AI creators: The machine learning engineers and data scientists who build the models

AI implementers: Team members who integrate AI into existing tech stacks and bring it into production

AI consumers: The people who use and monitor AI, including the legal and compliance teams that deal with risk management

“When we work with clients,” says Kwartler, “we try to identify those personas in the organization and articulate risks a little differently for each of those personas so that they can gain trust.”

Kwartler also talks about why “humble AI” is crucial. AI models must show humility when making predictions lest they stray into biased territory.

Kwartler told VentureBeat, “If I’m classifying a banner ad at 50% or 99% probability, there’s a middle range in between. With a single cutoff threshold, above that line you get one outcome and below it you get a different outcome. What we’re really saying is that there’s a space where you should apply some caution and have a human check the prediction. We call this humble AI in the sense that the algorithm shows humility when it makes the prediction.”
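In code, that idea amounts to replacing a single cutoff with an uncertainty band. Here is a minimal Python sketch; the 0.5 and 0.99 thresholds echo the numbers in the quote and are purely illustrative, not DataRobot’s implementation.

    # "Humble AI" in miniature: instead of one hard cutoff, define an
    # uncertainty band in which the model defers to a human reviewer.
    def humble_decision(probability, lower=0.5, upper=0.99):
        if probability >= upper:
            return "positive"      # confident enough to act automatically
        if probability < lower:
            return "negative"      # confident enough to act automatically
        return "human review"      # the model admits uncertainty and defers

    for p in (0.30, 0.72, 0.995):
        print(f"p={p:.3f} -> {humble_decision(p)}")
    # p=0.300 -> negative, p=0.720 -> human review, p=0.995 -> positive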

Why It Is Important To Regulate AI

According to DataRobot’s State of AI Bias report, 81% of business leaders want government regulations to define and prevent AI bias.

Kwartler believes that well-thought-out regulation could eliminate much of the ambiguity and allow companies to move forward and realize the enormous potential of AI. Regulations are particularly critical in high-risk use cases such as educational referrals, credit, employment, and surveillance.

Regulation is critical to consumer protection as more companies embed AI into their products, services, decision-making and processes.

How To Create Unbiased AI

When I asked Kwartler for his top tips for organizations looking to develop unbiased AI, he had several suggestions.

The first recommendation is to educate your data scientists on what responsible AI looks like and how your business values should be embedded in the model itself or in the model’s guardrails.
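One way to turn a business value into a testable guardrail is a counterfactual spot check: flip only the protected attribute and verify that the decision doesn’t change. Below is a minimal sketch of this idea, with a deliberately flawed toy model standing in for a real one; all names and fields here are hypothetical, not a DataRobot feature.

    # Counterfactual guardrail sketch: if changing only the protected
    # attribute changes the decision, that attribute is driving the
    # outcome and the model fails the check.
    def passes_counterfactual_check(model, applicant, attribute, values):
        outcomes = {model({**applicant, attribute: v}) for v in values}
        return len(outcomes) == 1  # the decision must be invariant

    def toy_model(applicant):  # deliberately flawed: it looks at gender
        return "approve" if (applicant["income"] > 50_000
                             and applicant["gender"] == "male") else "deny"

    applicant = {"income": 80_000, "gender": "female"}
    print(passes_counterfactual_check(toy_model, applicant,
                                      "gender", ("male", "female")))
    # -> False: the decision flips with gender, so this model fails.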

In addition, he recommends being transparent with consumers to help people understand how algorithms make predictions and decisions. One of the ongoing challenges of AI is that it is often viewed as a “black box”: consumers can see inputs and outputs but have no insight into the AI’s internal workings. Businesses need to strive for explainability so people can understand how an AI system works and what impact it could have.
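One widely used, model-agnostic way to open the black box is permutation importance: shuffle one input feature at a time and measure how much accuracy drops, revealing which features the model actually relies on. A minimal sketch follows; the toy model and data are illustrative placeholders.

    # Permutation importance sketch: a feature the model depends on will
    # hurt accuracy when shuffled; an ignored feature will score ~0.
    import random

    def permutation_importance(predict, X, y, seed=0):
        rng = random.Random(seed)
        def accuracy(rows):
            return sum(predict(r) == t for r, t in zip(rows, y)) / len(y)
        baseline = accuracy(X)
        scores = []
        for j in range(len(X[0])):
            column = [row[j] for row in X]
            rng.shuffle(column)
            shuffled = [row[:j] + (v,) + row[j + 1:]
                        for row, v in zip(X, column)]
            scores.append(baseline - accuracy(shuffled))
        return scores

    predict = lambda row: row[0] > 0.5  # toy model: only feature 0 matters
    X = [(0.9, 0.1), (0.2, 0.8), (0.7, 0.3), (0.1, 0.9)]
    y = [True, False, True, False]
    print(permutation_importance(predict, X, y))  # feature 1 scores 0.0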

Finally, he recommends that companies set up a grievance process that gives individuals the opportunity to raise concerns with the company if they feel an AI system has treated them unfairly.

How AI Can Help Save The Planet

I asked Kwartler about his hopes and predictions for the future of AI, and he said he believes AI can help us solve some of the biggest problems humanity is currently facing, including climate change.

He shared the story of a DataRobot customer, a cement manufacturer, that used a complex AI model to make one of its plants 1% more efficient, saving the plant about 70,000 tons of CO2 emissions every year.

But to realize the full potential of AI, we need to keep working to reduce bias and the other potential risks that AI can bring.

To keep up to date with the latest trends in data, business and technology, check out my book Data Strategy: How to Profit from a World of Big Data, Analytics and Artificial Intelligence, and make sure to subscribe to my newsletter and follow me on Twitter, LinkedIn, and YouTube.
