

At the Movethedial Global Summit in Toronto yesterday, I listened intently to a talk titled “No polite fictions: What AI reveals about humanity.” Kathryn Hume, Borealis AI’s director of product, listed a bunch of AI and algorithmic failures — we’ve seen plenty of that. But it was how Hume described algorithms that really stood out to me.

“Algorithms are like convex mirrors that refract human biases, but do it in a pretty blunt way,” Hume said. “They don’t permit polite fictions like those that we often sustain our society with.”

I really like this analogy. It’s probably the best one I’ve heard so far, because it doesn’t end there. Later in her talk, Hume took it further, after discussing an algorithm used in the U.S. to predict future criminals that was biased against black people.

“These systems don’t permit polite fictions,” Hume said. “They’re actually a mirror that can enable us to directly observe what might be wrong in society so that we can fix it. But we need to be careful, because if we don’t design these systems well, all that they’re going to do is encode what’s in the data and potentially amplify the prejudices that exist in society today.”

Reflections and refractions

If an algorithm is designed poorly or — as almost anyone in AI will tell you nowadays — if your data is inherently biased, the result will be too. Chances are you’ve heard this so often it’s been hammered into your brain.

The convex mirror analogy tells you more than just to get better data. The thing about a mirror is that you can look at it and see a reflection. And a convex mirror distorts: the reflected image grows larger as the object approaches, and whatever the mirror is mainly reflecting fills most of the frame.

Take the tweet storm about Apple Card credit limits that went viral this week.

Yes, the data, algorithm, and app appear flawed. And Apple and Goldman Sachs representatives don’t know why.

Clearly something is going on. Apple and Goldman Sachs are investigating. So is the New York State Department of Financial Services.

Whatever the bias ends up being, I think we can all agree that a credit limit 20 times larger for one partner than for the other is ridiculous. Maybe they’ll fix the algorithm. But there are bigger questions we need to ask once the investigations are complete. Would a human have assigned a smaller multiple? Would it have been warranted? Why?

So you’ve designed an algorithm, and there is some sort of problematic bias in your community, in your business, or in your data set. You might notice that your algorithm is giving you problematic results. Zoom out, however, and you’ll realize that the algorithm isn’t the problem: it is reflecting and refracting the problem. From there, figure out what you need to fix, not just in your data set and your algorithm, but also in your business and your community.
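To make the mirror point concrete, here is a minimal, hypothetical sketch in Python (not from the column): a model trained on skewed data reproduces roughly the same group disparity that was already in the data. The groups, features, numbers, and threshold are all invented for illustration.

```python
# Illustrative sketch only: a toy audit showing that a model trained on skewed
# data simply "mirrors" the disparity already present in that data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, n)                  # two hypothetical demographic groups
income = rng.normal(50 + 10 * group, 15, n)    # skew baked into the data
approved = (income + rng.normal(0, 5, n) > 55).astype(int)  # historical outcomes

# The model never sees the group label, only the skewed feature.
X = np.column_stack([income])
model = LogisticRegression().fit(X, approved)
pred = model.predict(X)

for g in (0, 1):
    data_rate = approved[group == g].mean()
    model_rate = pred[group == g].mean()
    print(f"group {g}: approval rate in data {data_rate:.2f}, "
          f"in model predictions {model_rate:.2f}")
# The model's disparity tracks the data's disparity: the algorithm is the
# mirror, not the root cause.
```

In this toy setup, fixing the model alone changes little; the gap in the predictions comes from the gap in the underlying data, which is the point of the analogy.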

ProBeat is a column in which Emil rants about whatever crosses him that week.
