Artificial Intelligence

A Practical Approach to Detecting and Correcting Bias in AI Systems [The New Stack]

Written By Sofus Macskássy | April 22, 2019

The following piece on bias in AI was originally published in The New Stack by Sofus Macskássy, VP of Data Science at HackerRank. 


As companies look to bring artificial intelligence into the core of their business, calls for greater transparency into AI algorithms and accountability for the decisions they make are on the rise.

That makes sense: if people are going to rely on AI to make important decisions with real-world consequences, they need to trust it. But trust comes in many forms, which makes it difficult to pin down. At the most basic level, an AI system needs to explain why it made a particular recommendation; that builds trust because people can follow the reasoning. Deeper trust comes from knowing the system is fair and unbiased, and demonstrating that is much harder.
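
To make the fairness side concrete, a common first check is to compare a model’s selection rates across protected groups. The sketch below is illustrative only (it is not from the article; the data and function names are hypothetical) and computes a demographic-parity-style disparate-impact ratio for a binary classifier:

```python
# Illustrative sketch, not from the article: a minimal demographic-parity
# check. `predictions` holds 0/1 decisions and `groups` holds a protected
# attribute (both hypothetical inputs).
from collections import defaultdict

def selection_rates(predictions, groups):
    """Fraction of positive decisions per group."""
    totals, positives = defaultdict(int), defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += pred
    return {g: positives[g] / totals[g] for g in totals}

def disparate_impact(predictions, groups):
    """Ratio of the lowest to the highest selection rate. Values well
    below 1.0 (e.g., under the common four-fifths rule of thumb) flag
    a potential bias problem worth investigating."""
    rates = selection_rates(predictions, groups)
    return min(rates.values()) / max(rates.values())

# Example: a model that selects 60% of group A but only 20% of group B.
preds  = [1, 1, 1, 0, 0, 1, 0, 0, 0, 0]
groups = ["A"] * 5 + ["B"] * 5
print(selection_rates(preds, groups))   # {'A': 0.6, 'B': 0.2}
print(disparate_impact(preds, groups))  # ~0.33, well under 0.8
```

A low ratio does not prove discrimination on its own, but it is a cheap, automatable signal that a system deserves a closer audit.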

This leaves companies in a tough spot when it comes to leveraging AI: they can either fly blind or fall behind. In 2018, Amazon — a clear frontrunner in AI — shut down its experimental AI recruiting tool after the team discovered major issues with bias in the system.

What’s needed is a more practical approach. Here’s what 15 years of building AI and machine learning models at companies like Facebook and Branch, and now HackerRank, has taught me about detecting and correcting bias in AI systems.

Read the full article at The New Stack.
