The Impossible Goal: Bias-Free AI

I was listening in on a need-finding call recently with one of our sales execs. The prospect wanted to know what we do to “eliminate” bias from our AI. My first thought was, is he serious? It turns out he was. When asked to clarify, he explained that one of our competitors had informed him that they have “bias-free” AI, and he wanted to know if we did too! With my mouth agape (thankfully, my camera was off), I resisted the temptation to leap from my fly-on-the-wall perch onto my soapbox. Allow me to set the foundation for my argument that it’s impossible to create bias-free AI. I’ll follow with a case for an achievable goal: Fair AI.

Whether we’re talking about Artificial Intelligence (AI) broadly or Machine Learning (ML) specifically, we must understand that any artifact of either is a model or approximation of something in the real world. If it’s not, then we’re not talking about AI. One of the ways AI practitioners assess the quality of their models is to measure how often their models make mistakes. Given a set of inputs, does the output of the model match what’s expected? The error rate is never 0%. That’s the starting point for our understanding of bias in AI.

Why is the error rate never 0%? To clarify, I’m referring to the error rate of a model after it has been trained (models have to learn before they are useful): how does the model behave when I feed it data that it has never seen before? There are many ways bias creeps in, and it starts with the data. I’m going to use a simple example to illustrate.

Let’s say I want to build a model to predict home prices. I start by gathering data about home sales in my area. Since I’ve bought and sold homes in the past, I have some intuition about what affects the price. Based on my experience, the bigger the house, the higher the price! Have you spotted the bias yet? I move on to train a model. I gather hundreds of examples of home sales over the past year and plot house size versus sale price. Did you spot the bias that time? The plot forms a cigar-shaped mass of dots that cluster loosely around an imaginary straight line rising from small, low-priced homes to large, high-priced homes, with a few random dots here and there that seem out of place (called outliers). Let’s say my friend, who knows I like to create price prediction models, wants to sell his home and asks me to help him set the sale price. Of course, I say sure! But before I do that, I need to create the “model” from the data.

This is a good time to elaborate a bit more on what we mean by a model. For this example, I want to create a formula that maps a home’s size to an asking price. This is a machine learning model (sometimes called a function approximator), which is just a model that learns from data (rather than being programmed to mimic an expert, for example) and makes a prediction. I need to turn the cigar-shaped blob into a function that I can use to make the prediction. To me, that blob of dots looks kind of like a line, so I decide that the function should be a line (remember from high school algebra that the formula for a line is y = mx + b; don’t worry if you don’t). Did you spot the bias that time? A hint: what if it’s not a line? What if I collected more data for even bigger houses and the cigar developed an upward hook, looking more like a hockey stick? A straight-line model would no longer work, and I’d need a different model altogether. Spot any bias here?
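To make the “formula” idea concrete, here is a minimal sketch in Python (the sizes and prices below are invented for illustration, not real sales data): it fits the line y = mx + b to size/price pairs and uses the result to predict an asking price.

```python
import numpy as np

# Invented example data: house size (square feet) vs. sale price (dollars).
sizes = np.array([1100, 1450, 1600, 2000, 2400, 3100], dtype=float)
prices = np.array([180_000, 212_000, 239_000, 305_000, 352_000, 465_000], dtype=float)

# Least-squares fit of the line y = mx + b (a degree-1 polynomial).
m, b = np.polyfit(sizes, prices, deg=1)
print(f"price ~= {m:.0f} * size + {b:.0f}")

# Predict an asking price for a hypothetical 1,800-square-foot home.
print(f"predicted asking price for 1,800 sq ft: ${m * 1800 + b:,.0f}")
```

Notice that the straight line is an assumption baked into the code: swapping deg=1 for deg=2 fits a curve instead, which is exactly the hockey-stick concern above.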

We’ve learned two important things so far: 1) the data might be biased, and 2) the model might be biased. How do we know for sure? The only objective way to assess whether bias is present is by tracking how many times the model makes a mistake after being trained. Intuitively, I hope it is clear from the example above how bias creeps in. To truly eliminate bias from AI, we’d need a way to perfectly identify all of the factors (called features in AI) that describe the thing in the real world that we want to model (a human’s performance, for example); collect enormous amounts of error-free data to train a model; and then with 100% accuracy predict something (maybe future performance scores or whether or not someone will quit in the next few months). Today, despite all of the advances in AI, that is an impossible goal. Will it ever be achieved? The short answer is no.
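To see what “tracking how many times the model makes a mistake” looks like in practice, here is a hedged sketch along the same lines (every number is synthetic): fit a line to 150 home sales, then score it on 50 held-out sales it never saw during training.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic home sales: price is roughly linear in size, plus noise.
sizes = rng.uniform(900, 3500, size=200)
prices = 150 * sizes + 20_000 + rng.normal(0, 25_000, size=200)

# Train on 150 sales; hold out 50 that the model never sees.
m, b = np.polyfit(sizes[:150], prices[:150], deg=1)

# Measure the average mistake on the unseen sales.
errors = np.abs((m * sizes[150:] + b) - prices[150:])
print(f"average error on unseen homes: ${errors.mean():,.0f}")
```

Even with well-behaved synthetic data, the average error on the unseen homes never lands at exactly zero; with real, messy data it is larger still.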

The Achievable Goal: Fair AI

In Human Capital Management (HCM), we’re always looking for answers to questions like these: How will my team perform during the next performance review cycle? Will my top performers leave the company in the next few months? What skills will help my team succeed next year, and the year after? How can I build a bench of talent for a critical role? How can I help my employees achieve their career goals? AI can help answer all of these questions!

If you agree with me that bias-free AI is an impossible goal, then what is possible? What is an achievable goal? It starts with good, clean data, followed by a model that performs at a level that satisfies the community it was built to serve. Finally, the model must learn and improve with time. I’m calling this goal “Fair AI.” Others call it Responsible AI1. It requires the development of a strategy founded on fairness with the intent to minimize bias. Understand that software vendors cannot solve this problem entirely for you; it will take an investment of time and money, and the development of expertise within your organization.
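What might “performs at a level that satisfies the community it was built to serve” look like in code? One minimal sketch, assuming a scikit-learn-style model with a predict method (the function name and inputs here are my own illustration, not a standard API): report the error rate for each group the model serves, not just a single overall number.

```python
import numpy as np

def error_rate_by_group(model, X, y, groups):
    """Report the model's mistake rate per group, not just overall.

    A single overall error rate can hide a model that serves one
    part of its community well and another part poorly.
    """
    preds = model.predict(X)
    print(f"overall error rate: {np.mean(preds != y):.1%}")
    for g in np.unique(groups):
        mask = groups == g
        err = np.mean(preds[mask] != y[mask])
        print(f"  group {g!r}: {err:.1%} across {mask.sum()} examples")
```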

Other Considerations: Fear, Uncertainty, and Doubt

Fear

While I believe that there are many valuable and practical applications of AI in Talent Management, it’s essential to keep in mind that the technical buyer (the domain expert who selects the solution based on “fit-for-purpose”) may have concerns about the potential for an AI-based solution to be at odds with their goals. Consider the case where the problem to be solved is engagement or retention. Will the buyer perceive AI as an aid, or as something at odds with such goals? A 2018 survey found that 73% of adults believe that AI will replace more jobs than it creates, and 23% are concerned about losing their job to an AI2.

Uncertainty

Bias in AI is of particular concern in Talent Management. Consider the case where data about demographically similar current employees is used to build a model that predicts the performance potential of job applicants from demographic groups distinctly different from the model’s training cohort. Will such a model make predictions that lead to hiring decisions that exclude otherwise qualified candidates? For example, “In 1988, the UK Commission for Racial Equality found a British medical school guilty of discrimination because the computer program it was using to determine which applicants would be invited for interviews was demonstrated to be biased against women and applicants with non-European names.”3
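To see how such a gap can arise, here is a fully synthetic sketch (every number below is invented): a simple model is fit on one cohort and then scored on a second cohort whose underlying feature-to-outcome relationship differs. The same code, applied to the new cohort, makes noticeably larger errors.

```python
import numpy as np

rng = np.random.default_rng(7)

# Two synthetic cohorts: the feature-to-outcome relationship differs.
x_train = rng.normal(0.0, 1.0, 500)            # cohort the model learns from
y_train = 2.0 * x_train + rng.normal(0, 0.5, 500)

x_new = rng.normal(0.0, 1.0, 500)              # a distinctly different cohort
y_new = 1.2 * x_new + 0.8 + rng.normal(0, 0.5, 500)

m, b = np.polyfit(x_train, y_train, deg=1)

for name, x, y in [("training cohort", x_train, y_train),
                   ("new cohort", x_new, y_new)]:
    mae = np.mean(np.abs((m * x + b) - y))
    print(f"{name}: mean absolute error {mae:.2f}")
```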

Doubt

Bias is the Achilles heel of Talent Management solutions that employ AI in any form or fashion, and it must be addressed explicitly. Vendors must invest in systematically minimizing bias. The product (design, engineering, and test) and data science teams must define a formal system for identifying and minimizing bias in their designs, code, training data, and the like. Marketing literature should educate buyers about the risk of bias in AI-based solutions and how best to evaluate a vendor’s anti-bias methodology. Likewise, sales teams should be equipped to educate prospects and challenge their assessments of competitive solutions that don’t employ a well-defined bias-minimizing methodology.

In Conclusion

Let me share a dirty little secret about AI: it’s not as mysterious, elusive, or technically out-of-reach as you may have assumed or been led to believe. Hopefully, my simple example gave you a bit of confidence that anyone can grasp the basics. This is important because you, as a buyer of HCM software that claims to be AI-driven, must be able to evaluate the claims made by vendors and challenge those claims when they are misleading or false (like bias-free AI).

The application of Artificial Intelligence in Talent Management is obscured by marketing hype and misinformation. My goal for this article is to arm you with the knowledge you need to challenge vendors who claim to have “bias-free” AI. I also hope that I’ve inspired you to support initiatives, within your company and beyond, that focus on the achievable. The next big thing will be “fair” AI, not bias-free AI. Develop a “fairness” strategy to minimize bias, and understand that vendors cannot solve this problem completely for you; it will take an investment in time, money, and expertise beyond software.

Despite the hype and misinformation, there are many promising applications of AI in Talent Management that are worth investigating. Don’t give up on AI! There are many excellent AI-based solutions in the marketplace that can automate labor-intensive, error-prone, and routine tasks; reduce risk, for example, by checking that “ideal” candidate criteria do not inadvertently select on factors such as age, ethnicity, and gender; ensure fairness and equity in development, promotion, and compensation; and much more. I encourage you to continue your quest for knowledge and truth!


1. https://www.responsible.ai/
2. Gallup. (2018). Optimism and Anxiety: Views on the Impact of Artificial Intelligence and Higher Education’s Response. Retrieved from https://www.northeastern.edu/gallup/pdf/OptimismAnxietyNortheasternGallup.pdf
3. Stella Lowry and Gordon Macpherson, “A blot on the profession,” British Medical Journal, March 1988, Volume 296, Number 6623, pp. 657–658.

Frank Ginac

Frank Ginac’s career spans 35 years of building world-class enterprise software. A hands-on leader, Frank is the chief software architect of TalentGuard’s award-winning software suite and leads the team that develops the company’s innovative solutions. At TalentGuard, Frank blends his passion for employee development with his breadth and depth of experience building complex software systems for global deployment to help create the leading career pathing and talent management solution in the market today. He is the author of two books, Building High-Performance Software Development Teams and Customer-Oriented Software Quality Assurance, as well as numerous articles. Frank holds a BS in Computer Science from Fitchburg State University and an MS in Computer Science from the Georgia Institute of Technology, specializing in Interactive Intelligence (the branch of Artificial Intelligence focused on creating intelligent and adaptive systems that interact with humans on their terms). He now serves on the faculty as an Instructional Associate supporting the Institute’s graduate-level programs in Computer Science.
