As technology delivers new and better ways to work, live, and create, the uglier and more concerning parts of the world need attention.
Workplace discrimination of any kind is a problem that needs addressing. At best, it's a fast but crude way to filter out a potentially large pool of applicants, in an age when more efficient and productive tools are available.
At worst, skilled and talented workers are shut out of places where they would make a positive impact. It's not just the employees who miss out; a company that passes over valuable talent for the wrong reasons takes a net loss.
Here are a few ways that Artificial Intelligence (AI) can improve the future of fair hiring practices, using ageism as an example.
Ageism, Like Any Discrimination, Still Happens
Just because it's against the law doesn't mean it doesn't happen. Laws that make discrimination illegal only help against the most blatant examples, or when a case of discrimination can be proven. While blatant examples exist, the subtler issues are far more widespread.
Many forms of discrimination persist through both overt acts and micro-aggressions that society still accepts, or at least fails to consistently correct. Ageism, unfortunately, is treated as conventional wisdom in many situations.
Here are a few phrases and concepts that can signal ageism or other discrimination:
- Recent college graduate: while it's never too late for college (coincidentally, another age-related phrase), recent graduates skew younger and are more likely to accept lower pay for more work.
- Six to ten years of experience: the range varies, but an upper limit on expected experience can be used to screen out older applicants.
- Cultural fit: a wildcard that is falling out of use. On its face, cultural fit means working well within the office environment, but it can also serve as cover for rejecting anyone who doesn't resemble the existing team.
Data Sets Can Help, But Still Need Oversight
Discrimination often comes down to assumptions, and assumptions about artificial intelligence can lead to more problems. AI can be the answer, but it requires skilled oversight and transparency.
One idea is that artificial intelligence could choose applicants without human bias. It's the concept of a non-human machine that is free from bigotry and able to select candidates on merit and relevant factors alone.
As long as there is human involvement, AI will carry human bias. AI decisions come from training on data sets, groups of data that help the AI make choices and refine them. You can think of it as a series of practice tests and workbook problems: the AI makes rapid decisions and fine-tunes them, but those decisions always trace back to the original data sets.
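To make that concrete, here is a minimal sketch, in Python with entirely hypothetical data and feature names, of how bias baked into a training set resurfaces in a model's decisions:

```python
# A minimal sketch (hypothetical data and feature names) of how
# historical bias in a training set carries into a model's decisions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Synthetic applicants: skill is what *should* matter; age should not.
skill = rng.normal(0.0, 1.0, n)
age = rng.integers(22, 65, n)

# Historical labels encode a biased process: past reviewers favored
# younger applicants, so "hired" depends on age as well as skill.
hired = (skill + 0.05 * (45 - age) + rng.normal(0.0, 0.5, n)) > 0

X = np.column_stack([skill, age])
model = LogisticRegression().fit(X, hired)

# The learned age coefficient comes out negative: the model reproduces
# the bias baked into its training data without being "told" to.
print(dict(zip(["skill", "age"], model.coef_[0])))
```

No one programmed the model to penalize age; it simply learned the pattern its historical data contained.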
There may not be a perfect way to eliminate all bias, but companies that truly want to do the right thing can come as close as possible. By working with a team that shares its data sources and training process with clients, you can verify, or even audit, how the AI was trained.
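One widely used audit is the four-fifths rule from US employment guidelines, which flags a selection process when any group's selection rate falls below 80% of the highest group's rate. The sketch below uses hypothetical numbers:

```python
# A minimal sketch of the four-fifths rule: flag a process when any
# group's selection rate falls below 80% of the best group's rate.

def selection_rates(records):
    """records: iterable of (group_label, was_selected) pairs."""
    totals, selected = {}, {}
    for group, picked in records:
        totals[group] = totals.get(group, 0) + 1
        selected[group] = selected.get(group, 0) + int(picked)
    return {g: selected[g] / totals[g] for g in totals}

def four_fifths_check(records):
    rates = selection_rates(records)
    best = max(rates.values())
    return {g: (rate, rate >= 0.8 * best) for g, rate in rates.items()}

# Hypothetical audit data: 45 of 100 younger applicants selected,
# but only 20 of 100 applicants aged 40 and over.
records = ([("under_40", True)] * 45 + [("under_40", False)] * 55
           + [("40_plus", True)] * 20 + [("40_plus", False)] * 80)

for group, (rate, passes) in four_fifths_check(records).items():
    print(f"{group}: selection rate {rate:.2f}, passes 4/5 rule: {passes}")
```

A check like this is deliberately simple; it won't prove discrimination on its own, but it tells you where to start asking questions.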
There are also security precautions to keep in mind. Data set poisoning, a way of attacking an AI by corrupting its training data with slightly altered or completely fake records, can ruin an AI's decisions. A mixture of good security and good client-to-engineer rapport can make the difference, and together they can help throw discrimination out of the hiring process for good.
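As one example of such a precaution, the sketch below, with a hypothetical file name and a placeholder digest, hashes each approved data file and refuses to train if anything has changed since sign-off:

```python
# A minimal sketch of one defense against data set poisoning: hash each
# approved data file and refuse to train if anything has changed.
# File name and digest below are placeholders, not real values.
import hashlib
from pathlib import Path

APPROVED_HASHES = {
    "applicants_2023.csv": "<sha256 digest recorded at sign-off>",
}

def sha256_of(path: str) -> str:
    return hashlib.sha256(Path(path).read_bytes()).hexdigest()

def verify_training_data(approved: dict = APPROVED_HASHES) -> None:
    for name, expected in approved.items():
        if sha256_of(name) != expected:
            raise RuntimeError(
                f"{name} does not match its approved hash; "
                "refusing to train on possibly poisoned data"
            )

# Call before every training run:
# verify_training_data()
```

An integrity check like this won't catch poisoned data that was approved in the first place, which is why the human review and rapport described above still matter.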