What if artificial intelligence is biased?
Artificial Intelligence, which promises to eliminate the inefficiencies that come from subjective human assessments and assumptions, is facing heightened concerns that the technology includes hidden biases.
Amazon.com, for instance, early last year stopped using a recruiting and hiring tool powered by AI because it was biased against women.
The tool observed patterns in resumes of successful hires over a 10-year period to rate the resumes of prospective hires, according to Reuters. Because many of the hires were men, it showed a preference for male candidates and favored resumes using more masculine terms. Resumes including the word “women’s” as in “women’s chess club captain” and candidates who attended all-women’s colleges were downgraded.
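The failure mode Reuters describes can be illustrated with a deliberately simplified sketch. All data and the scoring rule below are invented for illustration and are not Amazon's actual system: a scorer that rates resumes by how often their terms appeared among past successful hires will mechanically penalize terms, like "women's," that were rare in a male-dominated hiring history.

```python
from collections import Counter

# Toy training data: resumes of past successful hires (invented for
# illustration; a real system would train on many thousands of documents).
past_hires = [
    "captain chess club software engineer",
    "software engineer java python",
    "mens soccer team captain python",
    "java developer mens rugby club",
]

def train(resumes):
    """Count how often each term appears among past hires."""
    counts = Counter()
    for resume in resumes:
        counts.update(resume.split())
    return counts

def score(resume, counts):
    """Average historical frequency of the resume's terms.
    Terms never seen among past hires contribute zero."""
    terms = resume.split()
    return sum(counts[t] for t in terms) / len(terms)

counts = train(past_hires)

# Two otherwise identical resumes; one mentions a women's club.
a = score("software engineer chess club captain", counts)
b = score("software engineer womens chess club captain", counts)
assert b < a  # the historically unseen term "womens" drags the score down
```

No term was explicitly labeled by gender here; the penalty falls out of the skewed history alone, which is why editing out individual flagged terms does not guarantee the problem is solved.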
Reuters noted that even though the algorithms were edited to mitigate those biases, the tool was scrapped because Amazon could not be sure it would not devise other ways to discriminate.
A number of other studies, including one showing a similar tendency in LinkedIn's search engine, have likewise found biases against groups underrepresented in AI datasets. Google was criticized a few years ago after its image recognition algorithm identified African Americans as "gorillas."
“AI algorithms are not inherently biased,” Venkatesh Saligrama, a professor at Boston University’s Department of Electrical and Computer Engineering who has studied word-embedding algorithms, told PC Magazine. “They have deterministic functionality and will pick up any tendencies that already exist in the data they train on.”
Because deep-learning software perceives patterns in human decision making, AI algorithms can also pick up covert or overt biases from their human creators when the algorithms are written.
The feedback loop from a machine learning system, particularly as humans increasingly rely on AI's assessments, could create still more biased data for future algorithms to analyze and train on.
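That feedback loop can be sketched with a minimal simulation. Every number and rule here is invented for illustration: a "model" that over-weights the majority pattern in its training data makes skewed selections, and those selections become the next round's training data, so a modest initial imbalance compounds.

```python
def next_share(s):
    """One feedback round: group A is selected with probability
    proportional to the *square* of its share of the training data
    (a stand-in for a model over-weighting the majority pattern),
    and the selected pool becomes the next round's training data."""
    return s**2 / (s**2 + (1 - s)**2)

share = 0.60  # group A starts with a 60/40 majority in the data
history = [share]
for _ in range(5):
    share = next_share(share)
    history.append(share)

# The skew grows every round instead of holding at 60/40.
assert all(later > earlier for earlier, later in zip(history, history[1:]))
```

The amplification rule is arbitrary, but the qualitative point is not: any loop in which a model's outputs re-enter its training data can entrench whatever imbalance it started with.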
While programmers seek to reduce prejudice, there are growing calls for diversity in the technology sector where the algorithms originate, as well as for greater transparency and accountability.
AI bias is a significant concern in the health care and law enforcement sectors as well. Less is known about how any covert biases in algorithms would affect the personalized experiences AI promises to deliver for retailers.
- Artificial Intelligence Has a Bias Problem, and It’s Our Fault – PC Magazine
- A.I. has a bias problem that needs to be fixed: World Economic Forum – CNBC
- Amazon scraps secret AI recruiting tool that showed bias against women – Reuters
- When Robots Turn Racist: Artificial Intelligence Not Immune to Bias – The Globe Post
- AI is the future of hiring, but it’s far from immune to bias – Quartz
- IBM AI OpenScale: Operate and automate AI with trust – IBM
- Discriminating algorithms: 5 times AI showed prejudice – New Scientist
DISCUSSION QUESTIONS: How big a concern are latent biases embedded in artificial intelligence? Are retailers using artificial intelligence equipped to uncover and fix these biases?