Algorithms vs. Humans: Who is more biased?
An analysis of Amazon's Artificial Intelligence Recruiting Tool and the ethical questions it raised when unexpected gender biases emerged.
As Ken Goldberg once said, “We’re fascinated with robots because they are reflections of ourselves.” Because everyone comes from a different background and upbringing, human bias is a recurring and complex problem that cannot be easily resolved. Machines, on the other hand, are good at following instructions but have limited decision-making capability because they lack common sense. The quality of the data that humans feed them therefore ultimately determines how algorithms behave and respond to different situations.
As discussed in the Machine Learning and Human Bias video, interaction bias arises in machine learning because humans tend to expose machines to their own preferences. The incident in which Google Photos labeled Black people as ‘gorillas’ reflects this, and it was argued that the lack of diversity at technology companies contributed to selection bias (Guynn, 2015). Although machine learning does not rely entirely on humans, humans still play a critical role in controlling the data and other environmental factors so as to minimize bugs and errors in machines.
Though Amazon’s Artificial Intelligence Recruiting Tool had an excellent value proposition, the data it worked with led to its downfall. Amazon’s computer models were “trained to vet applicants by observing patterns in resumes submitted to the company over a 10-year period”, most of which came from men (Dastin, 2018). This inevitably led the models to favor male applicants over female applicants: because they were exposed mostly to male-associated qualifications and keywords, they effectively taught themselves that men were the stronger candidates. This gender bias is also a “reflection of the male dominance across the tech industry — therefore the data fed to the model was not unbiased towards gender equality but au contraire” (Iriondo, 2018).
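To make the mechanism concrete, here is a minimal, hypothetical sketch in Python (using scikit-learn, with a handful of fabricated toy resumes; this is not Amazon’s actual data or model). It shows how a classifier trained on historically skewed hiring outcomes can attach a negative weight to a word like “women’s”, the very behavior Dastin (2018) reports:

```python
# Toy illustration (not Amazon's actual system) of how a text classifier
# trained on a gender-skewed resume corpus absorbs that skew.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Hypothetical training data: resumes labeled 1 (hired) / 0 (rejected).
# Because past hires skew male, male-associated tokens co-occur with label 1.
resumes = [
    "captain men's chess club, software engineer",    # hired
    "men's rugby team, backend developer",            # hired
    "software engineer, distributed systems",         # hired
    "captain women's chess club, software engineer",  # rejected
    "women's coding society, backend developer",      # rejected
    "intern, customer support",                       # rejected
]
labels = [1, 1, 1, 0, 0, 0]

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(resumes)
model = LogisticRegression().fit(X, labels)

# Inspect the learned weights: the model attaches a negative weight to
# "women" purely because of how the historical labels were skewed.
for token, weight in zip(vectorizer.get_feature_names_out(), model.coef_[0]):
    print(f"{token:15s} {weight:+.2f}")
```

Running this prints a negative coefficient for “women” even though gender was never an explicit input feature; the model simply reproduces the pattern baked into its training labels, which is precisely how a seemingly neutral résumé screener can end up biased.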