Algorithms vs. Humans: Who is more biased?


An analysis of Amazon's Artificial Intelligence recruiting tool and the ethical questions raised by its unexpected gender bias.



As Ken Goldberg once said, “We’re fascinated with robots because they are reflections of ourselves.” Because everyone comes from a different background and upbringing, human bias is a recurring, complex issue that cannot be easily resolved. Machines, on the other hand, are good at following instructions but have limited decision-making capability because they lack common sense. The quality of the data humans feed them therefore ultimately determines how algorithms behave and respond to different situations.




As discussed in the Machine Learning and Human Bias video, interaction bias occurs in machine learning because humans tend to expose machines to their own preferences. This was reflected in the incident in which Google Photos labeled Black people as ‘gorillas’; critics argued that the lack of diversity at technology companies contributed to selection bias (Guynn, 2015). Though machine learning does not rely entirely on humans, humans still play a critical role in ensuring that the data and other environmental factors are controlled to minimize bugs and errors in machines.



Though Amazon’s Artificial Intelligence recruiting tool had an excellent value proposition, the data it worked with led to its downfall. Amazon’s computer models were “trained to vet applicants by observing patterns in resumes submitted to the company over a 10-year period,” and most of those resumes came from men (Dastin, 2018). This inevitably caused the models to train themselves to favor male applicants over female applicants. The core issue was that the models were exposed mostly to male-associated qualifications and keywords, and thus learned to treat men as the stronger candidates. This gender bias is also a “reflection of the male dominance across the tech industry — therefore the data fed to the model was not unbiased towards gender equality but au contraire” (Iriondo, 2018).
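The dynamic described above can be illustrated with a toy keyword-scoring sketch. The resumes and the scoring rule below are entirely hypothetical (not Amazon's actual model or data): if historical hires skew male, any term that appears mostly in non-hired resumes, such as “women's,” picks up a negative weight even though gender was never an explicit input.

```python
from collections import Counter

# Hypothetical historical resumes with a skew toward male hires,
# illustrating how a naive keyword-scoring model absorbs that skew.
hired = [
    "software engineer chess club captain",
    "software engineer robotics team",
    "backend developer chess club",
    "data engineer robotics captain",
]
rejected = [
    "software engineer women's chess club captain",
    "frontend developer women's coding society",
]

def keyword_weights(hired, rejected):
    """Score each word by how much more often it appears in hired resumes."""
    h, r = Counter(), Counter()
    for doc in hired:
        h.update(doc.split())
    for doc in rejected:
        r.update(doc.split())
    vocab = set(h) | set(r)
    # Normalized frequency difference: positive values favor hiring.
    return {w: h[w] / len(hired) - r[w] / len(rejected) for w in vocab}

weights = keyword_weights(hired, rejected)
# "women's" occurs only in rejected resumes, so it gets the most
# negative weight — the model penalizes it without ever seeing gender.
print(weights["women's"])  # → -1.0
```

The model never receives a gender label; the bias enters purely through which resumes ended up in each pile, which is exactly why retraining on the same historical data cannot fix it.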



As Nihar Shah explains, we still have a long way to go to ensure that algorithms are fair, interpretable, and explainable. As the Chief Justice of the US Supreme Court put it, the way to stop discrimination on the basis of gender is to stop discriminating based on gender (Loftus, 2018). Though Amazon edited the programs to make them gender neutral, the unpredictability and complexity of machine learning are still not fully understood, even by the programmers themselves. To prevent the issue from escalating, it is understandable that Amazon scrapped the program and used it instead for mundane tasks such as “culling duplicate candidate profiles from databases” (Dastin, 2018).



As discussed in The Second Machine Age, technology is improving at an exponential rate: innovations such as self-driving cars, once deemed infeasible, are becoming reality. In my opinion, Amazon should have monitored the AI recruiting tool more closely and conducted test runs to ensure that it was inclusive when selecting potential candidates. The company should still have professional recruiters make hiring decisions, because “the technology is just not ready yet” (Dastin, 2018).



However, the AI recruiting tool could still serve as a supporting device that learns the qualities of ideal candidates from the decisions of professional recruiters. Recruiters could provide a diverse set of sample resumes and train the tool to recognize those traits. Even so, gender bias will persist if the recruiters themselves do not “disassociate themselves from general stereotypes and focus on the candidate” (Fogarty, 2014).



Therefore, it is important for Amazon to promote awareness of gender equality and to select candidates strictly on their qualifications. Otherwise, this becomes a never-ending cycle in which our own biases and shortcomings degrade the decision-making and perspectives of machines.





References


1. Dastin, Jeffrey. “Amazon Scraps Secret AI Recruiting Tool That Showed Bias against Women.” Reuters, Thomson Reuters, 10 Oct. 2018.

2. Fogarty, Kevin. “STEM Study Shows Hiring Managers Favor Men Over Women.” EE Times, 28 Mar. 2014.

3. Guynn, Jessica. “Google Photos Labeled Black People 'Gorillas'.” USA Today, Gannett Satellite Information Network, 1 July 2015.

4. Iriondo, Roberto. “Amazon Scraps Secret AI Recruiting Engine That Showed Biases Against Women.” Medium, 11 Oct. 2018.

5. Loftus, Joshua. “Algorithmic Fairness Is as Hard as Causation.” Joshua Loftus, 23 Feb. 2018.

6. “Machine Learning and Human Bias - Inclusive ML.” Coursera, Rice University.


Copyright © 2019 Data Insight. All rights reserved.
