{"id":2091,"date":"2019-11-25T02:37:39","date_gmt":"2019-11-25T02:37:39","guid":{"rendered":"http:\/\/kusuaks7\/?p=1696"},"modified":"2024-02-22T04:56:59","modified_gmt":"2024-02-22T04:56:59","slug":"how-bias-distorts-ai-artificial-intelligence","status":"publish","type":"post","link":"https:\/\/www.experfy.com\/blog\/ai-ml\/how-bias-distorts-ai-artificial-intelligence\/","title":{"rendered":"How Bias Distorts AI (Artificial Intelligence)"},"content":{"rendered":"\t\t<div data-elementor-type=\"wp-post\" data-elementor-id=\"2091\" class=\"elementor elementor-2091\" data-elementor-post-type=\"post\">\n\t\t\t\t\t\t<section class=\"has_eae_slider elementor-section elementor-top-section elementor-element elementor-element-3681dfb elementor-section-boxed elementor-section-height-default elementor-section-height-default\" data-id=\"3681dfb\" data-element_type=\"section\" data-e-type=\"section\">\n\t\t\t\t\t\t<div class=\"elementor-container elementor-column-gap-default\">\n\t\t\t\t\t<div class=\"has_eae_slider elementor-column elementor-col-100 elementor-top-column elementor-element elementor-element-6db16d43\" data-id=\"6db16d43\" data-element_type=\"column\" data-e-type=\"column\">\n\t\t\t<div class=\"elementor-widget-wrap elementor-element-populated\">\n\t\t\t\t\t\t<div class=\"elementor-element elementor-element-1c2a8a26 elementor-widget elementor-widget-text-editor\" data-id=\"1c2a8a26\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\tWhen it comes to AI (Artificial Intelligence), there&#8217;s usually a major focus on using large datasets, which allow for the training of models. But there\u2019s a nagging issue: bias. 
What may seem like a robust dataset could instead be highly skewed, such as in terms of race, wealth and gender.\n\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-01efbfc elementor-widget elementor-widget-text-editor\" data-id=\"01efbfc\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\tThen what can be done? Well, to help answer this question, I reached out to Dr. Rebecca Parsons, who is the Chief Technology Officer of\u00a0<a href=\"https:\/\/www.thoughtworks.com\/\" target=\"_blank\" rel=\"nofollow noopener noreferrer\" data-ga-track=\"ExternalLink:https:\/\/www.thoughtworks.com\/\">ThoughtWorks<\/a>, a global technology company with over 6,000 employees in 14 countries. She has a strong background in both the business and academic worlds of AI.\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-cad86f7 elementor-widget elementor-widget-text-editor\" data-id=\"cad86f7\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\tSo here&#8217;s a look at what she has to say about bias:\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-57747d3 elementor-widget elementor-widget-heading\" data-id=\"57747d3\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"heading.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t<h3 class=\"elementor-heading-title elementor-size-default\"><em>Can you share some real-life examples of bias in AI systems and explain how it gets there?<\/em><\/h3>\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-3976bf4 elementor-widget elementor-widget-text-editor\" data-id=\"3976bf4\" 
data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\tIt\u2019s a common misconception that the developers responsible for infusing bias into AI systems are either prejudiced or acting out of malice\u2014but in reality, bias is more often unintentional and unconscious in nature. AI systems and algorithms are created by people with their own experiences, backgrounds and blind spots, which can unfortunately lead to the development of fundamentally biased systems. The problem is compounded by the fact that the teams responsible for the development, training, and deployment of AI systems are largely not representative of society at large. According to a recent research report from NYU, women comprise only 10% of AI research staff at Google, and only 2.5% of Google\u2019s workforce is black. This lack of representation leads to biased datasets and, ultimately, to algorithms that are much more likely to perpetuate systemic biases.\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-167996f elementor-widget elementor-widget-text-editor\" data-id=\"167996f\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\tOne example that demonstrates this point well is voice assistants like Siri or Alexa, which are trained on huge databases of recorded speech that are unfortunately dominated by speech from white, upper-middle-class Americans\u2014making it challenging for the technology to understand commands from people outside that category. 
Additionally, studies have shown that algorithms trained on historically biased data have significant error rates for communities of color, especially in over-predicting the likelihood that convicted criminals will reoffend, which can have serious implications for the justice system.\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-12f5427 elementor-widget elementor-widget-heading\" data-id=\"12f5427\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"heading.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t<h3 class=\"elementor-heading-title elementor-size-default\"><em>How do you detect bias in AI and guard against it?<\/em><\/h3>\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-9a72533 elementor-widget elementor-widget-text-editor\" data-id=\"9a72533\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\tThe best way to detect bias in AI is to cross-check the algorithm you are using for patterns that you did not intend. Correlation does not always mean causation, and it is important to identify patterns that are not relevant so you can amend your dataset. One way to test for this is to check whether any group is under- or overrepresented in your data. 
If you detect a bias in your testing, you must address it by adding data that corrects the underrepresentation.\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-97766e3 elementor-widget elementor-widget-text-editor\" data-id=\"97766e3\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\tThe Algorithmic Justice League has also done interesting work on how to correct biased algorithms. They ran tests on facial recognition programs to see if they could accurately determine race and gender. Interestingly, lighter-skinned males were almost always correctly identified, while 35% of darker-skinned females were misidentified. The reason? One of the most widely used facial-recognition training datasets was estimated to be more than 75% male and more than 80% white. Because the training data contained far more definitive and distinguishing examples of white males than of any other group, the systems were biased toward correctly identifying those individuals over others. In this instance, the fix was quite easy: programmers added more faces to the training data, and the results quickly improved.\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-f38961f elementor-widget elementor-widget-text-editor\" data-id=\"f38961f\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\tWhile AI systems can get quite a lot right, humans are the only ones who can look back at a set of decisions and determine whether there were gaps in the datasets or oversights that led to a mistake.\u00a0This exact issue was documented in a study where a hospital was using machine learning to predict the risk of death from pneumonia. 
The algorithm concluded that patients with asthma were less likely to die from pneumonia than patients without asthma. Based on this data, hospitals could decide that it was less critical to hospitalize patients with both pneumonia and asthma, since those patients appeared to have a higher likelihood of recovery. However, the algorithm overlooked an important insight: patients with asthma typically receive faster and more intensive care than other patients, which is why they have a lower pneumonia mortality rate. Had the hospital blindly trusted the algorithm, it might have incorrectly assumed that it\u2019s less critical to hospitalize asthmatics, when in reality they require even more intensive care.\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-296bd09 elementor-widget elementor-widget-heading\" data-id=\"296bd09\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"heading.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t<h3 class=\"elementor-heading-title elementor-size-default\"><em>What could be the consequences to the AI industry if bias is not dealt with properly?<\/em><\/h3>\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-57f9bb6 elementor-widget elementor-widget-text-editor\" data-id=\"57f9bb6\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\tAs detailed in the asthma example, if biases in AI are not properly identified, the difference can quite literally be life and death. The use of AI in areas like criminal justice can also have devastating consequences if left unchecked. Another less talked-about consequence is the potential for more regulation and lawsuits surrounding the AI industry. 
Real conversations must be had around who is liable if something goes terribly wrong. For instance, is it the doctor who relies on the AI system that made the decision resulting in a patient\u2019s death, or the hospital that employs the doctor? Is it the AI programmer who created the algorithm, or the company that employs the programmer?\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-ce59db1 elementor-widget elementor-widget-text-editor\" data-id=\"ce59db1\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\tAdditionally, the \u201cwitness\u201d in many of these incidents cannot even be cross-examined since it\u2019s often the algorithm itself. And to make things even more complicated, many in the industry are taking the position that algorithms are intellectual property, therefore limiting the court&#8217;s ability to question programmers or attempt to reverse-engineer the program to find out what went wrong in the first place. These are all important discussions that must be had as AI continues to transform the world we live in.\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-1895db0 elementor-widget elementor-widget-text-editor\" data-id=\"1895db0\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\tIf we allow this incredible technology to continue to advance but fail to address questions around biases, our society will undoubtedly face a variety of serious moral, legal, practical and social consequences. 
It\u2019s important we act now to mitigate the spread of biased or inaccurate technologies.\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t\t<\/div>\n\t\t<\/div>\n\t\t\t\t\t<\/div>\n\t\t<\/section>\n\t\t\t\t<\/div>\n\t\t","protected":false},"excerpt":{"rendered":"<p>AI systems and algorithms are created by people with their own experiences, backgrounds and blind spots which can unfortunately lead to the development of fundamentally biased systems. If we allow this incredible technology to continue to advance but fail to address questions around biases, our society will undoubtedly face a variety of serious moral, legal, practical and social consequences. It&rsquo;s important we act now to mitigate the spread of biased or inaccurate technologies. Then what can be done?<\/p>\n","protected":false},"author":667,"featured_media":2850,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"content-type":"","footnotes":""},"categories":[183],"tags":[97],"ppma_author":[3440],"class_list":["post-2091","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-ai-ml","tag-artificial-intelligence"],"authors":[{"term_id":3440,"user_id":667,"is_guest":0,"slug":"tom-taulli","display_name":"Tom Taulli","avatar_url":"https:\/\/www.experfy.com\/blog\/wp-content\/uploads\/2020\/04\/medium_63f5f357-f6ae-4999-b3f4-ed30406eb86c-150x150.jpg","user_url":"http:\/\/www.taulli.com\/","last_name":"Taulli","first_name":"Tom","job_title":"","description":"Tom Taulli is an author, speaker, and advisor \u2013 with a focus primarily on technology. He has co-founded a variety of\u00a0companies, including Hypermart.net (sold to InfoSpace), WebIPO and BizEquity. He is also a contributor to Forbes.com, Bloomberg.com, Kiplinger and BusinessWeek. 
\u00a0He has also written a variety of books, mostly on tech and finance.\u00a0 His latest is\u00a0<a href=\"http:\/\/amzn.to\/2Fdl3uD\">Artificial Intelligence Basics: A Non-Technical Introduction<\/a>. \u00a0As of now, he is an advisor to some awesome startups."}],"_links":{"self":[{"href":"https:\/\/www.experfy.com\/blog\/wp-json\/wp\/v2\/posts\/2091","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.experfy.com\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.experfy.com\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.experfy.com\/blog\/wp-json\/wp\/v2\/users\/667"}],"replies":[{"embeddable":true,"href":"https:\/\/www.experfy.com\/blog\/wp-json\/wp\/v2\/comments?post=2091"}],"version-history":[{"count":5,"href":"https:\/\/www.experfy.com\/blog\/wp-json\/wp\/v2\/posts\/2091\/revisions"}],"predecessor-version":[{"id":36079,"href":"https:\/\/www.experfy.com\/blog\/wp-json\/wp\/v2\/posts\/2091\/revisions\/36079"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.experfy.com\/blog\/wp-json\/wp\/v2\/media\/2850"}],"wp:attachment":[{"href":"https:\/\/www.experfy.com\/blog\/wp-json\/wp\/v2\/media?parent=2091"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.experfy.com\/blog\/wp-json\/wp\/v2\/categories?post=2091"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.experfy.com\/blog\/wp-json\/wp\/v2\/tags?post=2091"},{"taxonomy":"author","embeddable":true,"href":"https:\/\/www.experfy.com\/blog\/wp-json\/wp\/v2\/ppma_author?post=2091"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}