{"id":22689,"date":"2021-03-17T03:56:00","date_gmt":"2021-03-17T03:56:00","guid":{"rendered":"https:\/\/www.experfy.com\/blog\/chronicles-of-ai-ethics-the-man-the-machine-and-the-black-box\/"},"modified":"2023-08-29T12:58:36","modified_gmt":"2023-08-29T12:58:36","slug":"chronicles-of-ai-ethics-the-man-the-machine-and-the-black-box","status":"publish","type":"post","link":"https:\/\/www.experfy.com\/blog\/ai-ml\/chronicles-of-ai-ethics-the-man-the-machine-and-the-black-box\/","title":{"rendered":"The Chronicles Of AI Ethics: The Man, The Machine, And The Black Box"},"content":{"rendered":"\t\t<div data-elementor-type=\"wp-post\" data-elementor-id=\"22689\" class=\"elementor elementor-22689\" data-elementor-post-type=\"post\">\n\t\t\t\t\t\t<section class=\"has_eae_slider elementor-section elementor-top-section elementor-element elementor-element-f4f5405 elementor-section-boxed elementor-section-height-default elementor-section-height-default\" data-id=\"f4f5405\" data-element_type=\"section\" data-e-type=\"section\">\n\t\t\t\t\t\t<div class=\"elementor-container elementor-column-gap-default\">\n\t\t\t\t\t<div class=\"has_eae_slider elementor-column elementor-col-100 elementor-top-column elementor-element elementor-element-ae9c937\" data-id=\"ae9c937\" data-element_type=\"column\" data-e-type=\"column\">\n\t\t\t<div class=\"elementor-widget-wrap elementor-element-populated\">\n\t\t\t\t\t\t<div class=\"elementor-element elementor-element-9887a53 elementor-widget elementor-widget-text-editor\" data-id=\"9887a53\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\t<p>Today, machine learning and artificial intelligence systems, trained by data, have become so effective that many of the largest and most well-respected companies in the world use them almost exclusively to make mission-critical business decisions. 
The outcome of a loan, insurance, or job application, or the detection of fraudulent activity, is now determined by processes that involve no human input whatsoever.<\/p><p>In a past life, I worked on machine learning infrastructure at Uber. From estimating ETAs to dynamic pricing and even matching riders with drivers, Uber relies on machine learning and artificial intelligence to enhance customer happiness and increase driver satisfaction. Frankly, without machine learning, I question whether Uber would exist as we know it today.<\/p>\n<p>For data-driven businesses, there is no doubt that machine learning and artificial intelligence are enduring technologies that are now table stakes in business operations, not differentiating factors.<\/p>\n<p>While machine learning models aim to mirror and predict real life as closely as possible, they are not without their challenges. Household-name brands like Amazon, Apple, Facebook, and Google have been accused of <a href=\"https:\/\/www.brookings.edu\/research\/algorithmic-bias-detection-and-mitigation-best-practices-and-policies-to-reduce-consumer-harms\/\" target=\"_blank\" rel=\"noreferrer noopener\">algorithmic bias<\/a>, with effects on society at large.<\/p>\n<p>For instance, Apple famously ran into an AI bias storm when it introduced the Apple Card and users noticed that it was offering smaller lines of credit to women than to men.<\/p>\n<p>In more extreme and troubling cases, judicial systems in the U.S. 
are using AI systems to inform prison sentencing and parole terms even though these systems are built on historically biased crime data, amplifying and perpetuating embedded systemic biases and calling into question algorithmic fairness in the criminal justice system.<\/p>\n<p>In the wake of the Apple Card controversy, Apple\u2019s issuing partner, Goldman Sachs, defended its credit limit decisions by noting that its algorithm had been vetted by a third party and that gender was not used as an input or determining factor.<\/p>\n<p>While applicants were not asked for gender when applying for the Apple Card, women were nonetheless receiving smaller credit limits, underscoring a troubling truth:<strong>&nbsp;machine learning systems can often develop biases even when a protected class variable is absent<\/strong>.<\/p>\n<p>Data science and AI\/ML teams today avoid matching protected class information back to model data, preserving plausible deniability.<em>&nbsp;If I didn\u2019t use the data, machines can\u2019t be making decisions on it, right?&nbsp;<\/em>In reality, many variables can be correlated with gender, race, or other aspects of identity and, in turn, lead to decision-making that does not offer equal opportunity to all people.<\/p>\n\n\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-909a720 elementor-widget elementor-widget-heading\" data-id=\"909a720\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"heading.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t<h2 class=\"elementor-heading-title elementor-size-default\">The Imbalance of Responsibility<\/h2>\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-c2da583 elementor-widget elementor-widget-text-editor\" data-id=\"c2da583\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div 
class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\t<p>We are living in an era where major technological advances are imperfectly regulated and effectively shielded from social responsibility, while their users face major repercussions.<\/p>\n<p>We come face to face with what M.C. Elish has coined \u201cthe moral crumple zone\u201d. This zone represents the diffusion of responsibility onto the user instead of the system as a whole. Just as a car\u2019s hood takes the brunt of the impact in a head-on collision, the user of technology takes the impact for the mistakes of the ML system. For example, as it stands, if a car with self-driving capabilities fails to recognize a stop sign, the driver is responsible for any mistakes the car makes and any subsequent damages, not those who trained the models and produced the car.<\/p>\n<p>To make matters worse, the users of most technology very rarely have a full understanding of how the technology works and its broader impact on society. It is unfair to expect users to make the right risk management decisions with minimal understanding of how these systems even work.<\/p>\n<p>These effects are magnified when talking about users in underrepresented and disadvantaged communities. People from these groups have a much harder time managing unforeseen risk and defending themselves from potentially damaging outcomes. This is especially damaging if an AI system makes decisions with limited data on these populations &#8211; which is why topics like facial recognition technology for law enforcement are particularly contentious. 
Turning a blind eye is no longer an option given the social stakes.<\/p>\n<p>Those who intentionally build these complex models must consider their ethical responsibilities in doing so, as these systems have lasting structural consequences for our world that do not resolve by themselves.<\/p>\n\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-0eaead2 elementor-widget elementor-widget-heading\" data-id=\"0eaead2\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"heading.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t<h2 class=\"elementor-heading-title elementor-size-default\">Rise Up Or Shut Up: Taking Accountability<\/h2>\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-189b535 elementor-widget elementor-widget-text-editor\" data-id=\"189b535\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\t<p>We live in a society that manages its own risks by establishing ethical frameworks, creating acceptable codes of conduct, and, ultimately, codifying these beliefs into legislation. When it comes to ML systems, we are far behind. 
We are barely starting to talk about the ethical foundations of ML, and as a result our society is going to have to pay the price for our slow action.<\/p>\n<p>We must work harder to understand how machine learning models are making their decisions and how we can improve this decision making to avoid societal catastrophe.<\/p>\n<p>So, what steps do we need to take&nbsp;<strong>now<\/strong>&nbsp;to start tackling the problem?<\/p>\n\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-ff7c479 elementor-widget elementor-widget-heading\" data-id=\"ff7c479\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"heading.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t<h5 class=\"elementor-heading-title elementor-size-default\"><em>STEP 1: Admit that proper ethical validation is mission-critical to the success of our rapidly growing technology.<\/em><\/h5>\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-36de588 elementor-widget elementor-widget-text-editor\" data-id=\"36de588\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\t<p>The first step in exposing and improving how AI\/ML affects us as a society is to better understand complex models and validate ethical practices. 
It is no longer okay to avoid the problem and claim ignorance.<\/p>\n\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-90c58cc elementor-widget elementor-widget-heading\" data-id=\"90c58cc\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"heading.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t<h5 class=\"elementor-heading-title elementor-size-default\"><em>STEP 2: Make protected class data available to modelers<\/em><\/h5>\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-4f107c7 elementor-widget elementor-widget-text-editor\" data-id=\"4f107c7\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\t<p>Contrary to current practices, which exclude protected class data from models to allow for plausible deniability in the case of biased outcomes, protected class data should in fact be available to modelers and included in data sets that inform <a href=\"https:\/\/www.experfy.com\/blog\/ai-ml\/what-to-tell-your-board-about-ai-ml\/\" target=\"_blank\" rel=\"noreferrer noopener\">ML\/AI<\/a> models. 
The ability to test against this data puts the onus on these modelers to make certain their outputs aren\u2019t biased.<\/p>\n\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-34e0d45 elementor-widget elementor-widget-heading\" data-id=\"34e0d45\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"heading.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t<h5 class=\"elementor-heading-title elementor-size-default\"><em>STEP 3: Break down barriers between model builders and the protected class data<\/em><\/h5>\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-91633f4 elementor-widget elementor-widget-text-editor\" data-id=\"91633f4\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\t<p>Bias problems and analysis are not only the purview of model validation teams. Putting a wall between teams and data only diffuses responsibility. 
The teams building models need this responsibility and need the data to make those decisions.<\/p>\n\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-51de443 elementor-widget elementor-widget-heading\" data-id=\"51de443\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"heading.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t<h5 class=\"elementor-heading-title elementor-size-default\"><em>STEP 4: Employ emerging technologies such as ML observability that enable accountability<\/em><\/h5>\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-d7ae3ca elementor-widget elementor-widget-text-editor\" data-id=\"d7ae3ca\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\t<p>You can\u2019t change what you don\u2019t measure. Businesses and organizations need to proactively seek tools and solutions that help them better monitor, troubleshoot, and explain what their technology is doing and, subsequently, uncover ways to improve the systems they\u2019ve built.<\/p>\n<p>Ultimately, the problem of the black box is growing as <a href=\"https:\/\/www.experfy.com\/hire\/ai-machine-learning\">AI\/ML technologies<\/a> become more advanced, yet we have little idea of how most of these systems truly work. As we give our technology more and more responsibility, the importance of making ethically sound decisions in our model building is amplified exponentially. It all boils down to really understanding our creations. 
If we don\u2019t know what is happening in the black box, we can\u2019t fix its mistakes to make a better model and a better world.<\/p>\n\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t\t<\/div>\n\t\t<\/div>\n\t\t\t\t\t<\/div>\n\t\t<\/section>\n\t\t\t\t<\/div>\n\t\t","protected":false},"excerpt":{"rendered":"<p>We are barely starting to talk about the ethical foundations of ML, and as a result our society is going to have to pay the price for our slow action.<\/p>\n","protected":false},"author":1083,"featured_media":18951,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"content-type":"","footnotes":""},"categories":[183],"tags":[97,508,1135,92],"ppma_author":[3896],"class_list":["post-22689","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-ai-ml","tag-artificial-intelligence","tag-black-box","tag-ethics","tag-machine-learning"],"authors":[{"term_id":3896,"user_id":1083,"is_guest":0,"slug":"aparna-dhinakaran","display_name":"Aparna Dhinakaran","avatar_url":"https:\/\/www.experfy.com\/blog\/wp-content\/uploads\/2021\/05\/Aparna-Dhinakaran-150x150.jpeg","user_url":"http:\/\/www.arize.com","last_name":"Dhinakaran","first_name":"Aparna","job_title":"","description":"Aparna Dhinakaran is Co-Founder and Chief Product Officer at Arize AI, a startup focused on ML Observability. She was previously an ML engineer at Uber, Apple, and Tubemogul (acquired by Adobe). 
She is also a Forbes AI columnist."}],"_links":{"self":[{"href":"https:\/\/www.experfy.com\/blog\/wp-json\/wp\/v2\/posts\/22689","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.experfy.com\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.experfy.com\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.experfy.com\/blog\/wp-json\/wp\/v2\/users\/1083"}],"replies":[{"embeddable":true,"href":"https:\/\/www.experfy.com\/blog\/wp-json\/wp\/v2\/comments?post=22689"}],"version-history":[{"count":5,"href":"https:\/\/www.experfy.com\/blog\/wp-json\/wp\/v2\/posts\/22689\/revisions"}],"predecessor-version":[{"id":31884,"href":"https:\/\/www.experfy.com\/blog\/wp-json\/wp\/v2\/posts\/22689\/revisions\/31884"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.experfy.com\/blog\/wp-json\/wp\/v2\/media\/18951"}],"wp:attachment":[{"href":"https:\/\/www.experfy.com\/blog\/wp-json\/wp\/v2\/media?parent=22689"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.experfy.com\/blog\/wp-json\/wp\/v2\/categories?post=22689"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.experfy.com\/blog\/wp-json\/wp\/v2\/tags?post=22689"},{"taxonomy":"author","embeddable":true,"href":"https:\/\/www.experfy.com\/blog\/wp-json\/wp\/v2\/ppma_author?post=22689"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}