{"id":8674,"date":"2020-06-24T07:46:03","date_gmt":"2020-06-24T07:46:03","guid":{"rendered":"https:\/\/www.experfy.com\/blog\/?p=8674"},"modified":"2023-12-01T15:53:12","modified_gmt":"2023-12-01T15:53:12","slug":"what-makes-ai-algorithms-dangerous","status":"publish","type":"post","link":"https:\/\/www.experfy.com\/blog\/ai-ml\/what-makes-ai-algorithms-dangerous\/","title":{"rendered":"What Makes AI Algorithms Dangerous?"},"content":{"rendered":"\t\t<div data-elementor-type=\"wp-post\" data-elementor-id=\"8674\" class=\"elementor elementor-8674\" data-elementor-post-type=\"post\">\n\t\t\t\t\t\t<section class=\"has_eae_slider elementor-section elementor-top-section elementor-element elementor-element-5e2413ce elementor-section-boxed elementor-section-height-default elementor-section-height-default\" data-id=\"5e2413ce\" data-element_type=\"section\" data-e-type=\"section\">\n\t\t\t\t\t\t<div class=\"elementor-container elementor-column-gap-default\">\n\t\t\t\t\t<div class=\"has_eae_slider elementor-column elementor-col-100 elementor-top-column elementor-element elementor-element-1ae88117\" data-id=\"1ae88117\" data-element_type=\"column\" data-e-type=\"column\">\n\t\t\t<div class=\"elementor-widget-wrap elementor-element-populated\">\n\t\t\t\t\t\t<div class=\"elementor-element elementor-element-758f034b elementor-widget elementor-widget-text-editor\" data-id=\"758f034b\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\t\n<p>When discussing the threats of artificial intelligence, the first thing that comes to mind are images of\u00a0<em>Skynet<\/em>,\u00a0<em>The Matrix<\/em>, and the\u00a0<a href=\"https:\/\/bdtechtalks.com\/2017\/07\/28\/future-of-artificial-intelligence-ai-apocalypse\/\" target=\"_blank\" rel=\"noreferrer noopener\">robot apocalypse<\/a>. 
The runner-up is technological unemployment, the vision of a foreseeable future in which\u00a0<a href=\"https:\/\/bdtechtalks.com\/2017\/03\/21\/artificial-intelligence-and-the-disruption-of-employment\/\" target=\"_blank\" rel=\"noreferrer noopener\">AI algorithms take over all jobs<\/a>\u00a0and push humans into a struggle for meaningless survival in a world where human labor is no longer needed.<\/p>\n\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-4af18aa elementor-widget elementor-widget-text-editor\" data-id=\"4af18aa\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\t\n<p>Whether either or both of those threats are real is\u00a0<a href=\"https:\/\/bdtechtalks.com\/2020\/03\/16\/stuart-russell-human-compatible-ai\/\" target=\"_blank\" rel=\"noreferrer noopener\">hotly debated<\/a>\u00a0among scientists and thought leaders. 
But AI algorithms also pose more imminent threats that exist today, in ways that are less conspicuous and hardly understood.<\/p>\n\n\n\n<p>In her book,\u00a0<em><a href=\"https:\/\/www.amazon.com\/Weapons-Math-Destruction-Increases-Inequality\/dp\/0553418815\" target=\"_blank\" rel=\"noreferrer noopener\">Weapons of Math Destruction<\/a>: How Big Data Increases Inequality and Threatens Democracy<\/em>, mathematician Cathy O\u2019Neil explores how blindly trusting algorithms to make sensitive decisions can harm many people who are on the receiving end of those decisions.<\/p>\n\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-17cad80 elementor-widget elementor-widget-text-editor\" data-id=\"17cad80\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\t\n<p>The dangers of AI algorithms can manifest themselves in\u00a0<a href=\"https:\/\/bdtechtalks.com\/2018\/03\/26\/racist-sexist-ai-deep-learning-algorithms\/\" target=\"_blank\" rel=\"noreferrer noopener\">algorithmic bias<\/a>\u00a0and dangerous feedback loops, and they can expand to all sectors of daily life, from the economy to social interactions, to the criminal justice system.<\/p>\n\n\n\n<p>While the use of mathematics and algorithms in decision-making is nothing new, recent advances in deep learning and the proliferation of black-box AI systems amplify their effects, both good and bad. 
And if we do not understand the present threats of AI, we will not be able to benefit from its advantages.<\/p>\n\n\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-2b01ece elementor-widget elementor-widget-heading\" data-id=\"2b01ece\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"heading.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t<h2 class=\"elementor-heading-title elementor-size-default\"><h2>The characteristics of dangerous AI algorithms<\/h2>\n<!-- \/wp:heading --><\/h2>\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-261bc03 elementor-widget elementor-widget-text-editor\" data-id=\"261bc03\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\t<p>We use algorithms to model, understand, and process many things. \u201cA model, after all, is nothing more than an abstract representation of some process, be it a baseball game, an oil company\u2019s supply chain, a foreign government\u2019s actions, or a movie theater\u2019s attendance,\u201d O\u2019Neil writes in\u00a0<em>Weapons of Math Destruction<\/em>. \u201cWhether it\u2019s running in a computer program or in our head, the model takes what we know and uses it to predict responses in various situations.\u201d<\/p>\n<!-- \/wp:paragraph -->\n\n<!-- wp:paragraph -->\n<p>But more and more of those models are being transferred from our heads to computers, thanks to advances in\u00a0<a href=\"https:\/\/bdtechtalks.com\/2019\/02\/15\/what-is-deep-learning-neural-networks\/\" target=\"_blank\" rel=\"noreferrer noopener\">deep learning<\/a>\u00a0and the increased digitization of every aspect of our lives. 
Thanks to broadband internet, cloud computing, mobile devices, the\u00a0<a href=\"https:\/\/bdtechtalks.com\/2017\/09\/27\/what-is-iot-internet-of-things\/\" target=\"_blank\" rel=\"noreferrer noopener\">internet of things (IoT)<\/a>, wearables, and a slew of other emerging technologies, we can collect and process more and more data about anything and everything.<\/p>\n<!-- \/wp:paragraph -->\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-bdf7ff6 elementor-widget elementor-widget-text-editor\" data-id=\"bdf7ff6\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\t<p>This increased access to data and computing power has helped create AI algorithms that can automate an increasing number of tasks.\u00a0<a href=\"https:\/\/bdtechtalks.com\/2019\/08\/05\/what-is-artificial-neural-network-ann\/\" target=\"_blank\" rel=\"noreferrer noopener\">Deep neural networks<\/a>, which had previously been limited to research laboratories, have found their way into many areas that were previously challenging for computers, such as\u00a0<a href=\"https:\/\/bdtechtalks.com\/2019\/01\/14\/what-is-computer-vision\/\" target=\"_blank\" rel=\"noreferrer noopener\">computer vision<\/a>, machine translation, speech and facial recognition.<\/p>\n<!-- \/wp:paragraph -->\n\n<!-- wp:paragraph -->\n<p>So far, so good. 
What can go wrong?<\/p>\n<!-- \/wp:paragraph -->\n\n<!-- wp:paragraph -->\n<p>In\u00a0<em>Weapons of Math Destruction<\/em>, O\u2019Neil specifies three factors that make AI models dangerous: opacity, scale, and damage.<\/p>\n<!-- \/wp:paragraph -->\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-480611a elementor-widget elementor-widget-heading\" data-id=\"480611a\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"heading.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t<h2 class=\"elementor-heading-title elementor-size-default\"><h2>Algorithmic vs corporate opacity<\/h2>\n<!-- \/wp:heading --><\/h2>\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-6a4a2b3 elementor-widget elementor-widget-image\" data-id=\"6a4a2b3\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"image.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t<img decoding=\"async\" src=\"https:\/\/i2.wp.com\/bdtechtalks.com\/wp-content\/uploads\/2018\/07\/window-rain-transparency.jpg?resize=696%2C464&#038;ssl=1\" alt=\"\" \/>\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-f850642 elementor-widget elementor-widget-text-editor\" data-id=\"f850642\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\t<!-- wp:paragraph -->\n<p>There are two aspects to the opacity of AI systems: technical and corporate. 
The technical opacity, also referred to as\u00a0<a href=\"https:\/\/bdtechtalks.com\/2018\/09\/25\/explainable-interpretable-ai\/\" target=\"_blank\" rel=\"noreferrer noopener\">the black-box problem of artificial intelligence<\/a>, has received much attention in the past few years.<\/p>\n<!-- \/wp:paragraph -->\n\n<!-- wp:paragraph -->\n<p>In a nutshell, the question is, how do we know an AI algorithm is making the right decision? This question is becoming more critical as AI finds its way into loan application processing, credit scoring, teacher rating, recidivism prediction, and many other sensitive fields.<\/p>\n<!-- \/wp:paragraph -->\n\n<!-- wp:paragraph -->\n<p>Many media outlets have published articles that depict AI algorithms as mysterious machines whose behavior is unknown even to their developers. But contrary to what the media portrays, not\u00a0<em>all\u00a0<\/em>AI algorithms are opaque.<\/p>\n<!-- \/wp:paragraph -->\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-037a939 elementor-widget elementor-widget-text-editor\" data-id=\"037a939\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\t<p>Traditional software programs, often referred to as\u00a0<a href=\"https:\/\/bdtechtalks.com\/2019\/11\/18\/what-is-symbolic-artificial-intelligence\/\" target=\"_blank\" rel=\"noreferrer noopener\">symbolic artificial intelligence<\/a>\u00a0in AI jargon, are known for their interpretable and transparent nature. They are composed of hand-coded rules, meticulously put together by software developers and domain experts. 
They can be probed and audited, and an error can be traced to the line of code where it has occurred.<\/p>\n<!-- \/wp:paragraph -->\n\n<!-- wp:paragraph -->\n<p>In contrast,\u00a0<a href=\"https:\/\/bdtechtalks.com\/2017\/08\/28\/artificial-intelligence-machine-learning-deep-learning\/\" target=\"_blank\" rel=\"noreferrer noopener\">machine learning algorithms<\/a>, which have become increasingly popular in recent years, develop their behavior by analyzing many training examples and creating statistical inference models. This means that the developers don\u2019t necessarily have the final say on how the AI algorithms behave.<\/p>\n<!-- \/wp:paragraph -->\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-4268277 elementor-widget elementor-widget-text-editor\" data-id=\"4268277\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\t<!-- wp:paragraph -->\n<p>But again, not all <a href=\"https:\/\/www.experfy.com\/blog\/take-your-machine-learning-models-to-production-with-these-5-simple-steps\/\" target=\"_blank\" rel=\"noreferrer noopener\">machine learning models<\/a> are opaque. For instance, decision trees and linear regression models, two popular machine learning algorithms, will give clear explanations of the factors that determine their decisions. If you train a decision tree algorithm to process loan applications, it can provide you with a tree-like breakdown (thus the name) of how it decides which loan applications to confirm and which to reject. 
This provides developers with a chance to discover potentially problematic factors and correct the model.<\/p>\n<!-- \/wp:paragraph -->\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t\t<\/div>\n\t\t<\/div>\n\t\t\t\t\t<\/div>\n\t\t<\/section>\n\t\t\t\t<section class=\"has_eae_slider elementor-section elementor-top-section elementor-element elementor-element-e963fa8 elementor-section-boxed elementor-section-height-default elementor-section-height-default\" data-id=\"e963fa8\" data-element_type=\"section\" data-e-type=\"section\">\n\t\t\t\t\t\t<div class=\"elementor-container elementor-column-gap-default\">\n\t\t\t\t\t<div class=\"has_eae_slider elementor-column elementor-col-100 elementor-top-column elementor-element elementor-element-9200eae\" data-id=\"9200eae\" data-element_type=\"column\" data-e-type=\"column\">\n\t\t\t<div class=\"elementor-widget-wrap elementor-element-populated\">\n\t\t\t\t\t\t<div class=\"elementor-element elementor-element-0804d3c elementor-widget elementor-widget-image\" data-id=\"0804d3c\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"image.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t<img decoding=\"async\" src=\"https:\/\/i0.wp.com\/bdtechtalks.com\/wp-content\/uploads\/2020\/06\/loan-application-decision-tree.jpeg?resize=644%2C322&#038;ssl=1\" alt=\"\" \/>\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-0f19fb1 elementor-widget elementor-widget-text-editor\" data-id=\"0f19fb1\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\tA decision tree provides a detailed breakdown of its decision process (Source: <a href=\"https:\/\/medium.com\/@fenjiro\/data-mining-for-banking-loan-approval-use-case-e7c2bc3ece3\" target=\"_blank\" rel=\"noreferrer noopener\" 
class=\"broken_link\">Medium<\/a>)\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-e224a2b elementor-widget elementor-widget-text-editor\" data-id=\"e224a2b\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\t<p>But deep neural networks, which have become very popular in the past few years, are especially bad at revealing how they work. They are composed of layers upon layers of artificial neurons, small mathematical functions that tune their parameters to the thousands of examples they see during training. In many cases, it\u2019s very hard to probe deep learning models and determine which factors contribute to their decision-making processes.<\/p>\n<!-- \/wp:paragraph -->\n\n<!-- wp:paragraph -->\n<p>A loan application processing deep learning algorithm is an end-to-end model where a loan application goes in and a final verdict comes out. There\u2019s not a feature-by-feature breakdown on how the AI algorithm is making decisions. In most cases, a well-trained deep learning model will perform better than its less-sophisticated siblings (decision trees, support vector machines, linear regression, etc.), and it might even spot relevant patterns that will go unnoticed to human experts.<\/p>\n<!-- \/wp:paragraph -->\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-025eaa1 elementor-widget elementor-widget-text-editor\" data-id=\"025eaa1\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\t<p>However, even the most accurate deep learning systems make errors every once in a while, and when they do, it will be very hard to determine what went wrong. But a deep learning system doesn\u2019t need to make errors before its opacity turns problematic. 
Suppose an angry customer wants to know why an AI system has turned down their loan application. When you have an interpretable AI system, you\u2019ll be able to provide a clear explanation of the steps that went into the decision. When you have an opaque system, you can just shrug and say, \u201cThe computer said so.\u201d<\/p>\n<!-- \/wp:paragraph -->\n\n<!-- wp:paragraph -->\n<p>But while the technical opacity of artificial intelligence algorithms has received a lot of attention in tech media, what\u2019s less discussed are the opaque ways companies use their algorithms, even when the algorithms themselves are trivial and interpretable.<\/p>\n<!-- \/wp:paragraph -->\n\n<!-- wp:paragraph -->\n<p>\u201cEven if the participant is aware of being modeled, or what the model is used for, is the model opaque, or even invisible?\u201d O\u2019Neil asks in\u00a0<em>Weapons of Math Destruction.<\/em><\/p>\n<!-- \/wp:paragraph -->\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-31fcc7a elementor-widget elementor-widget-text-editor\" data-id=\"31fcc7a\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\t<!-- wp:paragraph -->\n<p>Companies that view their AI algorithms as corporate secrets try their best to hide them behind walled gardens to keep the edge over their competitors. We don\u2019t know much about the AI algorithm powering Google Search, the models that define our friend suggestions on Facebook or populate our feeds on Twitter, among others.<\/p>\n<!-- \/wp:paragraph -->\n\n<!-- wp:paragraph -->\n<p>Some of this secrecy is justified. For instance, if Google published the inner workings of its search algorithm, then it would become vulnerable to gaming. 
In fact, even without Google revealing much detail about its search algorithm, there\u2019s an entire industry poised to find shortcuts to top-ranking positions in Google Search. Algorithms are, after all, mindless machines that play by their own rules. They don\u2019t use common sense judgment to identify bad actors who twist the rules to devious ends.<\/p>\n<!-- \/wp:paragraph -->\n\n<!-- wp:paragraph -->\n<p>But staying on the same example, without transparency, how can we make sure that Google itself is not manipulating search results to serve its own political goals and economic interests? In 2018, U.S. President Donald Trump accused Google of\u00a0<a href=\"https:\/\/www.nytimes.com\/2018\/08\/28\/business\/media\/google-trump-news-results.html\" target=\"_blank\" rel=\"noreferrer noopener\" class=\"broken_link\">burying conservative news outlets<\/a>\u00a0in its search results and favoring liberal media. The claim put Google on the defensive, and the company\u2019s spokespersons could only promise that they would do no such thing.<\/p>\n<!-- \/wp:paragraph -->\n\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-45e810e elementor-widget elementor-widget-text-editor\" data-id=\"45e810e\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\t<p>This only shows the fine line that organizations walk when they use AI algorithms. When AI systems are not transparent, they don\u2019t even need to make errors to wreak havoc. Even the shadow of a doubt about the system\u2019s performance can be enough to cause mistrust in the system. 
On the other hand, too much transparency can also backfire and lead to disastrous results.<\/p>\n<!-- \/wp:paragraph -->\n\n<!-- wp:paragraph -->\n<p>O\u2019Neil wrote\u00a0<em>Weapons of Math Destruction\u00a0<\/em>in 2016, before rules like\u00a0<a href=\"https:\/\/bdtechtalks.com\/2018\/05\/21\/gdpr-data-artificial-intelligence-regulation\/\" target=\"_blank\" rel=\"noreferrer noopener\">GDPR<\/a>\u00a0and CCPA came into effect. Those regulations require companies to be transparent about the use of AI algorithms and allow users to investigate the decision process behind their automation systems. Other developments, such as the\u00a0<a href=\"https:\/\/bdtechtalks.com\/2019\/04\/15\/trustworthy-ethical-artificial-intelligence\/\" target=\"_blank\" rel=\"noreferrer noopener\">ethical AI rules of the European Commission<\/a>, also incentivize transparency.<\/p>\n<!-- \/wp:paragraph -->\n<!-- wp:paragraph -->\n<p>While much progress has been made to address the technical, ethical, and legal issues surrounding AI transparency, a lot more still needs to be done. 
As regulators pass new laws to regulate corporate secrecy, corporations find new ways to circumvent those rules without finding themselves in hot water, such as very long Terms of Service dialogs that inconspicuously deprive you of your right to algorithmic transparency.<\/p>\n<!-- \/wp:paragraph -->\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-9b76a1d elementor-widget elementor-widget-heading\" data-id=\"9b76a1d\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"heading.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t<h2 class=\"elementor-heading-title elementor-size-default\"><h2>Who bears the damage of AI algorithms?<\/h2>\n<!-- \/wp:heading --><\/h2>\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t\t<\/div>\n\t\t<\/div>\n\t\t\t\t\t<\/div>\n\t\t<\/section>\n\t\t\t\t<section class=\"has_eae_slider elementor-section elementor-top-section elementor-element elementor-element-9f37e12 elementor-section-boxed elementor-section-height-default elementor-section-height-default\" data-id=\"9f37e12\" data-element_type=\"section\" data-e-type=\"section\">\n\t\t\t\t\t\t<div class=\"elementor-container elementor-column-gap-default\">\n\t\t\t\t\t<div class=\"has_eae_slider elementor-column elementor-col-100 elementor-top-column elementor-element elementor-element-d3539d0\" data-id=\"d3539d0\" data-element_type=\"column\" data-e-type=\"column\">\n\t\t\t<div class=\"elementor-widget-wrap elementor-element-populated\">\n\t\t\t\t\t\t<div class=\"elementor-element elementor-element-6d54b21 elementor-widget elementor-widget-image\" data-id=\"6d54b21\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"image.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t<img decoding=\"async\" src=\"https:\/\/i2.wp.com\/bdtechtalks.com\/wp-content\/uploads\/2019\/12\/cogs-automation.jpg?resize=696%2C464&#038;ssl=1\" alt=\"\" 
\/>\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-7f37961 elementor-widget elementor-widget-text-editor\" data-id=\"7f37961\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-c178da8 elementor-widget elementor-widget-text-editor\" data-id=\"c178da8\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\t<!-- wp:paragraph -->\n<p>\u201cDoes the model work against the subject\u2019s interest? In short, is it unfair? Does it damage or destroy lives?\u201d O\u2019Neil posits in\u00a0<em>Weapons of Math Destruction<\/em>.<\/p>\n<!-- \/wp:paragraph -->\n\n<!-- wp:paragraph -->\n<p>There are plenty of examples of AI algorithms making dumb shopping suggestions, misclassifying images, and doing other silly things. But as AI models become more and more ingrained in our lives, their errors are moving from benign to destructive.<\/p>\n<!-- \/wp:paragraph -->\n\n<!-- wp:paragraph -->\n<p>In her book, O\u2019Neil explores many cases of algorithms causing damage to people\u2019s lives. Examples include credit scoring systems that wrongfully penalize people, recidivism algorithms that give heavier sentences to defendants based on their race and ethnic backgrounds, teacher-scoring systems that end up firing well-performing teachers and rewarding cheaters, and trade algorithms that make billions of dollars at the expense of low-income classes.<\/p>\n<!-- \/wp:paragraph -->\n\n<!-- wp:paragraph -->\n<p>The impact of an algorithm, combined with its lack of transparency, leads to the creation of a dangerous AI system. 
For example, O\u2019Neil says, \u201cThe new recidivism models are complicated and mathematical. But embedded within these models are a host of assumptions, some of them prejudicial,\u201d and adds, \u201cthe workings of a recidivism model are tucked away in algorithms, intelligible only to a tiny elite.\u201d<\/p>\n<!-- \/wp:paragraph -->\n\n<!-- wp:paragraph -->\n<p>This means that an AI algorithm can decide to keep a person in jail based on their race, and the defendant has no way to find out why they were deemed ineligible for pardon.<\/p>\n<!-- \/wp:paragraph -->\n\n<!-- wp:paragraph -->\n<p>Two more factors make the damage of dangerous AI algorithms even worse.<\/p>\n<!-- \/wp:paragraph -->\n\n<!-- wp:paragraph -->\n<p>First, the data. Machine learning algorithms rely on quality data for training and accuracy. If you want an image classifier to accurately detect pictures of cats, you must provide it with many labeled pictures of cats. Likewise, a loan-application algorithm would need lots of historical records of loan applications and their outcome (paid or defaulted).<\/p>\n<!-- \/wp:paragraph -->\n\n<!-- wp:paragraph -->\n<p>The problem is, those who are hurt by AI algorithms are often the people on whom there\u2019s not enough quality data. This is why loan application processors provide better services to those who already have adequate access to banking and penalize the unbanked and underprivileged who have been largely excluded from the financial system.<\/p>\n<!-- \/wp:paragraph -->\n\n<!-- wp:paragraph -->\n<p>The second problem is the feedback loop. 
When an AI algorithm starts to make problematic decisions, its behavior generates more erroneous data, which is in turn used to further hone the algorithm, which causes even more prejudice, and the cycle continues endlessly.<\/p>\n<!-- \/wp:paragraph -->\n\n<!-- wp:paragraph -->\n<p>On the topic of policing, O\u2019Neil argues that prejudiced crime prediction causes more police presence in impoverished neighborhoods. \u201cThis creates a pernicious feedback loop,\u201d she writes. \u201cThe policing itself spawns new data, which justifies more policing. And our prisons fill up with hundreds of thousands of people found guilty of victimless crimes.\u201d<\/p>\n<!-- \/wp:paragraph -->\n\n<!-- wp:paragraph -->\n<p>When you piece together the bigger picture of how all these disparate yet interconnected AI systems feed into each other, you\u2019ll see how the real harm happens. Here\u2019s how O\u2019Neil summarizes the situation: \u201cPoor people are more likely to have bad credit and live in high-crime neighborhoods, surrounded by other poor people. Once the dark universe of WMDs digests that data, it showers them with predatory ads for subprime loans or for-profit schools. It sends more police to arrest them, and when they\u2019re convicted it sentences them to longer terms. This data feeds into other WMDs, which score the same people as high risks or easy targets and proceed to block them from jobs, while jacking up their rates for mortgages, car loans, and every kind of insurance imaginable. This drives their credit rating down further, creating nothing less than a death spiral of modeling. 
Being poor in a world of WMDs is getting more and more dangerous and expensive.\u201d<\/p>\n<!-- \/wp:paragraph -->\n\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-9de1b64 elementor-widget elementor-widget-heading\" data-id=\"9de1b64\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"heading.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t<h2 class=\"elementor-heading-title elementor-size-default\"><h2>The explosive scale of algorithmic harm<\/h2>\n<!-- \/wp:heading --><\/h2>\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-2d5c68e elementor-widget elementor-widget-image\" data-id=\"2d5c68e\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"image.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t<img decoding=\"async\" src=\"https:\/\/i0.wp.com\/bdtechtalks.com\/wp-content\/uploads\/2016\/04\/6914441342_775b4ab9a7_o.jpg?resize=640%2C480&#038;ssl=1\" alt=\"\" \/>\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-e395507 elementor-widget elementor-widget-text-editor\" data-id=\"e395507\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\t&#8220;Big Data&#8221;\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-5c510de elementor-widget elementor-widget-text-editor\" data-id=\"5c510de\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\t<!-- wp:paragraph -->\n<p>\u201cThe third question is whether a model has the capacity to grow exponentially. 
As a statistician would put it, can it scale?\u201d O\u2019Neil writes in\u00a0<em>Weapons of Math Destruction<\/em>.<\/p>\n<!-- \/wp:paragraph -->\n\n<!-- wp:paragraph -->\n<p>Consider the Google Search example we discussed earlier. Billions of people use Google Search to find answers to important questions about health, politics, and social issues. A tiny mistake in Google\u2019s AI algorithm can have a massive impact on public opinion.<\/p>\n<!-- \/wp:paragraph -->\n\n<!-- wp:paragraph -->\n<p>Likewise, Facebook\u2019s ranking algorithms decide the news that hundreds of millions of people see every day. If those algorithms are faulty, malicious actors can game them to spread fake, sensational news. Even when there\u2019s no direct malicious intent, they can still cause harm. For instance, news feed algorithms that favor engaging content can\u00a0<a href=\"https:\/\/bdtechtalks.com\/2019\/05\/20\/artificial-intelligence-filter-bubbles-news-bias\/\" target=\"_blank\" rel=\"noreferrer noopener\">amplify biases and create filter bubbles<\/a>, making users less tolerant of alternative views.<\/p>\n<!-- \/wp:paragraph -->\n\n<!-- wp:paragraph -->\n<p>When opaque and faulty AI algorithms determine credit scores for hundreds of millions of people or decide the fate of the country\u2019s education system, then you have all the elements of a weapon of math destruction.<\/p>\n<!-- \/wp:paragraph -->\n\n<!-- wp:paragraph -->\n<p>So, what should be done about this? We need to acknowledge the limits of the AI algorithms that we deploy. While having an automated system that relieves you of the duty of making tough decisions might seem tempting, you must understand when humans are on the receiving end of those decisions and how they are affected.<\/p>\n<!-- \/wp:paragraph -->\n\n<!-- wp:paragraph -->\n<p>\u201cBig Data processes codify the past. They do not invent the future. Doing that requires moral imagination, and that\u2019s something only humans can provide. 
We have to explicitly embed better values into our algorithms, creating Big Data models that follow our ethical lead. Sometimes that will mean putting fairness ahead of profit,\u201d O\u2019Neil writes.<\/p>\n<!-- \/wp:paragraph -->\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t\t<\/div>\n\t\t<\/div>\n\t\t\t\t\t<\/div>\n\t\t<\/section>\n\t\t\t\t<\/div>\n\t\t","protected":false},"excerpt":{"rendered":"<p>The dangers of AI algorithms can manifest themselves in algorithmic bias and dangerous feedback loops, and they can expand to all sectors of daily life, from the economy to social interactions, to the criminal justice system.<\/p>\n","protected":false},"author":109,"featured_media":8675,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"content-type":"","footnotes":""},"categories":[183],"tags":[360,361,97],"ppma_author":[1946],"class_list":["post-8674","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-ai-ml","tag-ai-algorithms","tag-algorithm-bias","tag-artificial-intelligence"],"authors":[{"term_id":1946,"user_id":109,"is_guest":0,"slug":"ben-dickson","display_name":"Ben Dickson","avatar_url":"https:\/\/www.experfy.com\/blog\/wp-content\/uploads\/2020\/04\/medium_8aaf6bea-c4c1-455f-8156-8007d70910f8-150x150.jpg","user_url":"https:\/\/bdtechtalks.com\/","last_name":"Dickson","first_name":"Ben","job_title":"","description":"Ben Dickson is an experienced software engineer and tech blogger. 
He contributes regularly to major tech websites such as the Next Web, the Daily Dot, PCMag.com, Cointelegraph, VentureBeat, International Business Times UK, and The Huffington Post."}],"_links":{"self":[{"href":"https:\/\/www.experfy.com\/blog\/wp-json\/wp\/v2\/posts\/8674","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.experfy.com\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.experfy.com\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.experfy.com\/blog\/wp-json\/wp\/v2\/users\/109"}],"replies":[{"embeddable":true,"href":"https:\/\/www.experfy.com\/blog\/wp-json\/wp\/v2\/comments?post=8674"}],"version-history":[{"count":5,"href":"https:\/\/www.experfy.com\/blog\/wp-json\/wp\/v2\/posts\/8674\/revisions"}],"predecessor-version":[{"id":34612,"href":"https:\/\/www.experfy.com\/blog\/wp-json\/wp\/v2\/posts\/8674\/revisions\/34612"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.experfy.com\/blog\/wp-json\/wp\/v2\/media\/8675"}],"wp:attachment":[{"href":"https:\/\/www.experfy.com\/blog\/wp-json\/wp\/v2\/media?parent=8674"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.experfy.com\/blog\/wp-json\/wp\/v2\/categories?post=8674"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.experfy.com\/blog\/wp-json\/wp\/v2\/tags?post=8674"},{"taxonomy":"author","embeddable":true,"href":"https:\/\/www.experfy.com\/blog\/wp-json\/wp\/v2\/ppma_author?post=8674"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}