{"id":2151,"date":"2019-12-23T04:41:58","date_gmt":"2019-12-23T04:41:58","guid":{"rendered":"http:\/\/kusuaks7\/?p=1756"},"modified":"2024-02-05T12:16:51","modified_gmt":"2024-02-05T12:16:51","slug":"why-we-should-be-careful-when-developing-ai","status":"publish","type":"post","link":"https:\/\/www.experfy.com\/blog\/ai-ml\/why-we-should-be-careful-when-developing-ai\/","title":{"rendered":"Why We Should Be Careful When Developing AI"},"content":{"rendered":"\t\t<div data-elementor-type=\"wp-post\" data-elementor-id=\"2151\" class=\"elementor elementor-2151\" data-elementor-post-type=\"post\">\n\t\t\t\t\t\t<section class=\"has_eae_slider elementor-section elementor-top-section elementor-element elementor-element-ccc5b5e elementor-section-boxed elementor-section-height-default elementor-section-height-default\" data-id=\"ccc5b5e\" data-element_type=\"section\" data-e-type=\"section\">\n\t\t\t\t\t\t<div class=\"elementor-container elementor-column-gap-default\">\n\t\t\t\t\t<div class=\"has_eae_slider elementor-column elementor-col-100 elementor-top-column elementor-element elementor-element-234484e6\" data-id=\"234484e6\" data-element_type=\"column\" data-e-type=\"column\">\n\t\t\t<div class=\"elementor-widget-wrap elementor-element-populated\">\n\t\t\t\t\t\t<div class=\"elementor-element elementor-element-422441a elementor-widget elementor-widget-text-editor\" data-id=\"422441a\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\t\n<p>For simplicity, let\u2019s assume there are three customers (c1, c2, c3) in this batch, and one vehicle (v1) information is provided as a sale.<\/p>\n\n\n\n<ul class=\"wp-block-list\"><li>P(C=c1) represents the likelihood of c1 to buy any car. Assuming no prior knowledge about each customer, their likelihood of buying any car should be the same: P(C=c1) = P(C=c2) = P(C=c3), which equals a constant (e.g. 
1\/3 in this situation)<\/li><li>P(V=v1) is the likelihood for v1 to be sold, given it is shown in this batch, this should be 1 (100% likelihood to be sold)<\/li><\/ul>\n\n\n\n<p>Since there is only one customer making the purchase, this probability can be extended into:<\/p>\n\n\n\n<p>P(V=v1) = P(C=c1, V=v1) + P(C=c2, V=v1) + P(C=c3, V=v1) = 1.0<\/p>\n\n\n\n<p>For each of the item, given the following formula<\/p>\n\n\n\n<p>P(C=c1, V=v1) = P(C=c1|V=v1) * P(V=v1) = P(V=v1|C=c1) * P(C=c1)<\/p>\n\n\n\n<p>We can see P(C=c1|V=v1) is proportional to P(V=v1|C=c1). So now, we can get the formula for the probability calculation:<\/p>\n\n\n\n<p>P(C=c1|V=v1) = P(V=v1|C=c1) \/ (P(V=v1|C=c1) + P(V=v1|C=c2) + P(V=v1|C=c3))<\/p>\n\n\n\n<p>and the key is to get the probability for each P(V|C). Such a formula can be verbally explained as: the likelihood for a vehicle to be purchased by a specific customer is proportional to the likelihood for the customer to buy this specific vehicle.<\/p>\n\n\n\n<p>The above formula may look too \u201cmathematical\u201d, so let me put it into an intuitive context: assuming three people were in a room, one is a musician, one is an athlete, and one is a data scientist. You were told there is a violin in this room belong to one of them. Now guess, whom do you think is the owner of the violin? This is pretty straightforward, right? given the likelihood of musician to own a violin is high, and the likelihood of athlete and data scientists to own a violin is lower, it is much more likely for the violin to belong to the musician. 
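The same normalisation can be sketched in a few lines of Python. This is a minimal illustration of the formula above, applied to the violin example; the numeric P(V|C)-style likelihood values are hypothetical, chosen only for illustration, since no actual numbers are given:

```python
# Posterior P(C=c | V=v1) computed from the likelihoods P(V=v1 | C=c),
# following the normalisation formula above. The likelihood values
# below are hypothetical, chosen only for illustration.

def posterior(likelihoods):
    """Turn likelihoods P(V=v1|C=c) into posteriors P(C=c|V=v1).

    Assumes a uniform prior P(C=c), so each posterior is simply the
    likelihood divided by the sum of all likelihoods.
    """
    total = sum(likelihoods.values())
    return {c: p / total for c, p in likelihoods.items()}

# Violin example: likelihood that each person owns a violin (hypothetical).
p_violin_given_person = {
    "musician": 0.8,
    "athlete": 0.05,
    "data scientist": 0.15,
}

p_person_given_violin = posterior(p_violin_given_person)
print(p_person_given_violin)
```

Because the musician's likelihood dominates, the posterior puts most of the probability mass on the musician, exactly as the intuition suggests.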
The \u201cmathematical\u201d thinking process is illustrated below.<\/p>\n\n\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-36488d65 elementor-widget elementor-widget-text-editor\" data-id=\"36488d65\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\t\nArtificial intelligence offers organisations a lot of advantages, creating better and more efficient operations, improving customer services with\u00a0<a href=\"https:\/\/vanrijmenam.nl\/how-develop-conversational-ai-business\/\" class=\"broken_link\" rel=\"noopener\">conversational AI<\/a>\u00a0and reducing a wide variety of risks in different industries. Although we are only at the start of the AI revolution, we can already see that artificial intelligence will have a profound effect on our lives, both positively and negatively.\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-fd2c437 elementor-widget elementor-widget-text-editor\" data-id=\"fd2c437\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\tThe financial impact of AI on the global economy is estimated to reach\u00a0<a href=\"https:\/\/www.pwc.com\/gx\/en\/issues\/analytics\/assets\/pwc-ai-analysis-sizing-the-prize-report.pdf\" rel=\"noopener\">US$15.7 trillion<\/a>\u00a0by 2030, with\u00a040% of\u00a0jobs expected to be lost due to\u00a0<a href=\"https:\/\/vanrijmenam.nl\/understanding-elements-of-ai\/\" class=\"broken_link\" rel=\"noopener\">artificial intelligence<\/a>, and global venture capital investment in AI grew to more than\u00a0<a href=\"https:\/\/www.stateof.ai\/\" rel=\"noopener\">US$27 billion<\/a>\u00a0in 2018. 
Such estimates of AI\u2019s potential reflect a broad understanding of its nature and applicability.\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-7e0fdc3 elementor-widget elementor-widget-heading\" data-id=\"7e0fdc3\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"heading.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t<h2 class=\"elementor-heading-title elementor-size-default\"><h2><strong>The Rapid Developments of AI<\/strong><\/h2><\/h2>\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-f8c9acf elementor-widget elementor-widget-text-editor\" data-id=\"f8c9acf\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\tAI will eventually consist of entirely novel and unrecognisable forms of intelligence, and we can see the first signals of this in the rapid developments of AI.\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-1a76613 elementor-widget elementor-widget-text-editor\" data-id=\"1a76613\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\tIn 2017, Google\u2019s Deepmind developed\u00a0<a href=\"https:\/\/deepmind.com\/blog\/article\/alphago-zero-starting-scratch\" rel=\"noopener\">AlphaGo Zero<\/a>, an AI agent that learned the abstract strategy board game Go, which has a far more expansive range of possible moves than chess. 
Within three days, by playing thousands of games against itself and without the large volumes of data normally required to develop AI, the AI agent beat the original AlphaGo, the algorithm that had beaten 18-time world champion Lee Sedol.\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-0cffb5d elementor-widget elementor-widget-text-editor\" data-id=\"0cffb5d\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\tAt the end of 2018, Deepmind went even further by creating\u00a0<a href=\"https:\/\/deepmind.com\/blog\/alphastar-mastering-real-time-strategy-game-starcraft-ii\/\" rel=\"noopener\">AlphaStar<\/a>. This AI system played against two grandmaster StarCraft II players and won. In a series of test matches, the AI agent won 5\u20130. Thanks to a deep neural network, trained directly from raw game data via both supervised and reinforcement learning, it was able to secure the victory. It quickly surpassed professional players with its ability to combine short-term and long-term goals, respond appropriately to situations (even with imperfect information) and adapt to unexpected events.\n\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-2e5c9a7 elementor-widget elementor-widget-text-editor\" data-id=\"2e5c9a7\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\tIn November 2018, China\u2019s state news agency Xinhua developed\u00a0<a href=\"https:\/\/www.theguardian.com\/world\/2018\/nov\/09\/worlds-first-ai-news-anchor-unveiled-in-china\" rel=\"noopener\">AI anchors<\/a>\u00a0to present the news. 
The AI agents are capable of simulating the voice, facial movements, and gestures of real-life broadcasters.\n\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-56069ce elementor-widget elementor-widget-text-editor\" data-id=\"56069ce\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\tEarly in 2019, OpenAI (the now for-profit AI research organisation originally co-founded by Elon Musk) created\u00a0<a href=\"https:\/\/vanrijmenam.nl\/how-control-ai-becomes-too-advanced\/\" class=\"broken_link\" rel=\"noopener\">GPT2<\/a>. The AI system is so good at writing a text based on just a few lines of input that OpenAI decided not to release the comprehensive research to the public, out of fear of misuse. Since then, there have been multiple successful\u00a0<a href=\"https:\/\/medium.com\/@NPCollapse\/replicating-gpt2-1-5b-86454a7f26af\" class=\"broken_link\" rel=\"noopener\">replications<\/a>\u00a0by others.\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-0faee6e elementor-widget elementor-widget-text-editor\" data-id=\"0faee6e\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\tA few months later, in April 2019,\u00a0<a href=\"https:\/\/openai.com\/five\/\" class=\"broken_link\" rel=\"noopener\">OpenAI<\/a>\u00a0trained five neural networks to beat a world champion e-sports team in the game\u00a0<em>Dota 2<\/em>\u2014a complex strategy game that requires players to collaborate to win. 
The five bots had learned the game by playing against themselves at a rate of a staggering 180 years of gameplay per day.\n\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-5b5cc00 elementor-widget elementor-widget-text-editor\" data-id=\"5b5cc00\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\tIn September 2019, researchers at OpenAI trained two opposing AI agents to play the game\u00a0<a href=\"https:\/\/www.technologyreview.com\/s\/614325\/open-ai-algorithms-learned-tool-use-and-cooperation-after-hide-and-seek-games\/\" rel=\"noopener\">hide and seek<\/a>. After nearly 500 million games, the two artificial agents were capable of developing complex hiding and seeking strategies that involved tool use and collaboration.\n\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-e1f77f6 elementor-widget elementor-widget-text-editor\" data-id=\"e1f77f6\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\tAI is also rapidly moving outside the research domain. Most of us have become familiar with the recommendation engines of Netflix, Facebook or Amazon; AI personal assistants such as Siri, Alexa or Home; AI lawyers such as ROSS; AI doctors such as IBM Watson; AI autonomous cars developed by Tesla or AI facial recognition developed by numerous companies. 
AI is here and ready to change our society.\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-9d76684 elementor-widget elementor-widget-heading\" data-id=\"9d76684\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"heading.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t<h2 class=\"elementor-heading-title elementor-size-default\"><h2><strong>The Rapid AI Developments Should be a Cause for Concern<\/strong><\/h2><\/h2>\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-c7584d7 elementor-widget elementor-widget-text-editor\" data-id=\"c7584d7\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\tHowever, although AI is being developed at such a rapid pace, AI itself is still far from perfect. Too often, artificial intelligence is biased. Biased AI is a serious problem and challenging to solve, as AI is trained using biased data and developed by biased humans. This has resulted in many examples of AI gone rogue, including facial recognition systems not recognising people with dark skin, or Google Translate producing gender-biased translations (the Turkish language is gender-neutral, but Google swaps the gender of pronouns when translating).\n\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-16c0edd elementor-widget elementor-widget-text-editor\" data-id=\"16c0edd\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\tWe have also seen many examples where AI has been used to cause harm deliberately. These include deepfakes used to generate fake news and hoaxes, and privacy-invading facial recognition cameras. 
Machine learning models can also easily be tricked by adversarial attacks: an imperceptible universal noise filter, or even a sticker placed next to an object in an image, can completely change a model\u2019s output.\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-9ddf5bc elementor-widget elementor-widget-text-editor\" data-id=\"9ddf5bc\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\tThe point is that although researchers are rapidly developing very advanced AI, there are still a lot of problems when we bring AI into the real world. The more advanced AI becomes, and the less we understand its inner workings, the more problematic this becomes.\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-3824245 elementor-widget elementor-widget-heading\" data-id=\"3824245\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"heading.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t<h2 class=\"elementor-heading-title elementor-size-default\">\n<h2><strong>We Need to Develop Responsible Artificial Intelligence<\/strong><\/h2><\/h2>\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-ba7fc18 elementor-widget elementor-widget-text-editor\" data-id=\"ba7fc18\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\tGovernments and organisations are all in an AI arms race to be the first to develop Artificial General Intelligence (AGI) or Super Artificial Intelligence (SAI). AGI refers to AI systems having autonomous self-control and self-understanding and the ability to learn new things to solve a wide variety of problems in different contexts. 
SAI is intelligence far exceeding that of any person, however clever. SAI would be able to manipulate and control humans as well as other artificial intelligent agents and achieve domination. The\u00a0<a href=\"https:\/\/www.nytimes.com\/2019\/10\/08\/opinion\/artificial-intelligence.html\" class=\"broken_link\" rel=\"noopener\">arrival of SAI<\/a>\u00a0will be the most significant event in human history, and if we get it wrong, we have a serious problem.\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-3bc911e elementor-widget elementor-widget-text-editor\" data-id=\"3bc911e\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\t\nTherefore, we need to shift our focus from rapidly developing the most advanced AI to developing Responsible AI, where organisations exercise strict control, supervision and monitoring on the performance and actions of AI.\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-d1af8ea elementor-widget elementor-widget-text-editor\" data-id=\"d1af8ea\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\tFailures in achieving Responsible AI can be divided into two, non-mutually exclusive categories:\u00a0<a href=\"https:\/\/intelligence.org\/files\/AIPosNegFactor.pdf\" rel=\"noopener\">philosophical failure and technical failure<\/a>. Developers can build the wrong thing so that even if AGI or SAI is achieved, it will not be beneficial to humanity. Or developers can attempt to do the right thing but fail because of a lack of technical expertise, which would prevent us from achieving AGI or SAI in the first place. 
The border between these two failures is thin, because \u2018in theory, you ought first to say what you\u00a0<em>want<\/em>, then figure out\u00a0<em>how\u00a0<\/em>to get it. In practice, it often takes a deep technical understanding to figure out what you want\u2019 [1].\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-0a8fbbc elementor-widget elementor-widget-text-editor\" data-id=\"0a8fbbc\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\tNot everyone believes in the existential risks of AI, simply because they say AGI or SAI will not cause any problems or because if existential risks do indeed exist, AI itself will solve these risks, which means that in both instances,\u00a0<a href=\"https:\/\/www.newyorker.com\/magazine\/2015\/11\/23\/doomsday-invention-artificial-intelligence-nick-bostrom\" rel=\"noopener\">nothing happens<\/a>.\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-6d42676 elementor-widget elementor-widget-text-editor\" data-id=\"6d42676\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\tNevertheless, SAI is likely to be extremely powerful and dangerous if not appropriately controlled. Simply because AGI and SAI will be able to reshape the world according to its preferences, this may not be human-friendly. 
Not because it would hate humans, but because it would not care about humans.\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-a770198 elementor-widget elementor-widget-text-editor\" data-id=\"a770198\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\tIt is the same as how you might go \u2018out of your way\u2019 to avoid stepping on an ant while walking, but you would not care about a large ant nest if it happened to be at the location where you plan a new apartment building. In addition, SAI will be\u00a0<a href=\"https:\/\/www.fhi.ox.ac.uk\/wp-content\/uploads\/Racing-to-the-precipice-a-model-of-artificial-intelligence-development.pdf\" rel=\"noopener\">capable<\/a>\u00a0of resisting any human control. As such, AGI and SAI pose different risks than any other known existential risk humans have faced before, such as nuclear war, and require a fundamentally different approach.\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-b2d9e6f elementor-widget elementor-widget-heading\" data-id=\"b2d9e6f\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"heading.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t<h3 class=\"elementor-heading-title elementor-size-default\"><h3><strong>Algorithms are Literal<\/strong><\/h3><\/h3>\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-5c68609 elementor-widget elementor-widget-text-editor\" data-id=\"5c68609\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\tTo make matters worse, algorithms, and therefore AI, are extremely literal. 
They pursue their (ultimate) goal literally and do exactly what they are told while ignoring other important considerations. An algorithm only understands what it has been explicitly told. Algorithms are not yet, and perhaps never will be, smart enough to know what they do not know. As such, they might miss vital considerations that we humans would have thought of automatically.\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t\t<\/div>\n\t\t<\/div>\n\t\t\t\t\t<\/div>\n\t\t<\/section>\n\t\t\t\t<section class=\"has_eae_slider elementor-section elementor-top-section elementor-element elementor-element-c93c749 elementor-section-boxed elementor-section-height-default elementor-section-height-default\" data-id=\"c93c749\" data-element_type=\"section\" data-e-type=\"section\">\n\t\t\t\t\t\t<div class=\"elementor-container elementor-column-gap-default\">\n\t\t\t\t\t<div class=\"has_eae_slider elementor-column elementor-col-100 elementor-top-column elementor-element elementor-element-28cb159\" data-id=\"28cb159\" data-element_type=\"column\" data-e-type=\"column\">\n\t\t\t<div class=\"elementor-widget-wrap elementor-element-populated\">\n\t\t\t\t\t\t<div class=\"elementor-element elementor-element-0b10181 elementor-widget elementor-widget-text-editor\" data-id=\"0b10181\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\tSo, it is important to tell an algorithm as much as possible when developing it. The more you tell, i.e. train, the algorithm, the more it considers. Next to that, when designing the algorithm, you must be crystal clear about what you want the algorithm to do and not to do. Algorithms focus on the data they have access to, and often that data has a short-term focus. As a result, algorithms tend to focus on the short term. Humans, most of them anyway, understand the importance of a long-term approach. 
Algorithms do not unless they are told to focus on the long-term.\n\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-3608d04 elementor-widget elementor-widget-text-editor\" data-id=\"3608d04\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\tDevelopers (and managers) should ensure algorithms are consistent with any long-term objectives that have been set within the area of focus. This can be achieved by offering a wider variety of data sources (the context) to incorporate into its decisions and focusing on so-called soft goals as well (which relates to behaviours and attitudes in others). Using a variety of long-term- and short-term-focused data sources, as well as offering algorithms\u00a0<a href=\"https:\/\/hbr.org\/2016\/01\/algorithms-need-managers-too\" rel=\"noopener\">soft goals and hard goals<\/a>, will help to create a stable algorithm.\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-ac04152 elementor-widget elementor-widget-heading\" data-id=\"ac04152\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"heading.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t<h3 class=\"elementor-heading-title elementor-size-default\"><h3><strong>Unbiased, Mixed Data Approach to Include the Wider Context<\/strong><\/h3><\/h3>\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-5230c99 elementor-widget elementor-widget-text-editor\" data-id=\"5230c99\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\tApart from the right goals, the most critical aspect in developing the right AI is to use unbiased data to train an AI agent as well as minimise the influence of the 
biased developer. An approach where AI learns by playing against itself and is only given the rules of a game can help in that instance.\n\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-3a2ad78 elementor-widget elementor-widget-text-editor\" data-id=\"3a2ad78\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\tHowever, not all AI can be developed by playing against itself. Many AI systems still require data, and they require a lot of unbiased data. When training an AI agent, a\u00a0<a href=\"https:\/\/datafloq.com\/read\/simplest-explanation-of-big-data\/43\" rel=\"noopener\">mixed data<\/a>\u00a0approach can help to calibrate the different data sources for their relative importance, resulting in better predictions and better algorithms. The more data sources and the more diverse these are, the better the predictions of the algorithms will become.\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-9596c3f elementor-widget elementor-widget-text-editor\" data-id=\"9596c3f\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\tThis will enable AI to learn from its environment and improve over time through deep learning and machine learning. AI is not limited by information overload, complex and dynamic situations, lack of complete understanding of the environment (due to unknown unknowns), or overconfidence in its own knowledge or influence. 
It can take into account all available data, information and knowledge and is not influenced by emotions.\n\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-b225a43 elementor-widget elementor-widget-text-editor\" data-id=\"b225a43\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\tAlthough reinforcement learning, and increasingly transfer learning \u2013 applying knowledge learned in one domain in a different but related domain \u2013 would allow AI to rework its internal workings, AI is not yet sentient, cognisant or self-aware. That is, it cannot\u00a0<a href=\"https:\/\/www.tandfonline.com\/doi\/abs\/10.1080\/01402390.2015.1088838\" class=\"broken_link\" rel=\"noopener\">interpret meaning from data<\/a>. AI might recognise a cat, but it does not know what a cat is. To the AI, a cat is a collection of pixel intensities, not a carnivorous mammal often kept as an indoor pet.\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-f46943a elementor-widget elementor-widget-heading\" data-id=\"f46943a\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"heading.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t<h2 class=\"elementor-heading-title elementor-size-default\"><h3><strong>AI are Black Boxes<\/strong><\/h3><\/h2>\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-d0c1dd2 elementor-widget elementor-widget-text-editor\" data-id=\"d0c1dd2\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\tAnother problem with AI is that they are\u00a0<a 
href=\"https:\/\/medium.com\/@markvanrijmenam\/algorithms-are-black-boxes-that-is-why-we-need-explainable-ai-72e8f9ea5438?source=friends_link&amp;sk=65c7d463f9fd4a54591f4be23fe2f31b\" class=\"broken_link\" rel=\"noopener\">black boxes<\/a>. Often, we do not know why an algorithm comes to a certain decision. Algorithms can make great predictions on a wide range of topics, but that does not mean AI decisions are error-free. On the contrary, as we have seen with\u00a0<em>Tay<\/em>: AI \u2018preserves the biases inherent in the dataset and its underlying code\u2019, resulting in biased outputs that could\u00a0<a href=\"https:\/\/www.amazon.com\/Weapons-Math-Destruction-Increases-Inequality\/dp\/0553418815\" rel=\"noopener\">inflict significant damage<\/a>.\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-c43aa3c elementor-widget elementor-widget-video\" data-id=\"c43aa3c\" data-element_type=\"widget\" data-e-type=\"widget\" data-settings=\"{&quot;youtube_url&quot;:&quot;https:\\\/\\\/www.youtube.com\\\/embed\\\/R9OHn5ZF4Uo?enablejsapi=1&amp;autoplay=0&amp;cc_load_policy=0&amp;iv_load_policy=1&amp;loop=0&amp;modestbranding=0&amp;rel=0&amp;fs=1&amp;playsinline=0&amp;autohide=2&amp;theme=dark&amp;color=red&amp;controls=1&amp;amp&quot;,&quot;video_type&quot;:&quot;youtube&quot;,&quot;controls&quot;:&quot;yes&quot;}\" data-widget_type=\"video.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t<div class=\"elementor-wrapper elementor-open-inline\">\n\t\t\t<div class=\"elementor-video\"><\/div>\t\t<\/div>\n\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-29723e7 elementor-widget elementor-widget-text-editor\" data-id=\"29723e7\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\tIn addition, how much are these predictions worth, if we don\u2019t 
understand the reasoning behind them? Automated decision-making is great until it has a negative outcome for you or your organisation, and you cannot change that decision or, at least, understand the rationale behind it.\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-cb1abf4 elementor-widget elementor-widget-text-editor\" data-id=\"cb1abf4\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\tWhatever happens inside an algorithm is sometimes only known to the organisation that uses it, yet quite often this goes beyond their understanding as well. Therefore, it is important to have explanatory capabilities within the algorithm, to understand why a certain decision was made.\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-7d82a48 elementor-widget elementor-widget-heading\" data-id=\"7d82a48\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"heading.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t<h2 class=\"elementor-heading-title elementor-size-default\"><h2><strong>The Need for Explainable AI<\/strong><\/h2><\/h2>\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-755a238 elementor-widget elementor-widget-text-editor\" data-id=\"755a238\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\tThe term Explainable AI (XAI) was first coined in 2004 as a way to offer users of AI an easily understood chain of reasoning on the decisions made by the AI, in this case especially for simulation games [2]. XAI relates to explanatory capabilities within an algorithm to help understand why certain decisions were made. 
As machines take on more responsibilities, they should be held accountable for their actions. XAI should present the user with an easy-to-understand chain of reasoning for its decisions. When AI is capable of asking itself the right questions at the right moment to explain a certain action or situation, basically debugging its own code, it can create trust and improve the overall system.\n\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-de91bfa elementor-widget elementor-widget-text-editor\" data-id=\"de91bfa\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\tExplainable AI should be an important aspect of any algorithm. When the algorithm can explain why certain decisions have been or will be made and what the strengths and weaknesses of that decision are, the algorithm becomes accountable for its actions, just like humans are. 
It can then be altered and improved if it becomes (too) biased or too literal, resulting in better AI for everyone.\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-60821f9 elementor-widget elementor-widget-heading\" data-id=\"60821f9\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"heading.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t<h2 class=\"elementor-heading-title elementor-size-default\"><h2><strong>Ethical AI and Why That is So Difficult<\/strong><\/h2><\/h2>\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-f7aa5c9 elementor-widget elementor-widget-text-editor\" data-id=\"f7aa5c9\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\tResponsible AI can be achieved by using unbiased data, minimising the influence of biased developers, having a mixed data approach to include the context and developing AI that can explain itself. The final step in developing responsible AI is incorporating ethics into AI.\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-62695de elementor-widget elementor-widget-text-editor\" data-id=\"62695de\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\tEthical AI is completely different from XAI, and it is an\u00a0<a href=\"https:\/\/venturebeat.com\/2019\/10\/07\/how-to-operationalize-ai-ethics\/\" class=\"broken_link\" rel=\"noopener\">enormous challenge to achieve<\/a>. The difficulty with creating an AI capable of ethical behaviour is that ethics can be variable, contextual, complex and changeable [3, 4, 5]. The ethics we valued 300 years ago are not the same in today\u2019s world. 
What we deem ethical today might be illegal tomorrow. As such, we do not want ethics in AI to be fixed, as that could limit AI\u2019s potential and adversely affect society.\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-c4da6c0 elementor-widget elementor-widget-text-editor\" data-id=\"c4da6c0\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\t\nAI ethics is a difficult field because the future behaviour of advanced forms of self-improving AI is difficult to understand if the AI changes its inner workings without providing insight into those changes; hence the need for XAI. Therefore, ethics should be part of AI design today, to ensure ethics is embedded in the code. However, some\u00a0<a href=\"https:\/\/www.nytimes.com\/2009\/04\/07\/opinion\/07Brooks.html\" class=\"broken_link\" rel=\"noopener\">argue<\/a>\u00a0that ethical choices can only be made by beings that have emotions, since ethical choices are generally motivated by them.\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-c928d29 elementor-widget elementor-widget-text-editor\" data-id=\"c928d29\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\tAlready in 1677, Benedictus de Spinoza, one of the great rationalists of 17th-century philosophy, defined moral agency as \u2018emotionally motivated rational action to preserve one\u2019s own physical and mental existence within a community of other rational actors\u2019. However, how would that affect artificial agents, and how would AI ethics change if one sees AI as moral agents that are sentient and sapient? 
When we think about applying ethics in an artificial context, we have to be careful \u2018not to mistake mid-level ethical principles for foundational normative truths\u2019 [6].\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-31ec5a7 elementor-widget elementor-widget-heading\" data-id=\"31ec5a7\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"heading.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t<h3 class=\"elementor-heading-title elementor-size-default\"><h3><strong>Good versus Bad Decisions<\/strong><\/h3><\/h3>\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-0297c3f elementor-widget elementor-widget-text-editor\" data-id=\"0297c3f\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\tIn addition, the problem we face when developing AI ethics, or machine ethics, is that it relates to\u00a0<em>good\u00a0<\/em>and\u00a0<em>bad\u00a0<\/em>decisions. Yet, it is unclear what\u00a0<em>good<\/em>\u00a0or\u00a0<em>bad\u00a0<\/em>means. It means something different for everyone across time and space. What is defined as good in the Western world might be considered bad in Asian culture and vice versa. 
Furthermore, machine ethics are likely to be superior to human ethics.\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-8745216 elementor-widget elementor-widget-text-editor\" data-id=\"8745216\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\tFirst, humans tend to make estimations, while machines can generally calculate the outcome of a decision with more\u00a0<a href=\"https:\/\/www.aaai.org\/Papers\/Workshops\/2004\/WS-04-02\/WS04-02-008.pdf\" rel=\"noopener\">precision<\/a>. Second, humans do not necessarily consider all options and may favour partiality, while machines can consider all options and be strictly impartial. Third, machines are unemotional, while with humans, emotions can limit decision-making capabilities (although at times, emotions can also be very useful in decision-making). Although it is likely that AI ethics will be superior to human ethics, that day is still far away.\n\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-206bc74 elementor-widget elementor-widget-text-editor\" data-id=\"206bc74\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\tThe technical challenges of instilling ethics within algorithms are numerous, because as their social impact increases, ethical problems increase as well. However, the behaviour of AI is not only influenced by the mathematical models that make up the algorithm but also directly influenced by the data the algorithm processes. As mentioned, poorly prepared or biased data results in incorrect outcomes: \u2018<em><a href=\"https:\/\/vanrijmenam.nl\/big-data-scientists-can-benefit-human-judgment\/\" class=\"broken_link\" rel=\"noopener\">garbage in is garbage out<\/a>\u2019<\/em>. 
While incorporating ethical behaviour in mathematical models is a daunting task, reducing bias in data can be achieved more easily using\u00a0<a href=\"https:\/\/medium.com\/dataseries\/seven-guidelines-to-ensure-ethical-ai-9c2422cdc744\" class=\"broken_link\" rel=\"noopener\">data governance<\/a>.\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-a71fe40 elementor-widget elementor-widget-heading\" data-id=\"a71fe40\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"heading.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t<h3 class=\"elementor-heading-title elementor-size-default\"><h3><strong>The Theoretical Concept of Coherent Extrapolated Volition<\/strong><\/h3>\n<\/h3>\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-e67c5cc elementor-widget elementor-widget-text-editor\" data-id=\"e67c5cc\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\tHigh-quality, unbiased data, combined with the right processes to ensure ethical behaviour within a digital environment, could significantly contribute to AI that can behave ethically. Of course, from a technical standpoint, ethics is more than just usage of high-quality, unbiased data and having the right governance processes in place. 
It includes instilling AI with the right ethical values that are flexible enough to change over time.\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-3f022cf elementor-widget elementor-widget-text-editor\" data-id=\"3f022cf\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\tTo achieve this, we need to consider the morals and values that have not yet developed and remove those that might be wrong. To understand how difficult this is, let\u2019s see how Nick Bostrom \u2013 Professor in the Faculty of Philosophy at Oxford University and founding Director of the Future of Humanity Institute \u2013 and Eliezer Yudkowsky \u2013 an artificial intelligence theorist concerned with self-improving AIs \u2013 describe achieving ethical AI [7]:\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-c40df23 elementor-widget elementor-widget-text-editor\" data-id=\"c40df23\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\t<blockquote>The theoretical concept of coherent extrapolated volition (CEV) is our best option to instil the values of mankind in AI. 
CEV is how we would build AI if \u2018we knew more, thought faster, were more the people we wished we were, had grown up farther together; where the extrapolation converges rather than diverges, where our wishes cohere rather than interfere; extrapolated as we wish that extrapolated, interpreted as we wish that interpreted\u2019.<\/blockquote>\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-7f189b0 elementor-widget elementor-widget-text-editor\" data-id=\"7f189b0\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\tAs may be evident from CEV, achieving ethical AI is a highly challenging task that requires special attention if we wish to build Responsible AI. Those stakeholders involved in developing advanced AI should play a key role in achieving AI ethics.\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-ccf718e elementor-widget elementor-widget-heading\" data-id=\"ccf718e\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"heading.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t<h2 class=\"elementor-heading-title elementor-size-default\"><h2><strong>Final Words<\/strong><\/h2><\/h2>\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-5395e5c elementor-widget elementor-widget-text-editor\" data-id=\"5395e5c\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\tAI is going to become a very important aspect of\u00a0<a href=\"https:\/\/amzn.to\/31mL436\" rel=\"noopener\">the organisation of tomorrow<\/a>. Although it is unclear what AI will bring us in the future, it is safe to say that there will be many more missteps before we manage to build Responsible AI. 
Such is the nature of humans.\n\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-cb7188d elementor-widget elementor-widget-text-editor\" data-id=\"cb7188d\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\tMachine learning has huge risks, and although extensive testing and governance processes are required, not all organisations will implement them, for various reasons. Those organisations that implement the right stakeholder management to determine whether AI is on track, and that can pull or tighten the parameters around AI when it is not, will stand the best chance of benefiting from AI. However, as a society, we should ensure that all organisations \u2013 and governments \u2013 will adhere to using unbiased data, to minimising the influence of biased developers, to having a mixed data approach to include the context, to developing AI that can explain itself and to instilling ethics into AI.\n\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-2526d54 elementor-widget elementor-widget-text-editor\" data-id=\"2526d54\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\tIn the end, AI can bring a lot of advantages to organisations, but it requires the right regulation and control methods to prevent bad actors from creating bad AI and to prevent well-intentioned AI from going rogue. 
A daunting task, but one we cannot ignore.\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-adaee88 elementor-widget elementor-widget-text-editor\" data-id=\"adaee88\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\t<em>This is an edited excerpt from my latest book. If you want to read more about how you can ensure ethical AI in your organisation, you can read the book\u00a0<a href=\"https:\/\/amzn.to\/31mL436\" rel=\"noopener\">The Organisation of Tomorrow<\/a>.<\/em>\n\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-3ea0836 elementor-widget elementor-widget-text-editor\" data-id=\"3ea0836\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\t<strong>References<\/strong>\n\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-8ca7d23 elementor-widget elementor-widget-text-editor\" data-id=\"8ca7d23\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\t[1] Yudkowsky, E.,\u00a0<em>Artificial intelligence as a positive and negative factor in global risk.<\/em>\u00a0Global catastrophic risks, 2008.\u00a0<strong>1<\/strong>: p. 303.\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-906da44 elementor-widget elementor-widget-text-editor\" data-id=\"906da44\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\t[2] Van Lent, M., W. Fisher, and M. 
Mancuso.\u00a0<em>An explainable artificial intelligence system for small-unit tactical behavior<\/em>. in\u00a0<em>The 19th National Conference on Artificial Intelligence<\/em>. 2004. San Jose: AAAI.\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-24775e7 elementor-widget elementor-widget-text-editor\" data-id=\"24775e7\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\t[3] Bostrom, N. and E. Yudkowsky,\u00a0<em>The ethics of artificial intelligence.<\/em>\u00a0The Cambridge Handbook of Artificial Intelligence, 2014: p. 316-334.\n\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-305ad8d elementor-widget elementor-widget-text-editor\" data-id=\"305ad8d\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\t[4] Hurtado, M.,\u00a0<em>The Ethics of Super Intelligence.<\/em>\u00a0International Journal of Swarm Intelligence and Evolutionary Computation, 2016.\u00a0<strong>2016<\/strong>.\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-8933113 elementor-widget elementor-widget-text-editor\" data-id=\"8933113\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\t[5] Anderson, M. and S.L. Anderson,\u00a0<em>Machine ethics<\/em>. 
2011: Cambridge University Press.\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-4e0bffe elementor-widget elementor-widget-text-editor\" data-id=\"4e0bffe\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\t[6] Bostrom, N. and E. Yudkowsky,\u00a0<em>The ethics of artificial intelligence.<\/em>\u00a0The Cambridge Handbook of Artificial Intelligence, 2014: p. 316-334.\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-e77eb84 elementor-widget elementor-widget-text-editor\" data-id=\"e77eb84\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\t[7] Bostrom, N.,\u00a0<em>Superintelligence: Paths, dangers, strategies<\/em>. 2014: OUP Oxford.\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t\t<\/div>\n\t\t<\/div>\n\t\t\t\t\t<\/div>\n\t\t<\/section>\n\t\t\t\t<\/div>\n\t\t","protected":false},"excerpt":{"rendered":"<p>For simplicity, let\u2019s assume there are three customers (c1, c2, c3) in this batch, and one vehicle (v1) information is provided as a sale. P(C=c1) represents the likelihood of c1 to buy any car. 
Assuming no prior knowledge about each customer, their likelihood of buying any car should be the same: P(C=c1) = P(C=c2) =<\/p>\n","protected":false},"author":550,"featured_media":3124,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"content-type":"","footnotes":""},"categories":[183],"tags":[97],"ppma_author":[3218],"class_list":["post-2151","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-ai-ml","tag-artificial-intelligence"],"authors":[{"term_id":3218,"user_id":550,"is_guest":0,"slug":"mark-van-rijmenam","display_name":"Mark Rijmenam","avatar_url":"https:\/\/www.experfy.com\/blog\/wp-content\/uploads\/2020\/04\/medium_6e38fd10-e332-41e3-98b2-8874278fc272-150x150.jpg","user_url":"https:\/\/datafloq.com\/","last_name":"Rijmenam","first_name":"Mark","job_title":"","description":"Dr Mark van Rijmenam is the founder of\u00a0<a href=\"https:\/\/datafloq.com\/\">Datafloq<\/a>\u00a0and\u00a0<a href=\"https:\/\/imagjn.com\/?utm_source=datafloq&amp;utm_medium=ref&amp;utm_campaign=datafloq\" target=\"_blank\" rel=\"noopener\">Imagjn<\/a>.\u00a0He is a globally recognised speaker on big data, blockchain and AI, a strategist, influencer and author of\u00a0<a href=\"https:\/\/www.amazon.com\/s?utm_source=datafloq&amp;utm_medium=ref&amp;utm_campaign=datafloq&amp;i=stripbooks&amp;rh=p_27%3AVan+Rijmenam%2C+Mark&amp;s=relevancerank&amp;text=Van+Rijmenam%2C+Mark&amp;ref=dp_byline_sr_book_1\" target=\"_blank\" rel=\"noopener\">3 management books<\/a>. His last book,\u00a0<a href=\"https:\/\/vanrijmenam.nl\/the-organisation-of-tomorrow\/?utm_source=datafloq&amp;utm_medium=ref&amp;utm_campaign=datafloq\" target=\"_blank\" rel=\"noopener\">The Organisation of Tomorrow<\/a>, discusses how AI, blockchain and analytics turn every business into a data organisation. 
He has been named a global thought leader on\u00a0<a href=\"https:\/\/onalytica.com\/blog\/posts\/big-data-2016-top-100-influencers-and-brands\/?utm_source=datafloq&amp;utm_medium=ref&amp;utm_campaign=datafloq\" target=\"_blank\" rel=\"noopener\">big data<\/a>,\u00a0<a href=\"https:\/\/www.thinkers360.com\/top-50-global-thought-leaders-and-influencers-on-blockchain-november-2019\/\">blockchain<\/a>\u00a0and\u00a0<a href=\"https:\/\/www.thinkers360.com\/top-20-global-thought-leaders-and-influencers-on-artificial-intelligence-september-2019\/\">artificial intelligence<\/a>. He holds a PhD in Management from the University of Technology, Sydney. He is a strategic advisor to several blockchain startups and publisher of\u00a0<a href=\"https:\/\/vanrijmenam.nl\/subscribe-to-newsletter\/?utm_source=datafloq&amp;utm_medium=ref&amp;utm_campaign=datafloq\" target=\"_blank\" rel=\"noopener\">the \u2018f(x) = ex\u2018 newsletter<\/a>."}],"_links":{"self":[{"href":"https:\/\/www.experfy.com\/blog\/wp-json\/wp\/v2\/posts\/2151","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.experfy.com\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.experfy.com\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.experfy.com\/blog\/wp-json\/wp\/v2\/users\/550"}],"replies":[{"embeddable":true,"href":"https:\/\/www.experfy.com\/blog\/wp-json\/wp\/v2\/comments?post=2151"}],"version-history":[{"count":5,"href":"https:\/\/www.experfy.com\/blog\/wp-json\/wp\/v2\/posts\/2151\/revisions"}],"predecessor-version":[{"id":35858,"href":"https:\/\/www.experfy.com\/blog\/wp-json\/wp\/v2\/posts\/2151\/revisions\/35858"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.experfy.com\/blog\/wp-json\/wp\/v2\/media\/3124"}],"wp:attachment":[{"href":"https:\/\/www.experfy.com\/blog\/wp-json\/wp\/v2\/media?parent=2151"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.experfy.com\/blog\/wp-json\/wp\/v2\/c
ategories?post=2151"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.experfy.com\/blog\/wp-json\/wp\/v2\/tags?post=2151"},{"taxonomy":"author","embeddable":true,"href":"https:\/\/www.experfy.com\/blog\/wp-json\/wp\/v2\/ppma_author?post=2151"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}