{"id":22687,"date":"2021-03-16T04:05:00","date_gmt":"2021-03-16T04:05:00","guid":{"rendered":"https:\/\/www.experfy.com\/blog\/explainable-ai-do-we-trust-ai-enough-to-make-decisions-for-us\/"},"modified":"2023-08-29T13:12:18","modified_gmt":"2023-08-29T13:12:18","slug":"explainable-ai-do-we-trust-ai-enough-to-make-decisions-for-us","status":"publish","type":"post","link":"https:\/\/www.experfy.com\/blog\/ai-ml\/explainable-ai-do-we-trust-ai-enough-to-make-decisions-for-us\/","title":{"rendered":"Explainable AI: Do We Trust AI Enough To Make Decisions For Us?"},"content":{"rendered":"\t\t<div data-elementor-type=\"wp-post\" data-elementor-id=\"22687\" class=\"elementor elementor-22687\" data-elementor-post-type=\"post\">\n\t\t\t\t\t\t<section class=\"has_eae_slider elementor-section elementor-top-section elementor-element elementor-element-b1ea349 elementor-section-boxed elementor-section-height-default elementor-section-height-default\" data-id=\"b1ea349\" data-element_type=\"section\" data-e-type=\"section\">\n\t\t\t\t\t\t<div class=\"elementor-container elementor-column-gap-default\">\n\t\t\t\t\t<div class=\"has_eae_slider elementor-column elementor-col-100 elementor-top-column elementor-element elementor-element-aba8f12\" data-id=\"aba8f12\" data-element_type=\"column\" data-e-type=\"column\">\n\t\t\t<div class=\"elementor-widget-wrap elementor-element-populated\">\n\t\t\t\t\t\t<div class=\"elementor-element elementor-element-c59d94d elementor-widget elementor-widget-text-editor\" data-id=\"c59d94d\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\t<p>AI has permeated every aspect of our everyday lives and expanded into various industries, causing different levels of disruption. Today, AI-informed decisions are no longer confined to tech hubs and data science experts. 
<a href=\"https:\/\/www.experfy.com\/blog\/ai-ml\/what-is-explainable-artificial-intelligence\/\" target=\"_blank\" rel=\"noreferrer noopener\">Artificial intelligence<\/a> is increasingly empowering an array of stakeholders and functions, including business leaders, managers, executives, government entities and customers. But as more business decisions are influenced by AI, this raises numerous questions and requirements for greater oversight, transparency and ethics in how those decisions are made \u2014 leading to rising demand for&nbsp;explainable AI&nbsp;or <a href=\"https:\/\/read.hyperight.com\/ai-predictions-2021-frontiers-where-ai-will-reinforce-its-presence\/\" target=\"_blank\" rel=\"noreferrer noopener\">XAI<\/a>.<\/p>\n<p>After all, if we entrust AI with our life and business decisions, we have to be sure its reasoning is accountable and ethical. Businesses have to be able to explain the logic behind their decisions because those decisions directly influence company revenue. The demand for AI explainability has been emphasised even more after a surge of criticism over numerous critical failures like bias in recruiting and credit scoring, racial discrimination in facial recognition software, unfair criminal risk assessment and autonomous cars involved in accidents.<\/p>\n\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-4a37708 elementor-widget elementor-widget-heading\" data-id=\"4a37708\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"heading.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t<h2 class=\"elementor-heading-title elementor-size-default\">The black box problem \u2013 Can it be turned into a glass box?<\/h2>\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-3d643db elementor-widget elementor-widget-text-editor\" data-id=\"3d643db\" data-element_type=\"widget\" data-e-type=\"widget\" 
data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\t<p>While discussion about explainable AI is not new and goes back several decades, it reemerged with greater intensity in 2019 with&nbsp;<a href=\"https:\/\/towardsdatascience.com\/is-explainable-ai-xai-the-next-step-or-just-hype-b3d4c3768c62\" target=\"_blank\" rel=\"noreferrer noopener\" class=\"broken_link\">Google\u2019s announcement of its new set of XAI tools for developers<\/a>. AI models have presented, and for some people still present, a \u201c<a href=\"https:\/\/medium.com\/pgs-software\/ai-explainability-what-if-the-computer-says-no-3e94220a5b53\" target=\"_blank\" rel=\"noreferrer noopener\" class=\"broken_link\">black box<\/a>\u201d that relies on millions or billions of complex, interwoven parameters in order to deliver outcomes that we should trust and act upon, even though they may seem wrong or counter-intuitive at first. A good example of the \u201cblack box\u201d is deep learning models or neural networks. They are trained on large datasets and can output highly accurate predictions, but they remain incomprehensible to humans, who can\u2019t grasp the complex internal workings, features and data representations that the models use to deliver outcomes.<\/p>\n<p>These outcomes may have a far-reaching and intense impact, leading to louder demand for XAI. However, experts note that some models like decision trees and Bayesian classifiers are easier to interpret than deep learning models used in image recognition and NLP. It\u2019s also important to mention that there\u2019s&nbsp;<a href=\"https:\/\/www.campaignasia.com\/article\/explainable-ai-turning-the-black-box-into-a-glass-box\/466224\" target=\"_blank\" rel=\"noreferrer noopener\">a trade-off between accuracy and explainability<\/a>, as not all bias is negative; some bias can be leveraged to make more accurate predictions. 
Fortunately, explainable AI can help us understand if a model uses good bias or bad bias to make a decision, and which factors are essential to that decision.<\/p>\n<p>\u201cThere is a lot of talk about AI in the light of ethics, accountability, explainability, which was previously the domain of the humanities in academia. It\u2019s a bit of a novelty for the tech community to be so heavily focused on ethics. But AI, as a label for so many technologies, is so transformative, that we need to give thought to ethical implications of it,\u201d stated&nbsp;<a href=\"https:\/\/www.youtube.com\/watch?v=QEDN6RvYBgM\" target=\"_blank\" rel=\"noreferrer noopener\"><strong>Patrick Couch<\/strong><\/a>, Business Developer AI &amp; IoT at IBM, during a panel on&nbsp;<strong><em>How to Build Human Centered and Explainable AI Models and Products<\/em><\/strong>, at the Data Innovation Summit 2020.<\/p>\n\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-73704fc elementor-widget elementor-widget-video\" data-id=\"73704fc\" data-element_type=\"widget\" data-e-type=\"widget\" data-settings=\"{&quot;youtube_url&quot;:&quot;https:\\\/\\\/youtu.be\\\/QEDN6RvYBgM&quot;,&quot;video_type&quot;:&quot;youtube&quot;,&quot;controls&quot;:&quot;yes&quot;}\" data-widget_type=\"video.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t<div class=\"elementor-wrapper elementor-open-inline\">\n\t\t\t<div class=\"elementor-video\"><\/div>\t\t<\/div>\n\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-6a96a30 elementor-widget elementor-widget-text-editor\" data-id=\"6a96a30\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\t<p>When it comes to human-centric AI, the imperative is to make sure the powerful capabilities that are promoted are also understood by 
the users, to be sure that the technology serves the right purpose, added Patrick. Explainability and ethics are challenges for AI, and they are related to the data required to get the magic out of the technology, he explained.<\/p><p>\u201cOver the years, we\u2019ve seen a tremendous amount of funny, weird, sad, tragic examples of AI applications gone wrong,\u201d said Patrick. When IBM was faced with the challenge of bias in facial recognition software, they immediately jumped to solving it by acknowledging and mitigating the bias in data sets for their AI capability.<\/p>\n<p>AI\u2019s ability to not only make predictions, but explain why it made them is especially important in healthcare, where a wrong prediction can cost a human life. \u201cBeing able to explain why you propose a certain medication or treatment is key in medical care,\u201d emphasised&nbsp;<strong>Stefan Vlachos<\/strong>, Head of the Center for Innovation at Karolinska University Hospital and Board Member of \u201cThe Innovation Leaders\u201d in his&nbsp;<a href=\"https:\/\/www.youtube.com\/watch?v=whRB7l3-0Ws\" target=\"_blank\" rel=\"noreferrer noopener\">AIAW Podcast discussion<\/a>. 
\u201cIf we are to build trust between man and machine, you have to be able to backtrack AI\u2019s suggestions and question how it got to them,\u201d he added.<\/p>\n\n\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-183be12 elementor-widget elementor-widget-image\" data-id=\"183be12\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"image.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\t\t\t\t<figure class=\"wp-caption\">\n\t\t\t\t\t\t\t\t\t\t<img fetchpriority=\"high\" decoding=\"async\" width=\"1000\" height=\"1000\" src=\"https:\/\/www.experfy.com\/blog\/wp-content\/uploads\/2021\/05\/shutterstock_1156132657.jpg\" class=\"attachment-large size-large wp-image-18944\" alt=\"Explainable AI: Do we trust AI enough to make decisions for us?\" srcset=\"https:\/\/www.experfy.com\/blog\/wp-content\/uploads\/2021\/05\/shutterstock_1156132657.jpg 1000w, https:\/\/www.experfy.com\/blog\/wp-content\/uploads\/2021\/05\/shutterstock_1156132657-300x300.jpg 300w, https:\/\/www.experfy.com\/blog\/wp-content\/uploads\/2021\/05\/shutterstock_1156132657-150x150.jpg 150w, https:\/\/www.experfy.com\/blog\/wp-content\/uploads\/2021\/05\/shutterstock_1156132657-768x768.jpg 768w, https:\/\/www.experfy.com\/blog\/wp-content\/uploads\/2021\/05\/shutterstock_1156132657-610x610.jpg 610w, https:\/\/www.experfy.com\/blog\/wp-content\/uploads\/2021\/05\/shutterstock_1156132657-75x75.jpg 75w, https:\/\/www.experfy.com\/blog\/wp-content\/uploads\/2021\/05\/shutterstock_1156132657-750x750.jpg 750w\" sizes=\"(max-width: 1000px) 100vw, 1000px\" \/>\t\t\t\t\t\t\t\t\t\t\t<figcaption class=\"widget-image-caption wp-caption-text\">Image by&nbsp;<a href=\"https:\/\/www.shutterstock.com\/g\/Connect_world\" target=\"_blank\" rel=\"noopener\">Connect world<\/a>\/<a href=\"https:\/\/www.shutterstock.com\/home\" target=\"_blank\" 
rel=\"noopener\">Shutterstock.com<\/a><\/figcaption>\n\t\t\t\t\t\t\t\t\t\t<\/figure>\n\t\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-4fe0b4b elementor-widget elementor-widget-heading\" data-id=\"4fe0b4b\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"heading.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t<h2 class=\"elementor-heading-title elementor-size-default\">Trust built on explainability, not understandability<\/h2>\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-2e032b5 elementor-widget elementor-widget-text-editor\" data-id=\"2e032b5\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\t<p>When it comes to AI in medical diagnosis, experts hold it to higher standards: they demand that it explain the reasons for its decisions, ask to see how the model works, and look for insight and introspection into the parameters and into how the model came up with the solution. Yet if the roles were reversed and a human doctor gave the same diagnosis, people wouldn\u2019t demand to backtrack the decision and look into the doctor\u2019s neurons to see how they came to that conclusion, observed&nbsp;<strong>Anders Arpteg<\/strong>, Head of Research at Peltarion.<\/p><p>We often hear people say, \u2018I don\u2019t trust AI models because I can\u2019t understand them. 
This is a scary statement because it\u2019s like saying to a person: \u2018I don\u2019t trust you because I don\u2019t understand you\u2019 instead of saying \u2018If you explain your decision, I would trust you even though I don\u2019t understand you,\u2019 stated Anders.<\/p>\n\n\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-847ab66 elementor-widget elementor-widget-image\" data-id=\"847ab66\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"image.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\t\t\t\t<figure class=\"wp-caption\">\n\t\t\t\t\t\t\t\t\t\t<img decoding=\"async\" width=\"1000\" height=\"417\" src=\"https:\/\/www.experfy.com\/blog\/wp-content\/uploads\/2021\/05\/shutterstock_1218220324.jpg\" class=\"attachment-large size-large wp-image-18945\" alt=\"Everything Possible\" srcset=\"https:\/\/www.experfy.com\/blog\/wp-content\/uploads\/2021\/05\/shutterstock_1218220324.jpg 1000w, https:\/\/www.experfy.com\/blog\/wp-content\/uploads\/2021\/05\/shutterstock_1218220324-300x125.jpg 300w, https:\/\/www.experfy.com\/blog\/wp-content\/uploads\/2021\/05\/shutterstock_1218220324-768x320.jpg 768w, https:\/\/www.experfy.com\/blog\/wp-content\/uploads\/2021\/05\/shutterstock_1218220324-610x254.jpg 610w, https:\/\/www.experfy.com\/blog\/wp-content\/uploads\/2021\/05\/shutterstock_1218220324-750x313.jpg 750w\" sizes=\"(max-width: 1000px) 100vw, 1000px\" \/>\t\t\t\t\t\t\t\t\t\t\t<figcaption class=\"widget-image-caption wp-caption-text\">Image by <a href=\"https:\/\/www.shutterstock.com\/g\/everythingpossible\" target=\"_blank\" rel=\"noopener\">everything possible<\/a>\/<a href=\"https:\/\/www.shutterstock.com\/home\" target=\"_blank\" rel=\"noopener\">Shutterstock.com<\/a><\/figcaption>\n\t\t\t\t\t\t\t\t\t\t<\/figure>\n\t\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-678f048 elementor-widget 
elementor-widget-text-editor\" data-id=\"678f048\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\t<p>The AI explainability question comes down to what it means to be fundamentally human and how we build trust as humans. Does building trust in AI mean understanding all models and their parameters? Considering AI\u2019s expansion into all spheres of life and the different people from innumerable backgrounds and fields it touches, that would be impossible.<\/p>\n<p>\u201cWhen we talk about explainable AI, it does not have to be about understanding the intricacies of the entire model but understanding what factors can influence the output of that model. There is a significant difference between understanding how a model works and understanding why it gives a particular result,\u201d Dr Shou-De Lin, Chief Machine Learning Scientist at Appier, told Campaign Asia.<\/p>\n<p>\u201cI think you need to be accustomed to it. Just like any relationship, you need to work together for a while and get to know each other to see that you are on the same path,\u201d asserted Stefan Vlachos.<\/p>\n<p>\u201cTrust is built because of explainability, but not because of understandability \u2013 these two are very different things,\u201d emphasised Anders Arpteg.<\/p><p>This was the point Cassie Kozyrkov, Chief Decision Scientist at Google, also made in her article on&nbsp;<a href=\"https:\/\/medium.com\/hackernoon\/explainable-ai-wont-deliver-here-s-why-6738f54216be\" target=\"_blank\" rel=\"noreferrer noopener\" class=\"broken_link\">why Explainable AI won\u2019t deliver<\/a>. 
What Cassie has so well explained is that we can\u2019t expect to have a simple answer to&nbsp;<em>how&nbsp;<\/em>an AI model made the decision, because it was built to solve a complex problem with a complex solution \u2013 a solution that is so entangled that it eludes the capacity of our human mind.<\/p><p>As Cassie brilliantly explained, \u201cAI is all about automating the ineffable, but don\u2019t expect the ineffable to be easy to wrap your head around.\u201d She is by no means saying that interpretability, transparency, and explainability aren\u2019t important. Rather, their place is in analytics, she added. It all comes down to the purpose of the AI model: research or business goal.<\/p>\n<p>\u201cMuch of the confusion comes from not knowing which AI business you\u2019re in. Arguments that are appropriate for researchers (the mechanics of how something works \u2013 interpretability) make little sense for those who apply AI (performance),\u201d explains Cassie.<\/p>\n<p>We again come to the trade-off, which in Cassie\u2019s example is between&nbsp;<em>interpretability&nbsp;<\/em>and&nbsp;<em>performance<\/em>. When the model is so simple that we can understand it, it can\u2019t solve complex problems. But if we need a model to solve a really complex task, we shouldn\u2019t limit it to only what our minds can wrap around. So how can we trust that our complex models are working? By carefully testing our system, making sure that it works as it is supposed to \u2014 this is how we gain trust in it, adds Cassie.<\/p>\n<p>Cassie also points out that we are&nbsp;<strong><em>holding AI to superhuman standards<\/em><\/strong>. \u201cIf you require an interpretation of how a person came to a decision at the model level, you should only be satisfied with an answer in terms of electrical signals and neurotransmitters moving from brain cell to brain cell. 
Do any of your friends describe the reason they ordered coffee instead of tea in terms of chemicals and synapses? Of course not,\u201d she explains.<\/p>\n<p>Just like humans make up an oversimplified reason that fits their inputs and outputs (decisions), we can have the same level of model explainability in the input and output data, and this is where analytics comes into play. \u201c<strong><em>Explainability provides a cartoon sketch of a why, but it doesn\u2019t provide the how of decision-making<\/em><\/strong>,\u201d adds Cassie.<\/p>\n<p>Undeniably, with the advancement of AI, humanity has turned a new leaf of complicated solutions that are beyond our understanding, and as Stefan Vlachos stated above, and as Cassie also contends, it\u2019s a reality we should get accustomed to.<\/p>\n\n\n\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-846ced7 elementor-widget elementor-widget-image\" data-id=\"846ced7\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"image.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\t\t\t\t<figure class=\"wp-caption\">\n\t\t\t\t\t\t\t\t\t\t<img decoding=\"async\" width=\"1000\" height=\"563\" src=\"https:\/\/www.experfy.com\/blog\/wp-content\/uploads\/2021\/05\/ai-5202865_1920.jpg\" class=\"attachment-large size-large wp-image-18946\" alt=\"Explainable AI: Do we trust AI enough to make decisions for us?\" srcset=\"https:\/\/www.experfy.com\/blog\/wp-content\/uploads\/2021\/05\/ai-5202865_1920.jpg 1000w, https:\/\/www.experfy.com\/blog\/wp-content\/uploads\/2021\/05\/ai-5202865_1920-300x169.jpg 300w, https:\/\/www.experfy.com\/blog\/wp-content\/uploads\/2021\/05\/ai-5202865_1920-768x432.jpg 768w, https:\/\/www.experfy.com\/blog\/wp-content\/uploads\/2021\/05\/ai-5202865_1920-610x343.jpg 610w, https:\/\/www.experfy.com\/blog\/wp-content\/uploads\/2021\/05\/ai-5202865_1920-750x422.jpg 750w\" sizes=\"(max-width: 1000px) 100vw, 1000px\" 
\/>\t\t\t\t\t\t\t\t\t\t\t<figcaption class=\"widget-image-caption wp-caption-text\">Image by <a href=\"https:\/\/pixabay.com\/pl\/users\/d5000-16677078\/?utm_source=link-attribution&amp;utm_medium=referral&amp;utm_campaign=image&amp;utm_content=5202865\" target=\"_blank\" class=\"broken_link\" rel=\"noopener\">Peter Pieras<\/a>&nbsp;from&nbsp;<a href=\"https:\/\/pixabay.com\/pl\/?utm_source=link-attribution&amp;utm_medium=referral&amp;utm_campaign=image&amp;utm_content=5202865\" target=\"_blank\" class=\"broken_link\" rel=\"noopener\">Pixabay<\/a><\/figcaption>\n\t\t\t\t\t\t\t\t\t\t<\/figure>\n\t\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-cfd2df3 elementor-widget elementor-widget-heading\" data-id=\"cfd2df3\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"heading.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t<h2 class=\"elementor-heading-title elementor-size-default\">Are we holding AI to superhuman standards?<\/h2>\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-fab7ba4 elementor-widget elementor-widget-text-editor\" data-id=\"fab7ba4\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\t<p>Whether we already trust AI, or we need a \u201ctrial period\u201d to get accustomed to it and start trusting it, there\u2019s no doubt that XAI is the direction that should guide AI development.<\/p>\n<p>Technology is quickly catching up with the increased demand for explainable AI and we are seeing different solutions that researchers are proposing. There is already a new wave of AI explainability that introduces an attribution-based type of explainability that can be used for text, tabular and time-series data. 
However, the latest type that is being created is more of a generative kind, revealed Anders Arpteg.<\/p><p>Instead of attributing the decision to the input data, this type of explainable AI enables the model to explain itself in natural language (English or Swedish), describing why it recommends a certain action and referencing the text it used as input. With the latest breakthrough, we can just ask a question and get an answer from AI explaining itself in natural language.<\/p>\n<p>Another way of building more explainable models is by using proxy models: simpler, more explainable models that mimic the behaviour of deep learning models, suggests Dr Shou-De Lin. Alternatively, he proposes making models explainable by design, using fewer parameters in neural networks to deliver similar accuracy with less complexity, therefore making the model more explainable.<\/p>\n<p>Still others question whether, if humans themselves are not good at explaining their decisions and would fail the explainability test, we can set&nbsp;<a href=\"https:\/\/singularityhub.com\/2019\/03\/19\/to-be-ethical-ai-must-become-explainable-how-do-we-get-there\/\" target=\"_blank\" rel=\"noreferrer noopener\">explainability as a standard for AI<\/a>. But at least we humans have the ability to do so, and we try our best to explain our choices, unlike deep learning models, which can\u2019t do this yet. 
Therefore, experts suggest that the direction deep learning, and AI in general, should head in is towards being able to identify which input data triggers a system\u2019s decisions, however imperfect these explanations may be.<\/p>\n\n\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t\t<\/div>\n\t\t<\/div>\n\t\t\t\t\t<\/div>\n\t\t<\/section>\n\t\t\t\t<\/div>\n\t\t","protected":false},"excerpt":{"rendered":"<p>As more business decisions are influenced by AI, it raises numerous questions and requirements for greater oversight, transparency and ethics in how those decisions are made, leading to rising demand for explainable AI.<\/p>\n","protected":false},"author":950,"featured_media":18947,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"content-type":"","footnotes":""},"categories":[183],"tags":[476,97,1419,1420,1421],"ppma_author":[3720],"class_list":["post-22687","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-ai-ml","tag-ai-model","tag-artificial-intelligence","tag-explainable-ai","tag-transparency","tag-xai"],"authors":[{"term_id":3720,"user_id":950,"is_guest":0,"slug":"ivana-kotorchevikj","display_name":"Ivana Kotorchevikj","avatar_url":"https:\/\/www.experfy.com\/blog\/wp-content\/uploads\/2020\/10\/Ivana-Kotorchevikj-150x150.jpg","user_url":"http:\/\/www.hyperight.com","last_name":"Kotorchevikj","first_name":"Ivana","job_title":"","description":"Ivana Kotorchevikj is Chief Editor, Hyperight Digital at Hyperight AB, an international event service provider focusing on creating network oriented and crowdsourced business 
events."}],"_links":{"self":[{"href":"https:\/\/www.experfy.com\/blog\/wp-json\/wp\/v2\/posts\/22687","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.experfy.com\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.experfy.com\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.experfy.com\/blog\/wp-json\/wp\/v2\/users\/950"}],"replies":[{"embeddable":true,"href":"https:\/\/www.experfy.com\/blog\/wp-json\/wp\/v2\/comments?post=22687"}],"version-history":[{"count":4,"href":"https:\/\/www.experfy.com\/blog\/wp-json\/wp\/v2\/posts\/22687\/revisions"}],"predecessor-version":[{"id":31892,"href":"https:\/\/www.experfy.com\/blog\/wp-json\/wp\/v2\/posts\/22687\/revisions\/31892"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.experfy.com\/blog\/wp-json\/wp\/v2\/media\/18947"}],"wp:attachment":[{"href":"https:\/\/www.experfy.com\/blog\/wp-json\/wp\/v2\/media?parent=22687"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.experfy.com\/blog\/wp-json\/wp\/v2\/categories?post=22687"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.experfy.com\/blog\/wp-json\/wp\/v2\/tags?post=22687"},{"taxonomy":"author","embeddable":true,"href":"https:\/\/www.experfy.com\/blog\/wp-json\/wp\/v2\/ppma_author?post=22687"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}