{"id":1411,"date":"2019-02-15T10:32:08","date_gmt":"2019-02-15T10:32:08","guid":{"rendered":"http:\/\/kusuaks7\/?p=1016"},"modified":"2023-06-28T16:50:46","modified_gmt":"2023-06-28T16:50:46","slug":"catch-me-if-you-can-a-simple-english-explanation-of-gans-or-dueling-neural-nets","status":"publish","type":"post","link":"https:\/\/www.experfy.com\/blog\/ai-ml\/catch-me-if-you-can-a-simple-english-explanation-of-gans-or-dueling-neural-nets\/","title":{"rendered":"Catch me if you can: A simple english explanation of GANs or Dueling neural-nets"},"content":{"rendered":"<p><strong><em>Ready to learn Artificial Intelligence? <a href=\"https:\/\/www.experfy.com\/training\/courses\">Browse courses<\/a>\u00a0like\u00a0 <a href=\"https:\/\/www.experfy.com\/training\/courses\/uncertain-knowledge-and-reasoning-in-artificial-intelligence\">Uncertain Knowledge and Reasoning in Artificial Intelligence<\/a> developed by industry thought leaders and Experfy in Harvard Innovation Lab.<\/em><\/strong><\/p>\n<section>\n<blockquote id=\"50f7\"><p>Deep learning GANs is one the biggest breakthrough technologies of 2018, as per MIT Technology Review\u2019s\u00a0<a href=\"https:\/\/www.technologyreview.com\/lists\/technologies\/2018\/\" target=\"_blank\" rel=\"noopener noreferrer\" data-href=\"https:\/\/www.technologyreview.com\/lists\/technologies\/2018\/\" data->annual list<\/a>\u00a0of top 10\u00a0tech<\/p><\/blockquote>\n<p id=\"0b5a\"><strong><em>\u201cPractice makes perfect\u201d<\/em><\/strong><\/p>\n<p id=\"024e\">I\u2019m not so sure about humans, but anyone working on machine learning will agree that practice, or\u00a0<em>quality training data<\/em>\u00a0makes machines perfect. Well, almost.. but definitely by a huge margin, than us mortals.<\/p>\n<p id=\"9666\">A perfect AI implementation in any field is indistinguishable from magic. 
But the trouble is, as machines start learning, their hunger for data is insatiable\u2026 like\u00a0<a href=\"https:\/\/en.wikipedia.org\/wiki\/Tantalus\" target=\"_blank\" rel=\"noopener noreferrer\" data-href=\"https:\/\/en.wikipedia.org\/wiki\/Tantalus\" data->Tantalus<\/a>\u00a0from Greek myth, whose thirst &amp; hunger could never be satisfied.<\/p>\n<p id=\"2a20\">A data scientist\u2019s days are spent in acquiring (and cleaning) more and more data to feed machines. And their nights are lost in teaching machines to learn from all this data, by training models over and over again.<\/p>\n<p id=\"1c7c\">Severe shortcomings in both \u2018data\u2019 and \u2018training\u2019 take the A &amp; I out of m<strong>A<\/strong>g<strong>I<\/strong>c, making it meaningless. This is the biggest bottleneck for AI\u2019s progress today.<\/p>\n<p id=\"f467\">But wait.. if machines could take up any human task, why not this one too?\u00a0<strong>Can we make machines learn-to-teach themselves?<\/strong>\u00a0No, this is not a play of words. And.. Yes, this is doable.<\/p>\n<figure id=\"c61c\"><canvas width=\"75\" height=\"40\"><\/canvas><img decoding=\"async\" src=\"https:\/\/cdn-images-1.medium.com\/max\/640\/1*q5Ee_uBSdgB-BQeM1MyFvw.jpeg\" data-src=\"https:\/\/cdn-images-1.medium.com\/max\/640\/1*q5Ee_uBSdgB-BQeM1MyFvw.jpeg\" \/><\/figure>\n<p style=\"text-align: center;\">(Pic source:\u00a0<a href=\"https:\/\/www.flickr.com\/photos\/stavos52093\/\" target=\"_blank\" rel=\"noopener noreferrer\" data-href=\"https:\/\/www.flickr.com\/photos\/stavos52093\/\" data->stavos<\/a>\u00a0on\u00a0Flickr)<\/p>\n<h4 id=\"c5b8\">Duelling neural\u00a0networks<\/h4>\n<p id=\"07db\">Enter\u00a0<a href=\"https:\/\/arxiv.org\/pdf\/1406.2661.pdf\" target=\"_blank\" rel=\"noopener noreferrer\" data-href=\"https:\/\/arxiv.org\/pdf\/1406.2661.pdf\" data->GANs<\/a>.. or its complex-sounding expansion, Generative Adversarial Networks. 
If Deep learning is the next big thing that\u2019s taking the cake, GAN is the cream on that cake. The possibilities have never looked so exciting!<\/p>\n<p id=\"4a17\">But first, what is a GAN? We\u2019ll try and have the rest of this conversation in simple English, without tossing in geeky jargon. So, a strict NO to stuff like \u2018<em>probabilities<\/em>\u2019, \u2018<em>perceptrons<\/em>\u2019, \u2018<em>activation<\/em>\u2019, \u2018<em>convolution<\/em>\u2019 and other gobbledegook.<\/p>\n<p id=\"8f93\">Let me tell you a story.<\/p>\n<h4 id=\"4da7\">Setting off the perfect cat-and-mouse game<\/h4>\n<p id=\"3bba\">Imagine a quintessential movie where two estranged brothers embrace opposing philosophies in life. One starts a fresh underworld operation printing fake currencies as a\u00a0<strong><em>\u2018manipulator\u2019<\/em><\/strong>, and the other enrols in a bureau to set up a new division that detects counterfeits as an\u00a0<strong><em>\u2018enforcer\u2019<\/em><\/strong>.<\/p>\n<p id=\"2b30\">To start with, let\u2019s say that the\u00a0<em>\u2018manipulator\u2019<\/em>\u00a0in the underworld starts with the disadvantage of knowing nothing about what original currencies look like. The\u00a0<em>\u2018enforcer\u2019<\/em>\u00a0in the bureau knows just the basics of how a few real currencies look.<\/p>\n<p id=\"241e\">And then the game begins.<\/p>\n<p id=\"9c32\">The\u00a0<em>manipulator<\/em>\u00a0starts printing, but the initial fakes are terrible. It doesn\u2019t take even a trained eye to detect the counterfeits, and promptly every single one of them is caught by the\u00a0<em>enforcer<\/em>.<\/p>\n<p id=\"3710\">The\u00a0<em>manipulator<\/em>\u00a0is industrious and keeps churning out fakes, while also learning what didn\u2019t work in previous attempts. 
Through sheer volume of experimentation with fakes &amp; some feedback, the quality of the counterfeits slowly starts inching up (of course, assuming the operation is not shut down!)<\/p>\n<p id=\"ba39\">Eventually, the\u00a0<em>manipulator<\/em>\u00a0starts getting a few random counterfeits right and this goes undetected by the\u00a0<em>enforcer<\/em>. So, it\u2019s learning time on the other side, and the\u00a0<em>enforcer<\/em>\u00a0takes lessons on detecting these smarter counterfeits.<\/p>\n<p id=\"1436\">With the\u00a0<em>enforcer<\/em>\u00a0getting smarter, the counterfeits are detected again. The\u00a0<em>manipulator<\/em>\u00a0has no choice but to upgrade the counterfeiting operation to create more genuine-looking fakes.<\/p>\n<p id=\"b3d5\">This game of cat-and-mouse continues, and ends up making experts out of both the\u00a0<em>manipulator<\/em>\u00a0and\u00a0<em>enforcer<\/em>. So much so that the counterfeits become indistinguishable from the genuine ones, and the detection of such ingenious fakes becomes almost uncanny.<\/p>\n<p id=\"a156\">You get the drift. 
And this is the underlying concept of GANs.<\/p>\n<figure id=\"fbe0\"><canvas width=\"75\" height=\"56\"><\/canvas><img decoding=\"async\" style=\"width: 640px; height: 480px;\" src=\"https:\/\/cdn-images-1.medium.com\/max\/640\/1*O6-n9kCuJGa1wXzdEbPWgw.jpeg\" data-src=\"https:\/\/cdn-images-1.medium.com\/max\/640\/1*O6-n9kCuJGa1wXzdEbPWgw.jpeg\" \/><\/figure>\n<p style=\"text-align: center;\">Photos \u00a9Dreamworks<\/p>\n<h4 id=\"8213\">Generative Adversarial Network, in\u00a0context<\/h4>\n<p id=\"b426\">Let\u2019s now translate our story and actors into the context of GANs.<\/p>\n<figure id=\"0205\"><canvas width=\"75\" height=\"40\"><\/canvas><img decoding=\"async\" style=\"width: 640px; height: 364px;\" src=\"https:\/\/cdn-images-1.medium.com\/max\/640\/1*7PGODnuL2tAc9bOinhtygQ.jpeg\" data-src=\"https:\/\/cdn-images-1.medium.com\/max\/640\/1*7PGODnuL2tAc9bOinhtygQ.jpeg\" \/><\/figure>\n<p id=\"22fa\" style=\"text-align: center;\">GANs \u2014 a schematic flow with the key\u00a0players<\/p>\n<p>Both the\u00a0<em>manipulator<\/em>\u00a0and\u00a0<em>enforcer<\/em>\u00a0are models, variants of Deep learning neural networks.<\/p>\n<p id=\"68ee\">The\u00a0<em>manipulator<\/em>\u00a0is called the\u00a0<em>\u2018<\/em><strong><em>Generator network<\/em><\/strong><em>\u2019<\/em>, which is tasked with creating training data, starting randomly and getting as realistic as it can. The\u00a0<em>enforcer<\/em>\u00a0is the\u00a0<em>\u2018<\/em><strong><em>Discriminator network<\/em><\/strong><em>\u2019<\/em>, whose job is to detect and classify these as \u2018real\u2019 or \u2018fake\u2019, and become pretty good at it.<\/p>\n<p id=\"b9b8\">By pairing two models against each other as\u00a0<em>adversaries,<\/em>\u00a0we set them up for a\u00a0<em>healthy<\/em>\u00a0competition. Each tries mastering its own job across thousands of iterations, with no manual intervention. 
And voila, we end up with true-looking fakes and also a model that can detect most con-jobs.<\/p>\n<p id=\"a093\">And this is why GANs are such a masterstroke in AI: they solve both the real-world problems of\u00a0<em>generating \u2018data\u2019\u00a0<\/em>when you don\u2019t have enough to start with, and\u00a0<em>\u2018training\u2019 models\u00a0<\/em>with no manual intervention, a form of unsupervised learning.<\/p>\n<p id=\"8e7a\">At least that\u2019s where they are headed, and they are already operational. Over the past couple of years, there has been steady advancement of GANs, with hundreds of variants created and many more innovations underway.<\/p>\n<blockquote id=\"65d2\"><p>Generative Adversarial Networks is the most interesting idea in the last ten years in machine learning.\u200a\u2014\u200aYann LeCun, Director, Facebook\u00a0AI<\/p><\/blockquote>\n<h4 id=\"6523\">What\u2019s the utility of\u00a0GANs?<\/h4>\n<p id=\"ae5c\">What worldly good might come out of a perfect currency-printing machine or something conceptually similar? Apparently plenty; let\u2019s look at three broad areas.<\/p>\n<p id=\"d025\"><strong>1. Creative pursuits<\/strong><\/p>\n<p id=\"de40\">It\u2019s incredible to imagine that machines have finally unlocked their right brains. After all, who wouldn\u2019t be surprised if a nerdy programmer suddenly started penning award-winning poetry?<\/p>\n<p id=\"2f18\">With a new-found approach to mimic real images, GANs have started creating imaginary celebrities, or new masterpieces that bear the distinctive signatures of artists. 
The potential use cases for this ability span creative disciplines.<\/p>\n<figure id=\"81e5\"><canvas width=\"75\" height=\"37\"><\/canvas><img decoding=\"async\" style=\"width: 640px; height: 320px;\" src=\"https:\/\/cdn-images-1.medium.com\/max\/640\/1*l_ytbMd1ikYcZdHajiVF0A.png\" data-src=\"https:\/\/cdn-images-1.medium.com\/max\/640\/1*l_ytbMd1ikYcZdHajiVF0A.png\" \/><\/figure>\n<p style=\"text-align: center;\">Imaginary Celebrities: Nvidia GAN model generated images using the Celeb faces\u00a0<a href=\"http:\/\/mmlab.ie.cuhk.edu.hk\/projects\/CelebA.html\" target=\"_blank\" rel=\"noopener noreferrer\" data-href=\"http:\/\/mmlab.ie.cuhk.edu.hk\/projects\/CelebA.html\" data->dataset<\/a>\u00a0as reference. (<a href=\"http:\/\/research.nvidia.com\/sites\/default\/files\/pubs\/2017-10_Progressive-Growing-of\/karras2018iclr-paper.pdf\" target=\"_blank\" rel=\"noopener noreferrer\" data-href=\"http:\/\/research.nvidia.com\/sites\/default\/files\/pubs\/2017-10_Progressive-Growing-of\/karras2018iclr-paper.pdf\" data->Paper<\/a>)<\/p>\n<p id=\"3564\"><strong>2. Translating text to images<\/strong><\/p>\n<p id=\"adbe\">Suppose you want to find out how a person would look without their glasses, or with a new hairdo. You just ask to have this created. Not very different from asking for the day\u2019s weather or mapping your upcoming commute.<\/p>\n<p id=\"8d19\">By creating new flora and fauna from a user\u2019s short description, GANs have been dutifully granting such demands, just like a wish-fulfilling genie. Pity they couldn\u2019t breathe life into the creations.. 
at least not yet.<\/p>\n<figure id=\"1e93\"><canvas width=\"75\" height=\"37\"><\/canvas><img decoding=\"async\" style=\"width: 640px; height: 323px;\" src=\"https:\/\/cdn-images-1.medium.com\/max\/640\/1*26Xm_L1ElLl5UQuBEb6tng.png\" data-src=\"https:\/\/cdn-images-1.medium.com\/max\/640\/1*26Xm_L1ElLl5UQuBEb6tng.png\" \/><\/figure>\n<p style=\"text-align: center;\">Text to Image synthesis (Paper:\u00a0<a href=\"https:\/\/www.youtube.com\/redirect?q=https%3A%2F%2Farxiv.org%2Fabs%2F1612.03242&amp;redir_token=ex8VKrJg_hcrLx3weDwn99R842B8MTUyMTk1MDMzN0AxNTIxODYzOTM3&amp;v=rAbhypxs1qQ&amp;event=video_description\" target=\"_blank\" rel=\"noopener noreferrer\" data-href=\"https:\/\/www.youtube.com\/redirect?q=https%3A%2F%2Farxiv.org%2Fabs%2F1612.03242&amp;redir_token=ex8VKrJg_hcrLx3weDwn99R842B8MTUyMTk1MDMzN0AxNTIxODYzOTM3&amp;v=rAbhypxs1qQ&amp;event=video_description\" data->https:\/\/arxiv.org\/abs\/1612.03242<\/a>)<\/p>\n<p id=\"cf79\"><strong>3. Generate training data<\/strong><\/p>\n<p id=\"0729\">GANs do the heavy lifting of creating tons of training data, which can shift AI into the fast lane of progress. Imagine GANs spawning realistic 3D worlds similar to ours, with millions of miles of roads &amp; all possible traffic scenarios.<\/p>\n<p id=\"4f37\">Rather than a self-driving car or drone getting trained in the real world and causing horrendous\u00a0<a href=\"https:\/\/www.theverge.com\/2018\/3\/26\/17166326\/uber-self-driving-autonomous-vehicle-ban-arizona-fatal-crash\" target=\"_blank\" rel=\"noopener noreferrer\" data-href=\"https:\/\/www.theverge.com\/2018\/3\/26\/17166326\/uber-self-driving-autonomous-vehicle-ban-arizona-fatal-crash\" data->accidents<\/a>, it could get trained in these virtual worlds and become an expert driver. 
With GPU computing, this training can happen far faster than in the real world.<\/p>\n<p id=\"e330\">While these are directional applications, GANs have already been applied to high-impact business applications like\u00a0<a href=\"https:\/\/www.eurekalert.org\/pub_releases\/2017-06\/imi-iml053117.php\" target=\"_blank\" rel=\"noopener noreferrer\" data-href=\"https:\/\/www.eurekalert.org\/pub_releases\/2017-06\/imi-iml053117.php\" data->drug discovery<\/a>, and there are literally\u00a0<a href=\"https:\/\/github.com\/nashory\/gans-awesome-applications#face-aging\" target=\"_blank\" rel=\"noopener noreferrer\" data-href=\"https:\/\/github.com\/nashory\/gans-awesome-applications#face-aging\" data->hundreds<\/a>\u00a0of use cases in early stages of experimentation.<\/p>\n<p id=\"84e0\">While this might already sound revolutionary, the best of GANs is yet to come. The purpose of this article is to share a simple, inclusive tutorial for spreading awareness about this important technology. Now, let&#8217;s wait for the magic to unfold!<\/p>\n<\/section>\n<footer>\u00a0<\/footer>\n","protected":false},"excerpt":{"rendered":"<p>Severe shortcomings in both &lsquo;data&rsquo; and &lsquo;training&rsquo; take the A &amp; I out of mAgIc, making it meaningless. This is the biggest bottleneck for AI&rsquo;s progress today. But wait.. if machines could take up any human task, why not this one too?&nbsp;Can we make machines learn-to-teach themselves?&nbsp; Yes, this is doable. Enter&nbsp;GANs&#8230; or its complex-sounding expansion, Generative Adversarial Networks. If Deep learning is the next big thing that&rsquo;s taking the cake, GAN is the cream on that cake. 
The possibilities have never looked so exciting!<\/p>\n","protected":false},"author":315,"featured_media":3342,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"content-type":"","footnotes":""},"categories":[183],"tags":[97],"ppma_author":[1994],"class_list":["post-1411","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-ai-ml","tag-artificial-intelligence"],"authors":[{"term_id":1994,"user_id":315,"is_guest":0,"slug":"ganes-kesari","display_name":"Ganes Kesari","avatar_url":"https:\/\/www.experfy.com\/blog\/wp-content\/uploads\/2021\/05\/Ganes_Kesari-150x150.jpeg","user_url":"http:\/\/gramener.com","last_name":"Kesari","first_name":"Ganes","job_title":"","description":"Ganes Kesari is the Co-founder and Chief Decision Scientist at <a href=\"https:\/\/gramener.com\/\">Gramener<\/a>, a data science company that helps organizations present data insights as stories. He advises executives on data-driven leadership and helps organizations adopt a culture of data for decision-making. He is a TEDx speaker and Contributor to Forbes and Entrepreneur. 
Find his latest work <a href=\"https:\/\/gkesari.com\/\">here<\/a> and reach out to him on  <a href=\"https:\/\/www.linkedin.com\/in\/gkesari\/\">LinkedIn<\/a>, where he shares insights regularly."}],"_links":{"self":[{"href":"https:\/\/www.experfy.com\/blog\/wp-json\/wp\/v2\/posts\/1411","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.experfy.com\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.experfy.com\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.experfy.com\/blog\/wp-json\/wp\/v2\/users\/315"}],"replies":[{"embeddable":true,"href":"https:\/\/www.experfy.com\/blog\/wp-json\/wp\/v2\/comments?post=1411"}],"version-history":[{"count":3,"href":"https:\/\/www.experfy.com\/blog\/wp-json\/wp\/v2\/posts\/1411\/revisions"}],"predecessor-version":[{"id":28924,"href":"https:\/\/www.experfy.com\/blog\/wp-json\/wp\/v2\/posts\/1411\/revisions\/28924"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.experfy.com\/blog\/wp-json\/wp\/v2\/media\/3342"}],"wp:attachment":[{"href":"https:\/\/www.experfy.com\/blog\/wp-json\/wp\/v2\/media?parent=1411"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.experfy.com\/blog\/wp-json\/wp\/v2\/categories?post=1411"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.experfy.com\/blog\/wp-json\/wp\/v2\/tags?post=1411"},{"taxonomy":"author","embeddable":true,"href":"https:\/\/www.experfy.com\/blog\/wp-json\/wp\/v2\/ppma_author?post=1411"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}