{"id":1380,"date":"2019-02-15T10:32:06","date_gmt":"2019-02-15T10:32:06","guid":{"rendered":"http:\/\/kusuaks7\/?p=985"},"modified":"2023-07-28T08:44:42","modified_gmt":"2023-07-28T08:44:42","slug":"the-limits-and-challenges-of-deep-learning","status":"publish","type":"post","link":"https:\/\/www.experfy.com\/blog\/ai-ml\/the-limits-and-challenges-of-deep-learning\/","title":{"rendered":"The limits and challenges of deep learning"},"content":{"rendered":"<p><strong><em>Ready to learn Machine Learning? <a href=\"https:\/\/www.experfy.com\/training\/courses\">Browse courses<\/a>\u00a0like\u00a0<a href=\"https:\/\/www.experfy.com\/training\/courses\/machine-learning-foundations-supervised-learning\">Machine Learning Foundations: Supervised Learning<\/a> developed by industry thought leaders and Experfy in Harvard Innovation Lab.<\/em><\/strong><\/p>\n<p style=\"text-align: center;\"><img decoding=\"async\" style=\"width: 696px; height: 392px;\" src=\"https:\/\/i0.wp.com\/bdtechtalks.com\/wp-content\/uploads\/2018\/02\/neurons.jpg?resize=696%2C392&amp;ssl=1\" alt=\"neurons\" \/><\/p>\n<p>Deep learning, the spearhead of artificial intelligence, is perhaps one of the most exciting technologies of the decade. It has already made inroads in fields such as recognizing speech or detecting cancer, domains that were previously closed or scarcely available to traditional software models.<\/p>\n<p>Deep learning is often compared to the mechanisms that underlie the human mind, and some experts believe that it will continue to advance at an accelerating pace and conquer many more domains. 
In some cases, there\u2019s fear that deep learning might threaten the very social and economic fabrics that hold our societies together, by either driving humans into\u00a0<a href=\"https:\/\/bdtechtalks.com\/2017\/03\/21\/artificial-intelligence-and-the-disruption-of-employment\/\" target=\"_blank\" rel=\"noopener noreferrer\">unemployment<\/a>\u00a0or\u00a0<a href=\"https:\/\/bdtechtalks.com\/2017\/07\/28\/future-of-artificial-intelligence-ai-apocalypse\/\" target=\"_blank\" rel=\"noopener noreferrer\">slavery<\/a>.<\/p>\n<p>There\u2019s no doubt that machine learning and deep learning are super-efficient for many tasks. However, they\u2019re not a silver bullet that will solve all problems and override all previous technologies. Beyond the hype surrounding deep learning, in many cases, its distinct limits and challenges prevent it from competing with the mind of a human child.<\/p>\n<p>Here\u2019s an example. I played Mario Bros on the legendary NES console for the first time when I was six years old. That first experience, which didn\u2019t last more than a couple of hours, introduced me to the concept of platform games. After that, I could quickly apply the same rules to other games such as Prince of Persia, Sonic the Hedgehog, Crash Bandicoot and Donkey Kong Country. I also had no problem importing my previously earned knowledge to the 3D versions of those games when they made their appearance in the mid-90s.<\/p>\n<p>Meanwhile, I had the inherent power of porting my real-world experiences to the world of gaming. I immediately knew that I had to jump over pits, and that the plants that had sharp teeth would hurt Mario if he ran into them.<\/p>\n<p>By all standards, I was a person of average intelligence, and not even a very good gamer. This is the process that nearly every kid of my generation went through. However, even this simplest of feats proves to be a \u201cdeep\u201d challenge for deep learning algorithms. 
The smartest game-playing algorithms have to learn every new game from scratch.<\/p>\n<p>We humans can learn abstract, broad relationships between different concepts and make decisions with little information. In contrast, deep learning algorithms are narrow in their capabilities and need precise information\u2014lots of it\u2014to do their job.<\/p>\n<p>In a recent paper called \u201c<a href=\"https:\/\/arxiv.org\/abs\/1801.00631\" target=\"_blank\" rel=\"noopener noreferrer\">Deep Learning: A Critical Appraisal<\/a>,\u201d Gary Marcus, the former head of AI at Uber and a professor at New York University, details the limits and challenges that deep learning faces. While I suggest you read the entire paper, here\u2019s a brief summary of the important points he raised.<\/p>\n<h2>How deep learning works<\/h2>\n<p>Deep learning is a technique for classifying information through layered neural networks, a crude imitation of how the human brain works. Neural networks have a set of input units, into which raw data is fed. This data can be pictures, sound samples, or written text. The inputs are then mapped to the output nodes, which determine the category to which the input information belongs. 
For instance, it can determine that the fed picture contains a cat, or that the small sound sample was the word \u201cHello.\u201d<\/p>\n<p>Deep neural networks, which power deep learning algorithms, have several hidden layers between the input and output nodes, making them capable of making much more complicated classifications of data.<\/p>\n<figure id=\"attachment_2115\"><img decoding=\"async\" style=\"width: 696px; height: 424px;\" src=\"https:\/\/i0.wp.com\/bdtechtalks.com\/wp-content\/uploads\/2018\/02\/deep-neural-networks.png?resize=696%2C424&amp;ssl=1\" sizes=\"(max-width: 696px) 100vw, 696px\" srcset=\"https:\/\/i0.wp.com\/bdtechtalks.com\/wp-content\/uploads\/2018\/02\/deep-neural-networks.png?w=974&amp;ssl=1 974w, https:\/\/i0.wp.com\/bdtechtalks.com\/wp-content\/uploads\/2018\/02\/deep-neural-networks.png?resize=300%2C183&amp;ssl=1 300w, https:\/\/i0.wp.com\/bdtechtalks.com\/wp-content\/uploads\/2018\/02\/deep-neural-networks.png?resize=768%2C468&amp;ssl=1 768w, https:\/\/i0.wp.com\/bdtechtalks.com\/wp-content\/uploads\/2018\/02\/deep-neural-networks.png?resize=197%2C120&amp;ssl=1 197w\" alt=\"deep neural networks\" data-attachment-id=\"2115\" data-comments-opened=\"1\" data-image-description=\"\" data-image-meta=\"{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}\" data-image-title=\"deep neural networks\" data-large-file=\"https:\/\/i0.wp.com\/bdtechtalks.com\/wp-content\/uploads\/2018\/02\/deep-neural-networks.png?fit=696%2C424&amp;ssl=1\" data-lazy-loaded=\"1\" data-medium-file=\"https:\/\/i0.wp.com\/bdtechtalks.com\/wp-content\/uploads\/2018\/02\/deep-neural-networks.png?fit=300%2C183&amp;ssl=1\" 
data-orig-file=\"https:\/\/i0.wp.com\/bdtechtalks.com\/wp-content\/uploads\/2018\/02\/deep-neural-networks.png?fit=974%2C593&amp;ssl=1\" data-orig-size=\"974,593\" data-permalink=\"https:\/\/bdtechtalks.com\/2018\/02\/27\/limits-challenges-deep-learning-gary-marcus\/deep-neural-networks\/\" data-recalc-dims=\"1\" \/><figcaption>Credit:\u00a0<a href=\"https:\/\/arxiv.org\/abs\/1801.00631\" target=\"_blank\" rel=\"noopener noreferrer\">Gary Marcus<\/a><\/figcaption><\/figure>\n<p>Deep learning algorithms need to be trained with large sets of labelled data. This means that, for instance, you have to give an algorithm thousands of pictures of cats before it can start classifying new cat pictures with relative accuracy. The larger the training data set, the better the performance of the algorithm. Big tech companies are\u00a0<a href=\"https:\/\/bdtechtalks.com\/2017\/06\/26\/how-amazons-acquisition-of-whole-foods-affects-the-data-battleground\/\" target=\"_blank\" rel=\"noopener noreferrer\">vying to amass more and more data<\/a>\u00a0and are willing to offer their services for free in exchange for access to user data. The more classified information they have, the better they\u2019ll be able to train their deep learning algorithms. 
This will in turn make their services more efficient than those of their competitors and bring them more customers (some of whom will pay for their premium services).<\/p>\n<h2>Deep learning is data hungry<\/h2>\n<p>\u201cIn a world with infinite data, and infinite computational resources, there might be little need for any other technique,\u201d Marcus says in his paper.<\/p>\n<p>And therein lies the problem, because we don\u2019t live in such a world.<\/p>\n<p>You can never give every possible labelled sample of a problem space to a deep learning algorithm. Therefore, it will have to generalize or interpolate between its previous samples in order to classify data it has never seen before such as a new image or sound that\u2019s not contained in its dataset.<\/p>\n<p>\u201cDeep learning currently lacks a mechanism for learning abstractions through explicit, verbal definition, and works best when there are thousands, millions or even billions of training examples,\u201d says Marcus.<\/p>\n<p style=\"text-align: center;\"><img fetchpriority=\"high\" decoding=\"async\" src=\"https:\/\/i1.wp.com\/bdtechtalks.com\/wp-content\/uploads\/2016\/04\/6914441342_775b4ab9a7_o.jpg?resize=640%2C480&amp;ssl=1\" alt=\"Big Data\" width=\"640\" height=\"480\" data-attachment-id=\"212\" data-comments-opened=\"1\" data-image-description=\"\" data-image-meta=\"{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}\" data-image-title=\"Big Data\" data-large-file=\"https:\/\/i1.wp.com\/bdtechtalks.com\/wp-content\/uploads\/2016\/04\/6914441342_775b4ab9a7_o.jpg?fit=640%2C480&amp;ssl=1\" data-lazy-loaded=\"1\" 
data-medium-file=\"https:\/\/i1.wp.com\/bdtechtalks.com\/wp-content\/uploads\/2016\/04\/6914441342_775b4ab9a7_o.jpg?fit=300%2C225&amp;ssl=1\" data-orig-file=\"https:\/\/i1.wp.com\/bdtechtalks.com\/wp-content\/uploads\/2016\/04\/6914441342_775b4ab9a7_o.jpg?fit=640%2C480&amp;ssl=1\" data-orig-size=\"640,480\" data-permalink=\"https:\/\/bdtechtalks.com\/2016\/04\/15\/the-role-of-big-data-in-securing-online-identities\/6914441342_775b4ab9a7_o\/\" data-recalc-dims=\"1\" \/><\/p>\n<p>So what happens when a deep learning algorithm doesn\u2019t have enough quality training data? It can fail spectacularly, such as mistaking a\u00a0<a href=\"https:\/\/www.wired.com\/story\/researcher-fooled-a-google-ai-into-thinking-a-rifle-was-a-helicopter\/\" target=\"_blank\" rel=\"noopener noreferrer\">rifle for a helicopter<\/a>, or\u00a0<a href=\"https:\/\/www.theverge.com\/2015\/7\/1\/8880363\/google-apologizes-photos-app-tags-two-black-people-gorillas\" target=\"_blank\" rel=\"noopener noreferrer\">humans for gorillas<\/a>.<\/p>\n<p>The heavy reliance on precise and abundant data also makes deep learning algorithms vulnerable to spoofing. 
\u201cDeep learning systems are quite good at some large fraction of a given domain, yet easily fooled,\u201d Marcus says.<\/p>\n<p>Many crazy stories attest to this, such as deep learning algorithms mistaking\u00a0<a href=\"https:\/\/arstechnica.com\/cars\/2017\/09\/hacking-street-signs-with-stickers-could-confuse-self-driving-cars\/\" target=\"_blank\" rel=\"noopener noreferrer\">stop signs for speed limit signs<\/a>\u00a0with a little defacing, or British police software not being able to\u00a0<a href=\"https:\/\/gizmodo.com\/british-cops-want-to-use-ai-to-spot-porn-but-it-keeps-m-1821384511\" target=\"_blank\" rel=\"noopener noreferrer\">distinguish sand dunes from nudes<\/a>.<\/p>\n<h2>Deep learning is shallow<\/h2>\n<p>Another problem with deep learning algorithms is that they\u2019re very good at mapping inputs to outputs but not so much at understanding the context of the data they\u2019re handling. In fact, the word \u201cdeep\u201d in deep learning is much more a reference to the architecture of the technology and the number of hidden layers it contains than an allusion to its deep understanding of what it does. \u201cThe representations acquired by such networks don\u2019t, for example, naturally apply to abstract concepts like \u2018justice,\u2019 \u2018democracy\u2019 or \u2018meddling,\u2019\u201d Marcus says.<\/p>\n<p>Returning to the gaming example we visited at the beginning of the article, deep learning algorithms can become very good at playing games, and they can eventually beat the best human players in both video and board games. Just look at the crazy way this AI is playing Mario:<\/p>\n<p>However, this doesn\u2019t mean that the AI algorithm has the same understanding as humans of the different elements of the game. It has learned through trial and error that making those specific moves will prevent it from losing. 
For instance, if you look at\u00a0<a href=\"https:\/\/youtu.be\/qv6UVOQ0F44?t=2m26s\" target=\"_blank\" rel=\"noopener noreferrer\">2:26<\/a>\u00a0where a baseball is thrown at Mario, even the most novice gamer would know that they would have to jump. But the AI keeps on running until it gets hit in the back.<\/p>\n<p>Moreover, if you give that same algorithm a new game, such as Super Mario 64 or Mega Man, it will have to learn everything anew.<\/p>\n<p>Marcus\u2019 paper refers to Google DeepMind\u2019s\u00a0<a href=\"http:\/\/www.wired.co.uk\/article\/google-deepmind-atari\" target=\"_blank\" rel=\"noopener noreferrer\">mastering of the Atari game Breakout<\/a>\u00a0as an example. According to its designers, the algorithm realized after 240 minutes that the best way to beat the game is to dig a tunnel in the wall. But it knows neither what a tunnel nor a wall is. It has just learned through millions of trials and errors that that specific way to play the game will yield the most points in the shortest time possible.<\/p>\n<h2>Deep learning is opaque<\/h2>\n<p>While decisions made by rule-based software can be traced back to the last if and else, the same can\u2019t be said about machine learning and deep learning algorithms. This lack of transparency in deep learning is what we call\u00a0<a href=\"https:\/\/bdtechtalks.com\/2018\/02\/09\/scary-ai-blackbox\/\" target=\"_blank\" rel=\"noopener noreferrer\">the \u201cblack box\u201d problem<\/a>. Deep learning algorithms sift through millions of data points to find patterns and correlations that often go unnoticed by human experts. The decisions they make based on these findings often confound even the engineers who created them.<\/p>\n<p>This might not be a problem when deep learning is performing a trivial task where a wrong decision will cause little or no damage. 
But when it\u2019s deciding the fate of a defendant in court or the medical treatment of a patient, mistakes can have more serious repercussions.<\/p>\n<p>\u201cThe transparency issue, as yet unsolved, is a potential liability when using deep learning for problem domains like financial trades or medical diagnosis, in which human users might like to understand how a given system made a given decision,\u201d Marcus says in his paper.<\/p>\n<p style=\"text-align: center;\"><img decoding=\"async\" src=\"https:\/\/i1.wp.com\/bdtechtalks.com\/wp-content\/uploads\/2018\/02\/fog-road.jpeg?resize=696%2C464&amp;ssl=1\" sizes=\"(max-width: 696px) 100vw, 696px\" srcset=\"https:\/\/i1.wp.com\/bdtechtalks.com\/wp-content\/uploads\/2018\/02\/fog-road.jpeg?w=1920&amp;ssl=1 1920w, https:\/\/i1.wp.com\/bdtechtalks.com\/wp-content\/uploads\/2018\/02\/fog-road.jpeg?resize=300%2C200&amp;ssl=1 300w, https:\/\/i1.wp.com\/bdtechtalks.com\/wp-content\/uploads\/2018\/02\/fog-road.jpeg?resize=768%2C512&amp;ssl=1 768w, https:\/\/i1.wp.com\/bdtechtalks.com\/wp-content\/uploads\/2018\/02\/fog-road.jpeg?resize=1024%2C683&amp;ssl=1 1024w, https:\/\/i1.wp.com\/bdtechtalks.com\/wp-content\/uploads\/2018\/02\/fog-road.jpeg?resize=180%2C120&amp;ssl=1 180w, https:\/\/i1.wp.com\/bdtechtalks.com\/wp-content\/uploads\/2018\/02\/fog-road.jpeg?resize=1140%2C760&amp;ssl=1 1140w, https:\/\/i1.wp.com\/bdtechtalks.com\/wp-content\/uploads\/2018\/02\/fog-road.jpeg?w=1392&amp;ssl=1 1392w\" alt=\"fog road\" width=\"696\" height=\"464\" data-attachment-id=\"2116\" data-comments-opened=\"1\" data-image-description=\"\" 
data-image-meta=\"{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}\" data-image-title=\"fog road\" data-large-file=\"https:\/\/i1.wp.com\/bdtechtalks.com\/wp-content\/uploads\/2018\/02\/fog-road.jpeg?fit=696%2C464&amp;ssl=1\" data-lazy-loaded=\"1\" data-medium-file=\"https:\/\/i1.wp.com\/bdtechtalks.com\/wp-content\/uploads\/2018\/02\/fog-road.jpeg?fit=300%2C200&amp;ssl=1\" data-orig-file=\"https:\/\/i1.wp.com\/bdtechtalks.com\/wp-content\/uploads\/2018\/02\/fog-road.jpeg?fit=1920%2C1280&amp;ssl=1\" data-orig-size=\"1920,1280\" data-permalink=\"https:\/\/bdtechtalks.com\/2018\/02\/27\/limits-challenges-deep-learning-gary-marcus\/fog-road\/\" data-recalc-dims=\"1\" \/><\/p>\n<p>Marcus also points to algorithmic bias as\u00a0<a href=\"https:\/\/bdtechtalks.com\/2017\/02\/21\/artificial-intelligence-machine-learning-problems-challenges\/\" target=\"_blank\" rel=\"noopener noreferrer\">one of the problems<\/a>\u00a0stemming from the opacity of deep learning algorithms. 
Machine learning algorithms often inherit the biases of the training data they ingest, such as preferring to show\u00a0<a href=\"https:\/\/www.washingtonpost.com\/news\/the-intersect\/wp\/2015\/07\/06\/googles-algorithm-shows-prestigious-job-ads-to-men-but-not-to-women-heres-why-that-should-worry-you\/?utm_term=.4e91879038ef\" target=\"_blank\" rel=\"noopener noreferrer\">higher paying job ads<\/a>\u00a0to men rather than women, or preferring white skin over dark in\u00a0<a href=\"https:\/\/www.theguardian.com\/technology\/2016\/sep\/08\/artificial-intelligence-beauty-contest-doesnt-like-black-people\" target=\"_blank\" rel=\"noopener noreferrer\">adjudicating beauty contests<\/a>. These problems are hard to debug in the development phase and often result in controversial news headlines when the deep learning\u2013powered software goes into production.<\/p>\n<h2>Is deep learning doomed to fail?<\/h2>\n<p>Certainly not. But it is bound for a reality check. \u201cIn general, deep learning is a perfectly fine way of optimizing a complex system for representing a mapping between inputs and outputs, given a sufficiently large data set,\u201d Marcus observes.<\/p>\n<p>Deep learning must be acknowledged for what it is: a highly efficient technique for solving classification problems, which will perform well when it has enough training data and a test set that closely resembles the training data set.<\/p>\n<p>But it\u2019s not a magic wand. If you don\u2019t have enough training data, if your test data differs greatly from your training data, or if you\u2019re not solving a classification problem, then \u201cdeep learning becomes a square peg slammed into a round hole, a crude approximation when there must be a solution elsewhere,\u201d in Marcus\u2019 words.<\/p>\n<p>Marcus also suggests in his paper that deep learning has to be combined with other technologies such as plain old rule-based programming and other AI techniques such as reinforcement learning. 
Other experts such as Starmind\u2019s Pascal Kaufmann propose neuroscience as the key to creating real AI that will be able to achieve\u00a0<a href=\"https:\/\/bdtechtalks.com\/2017\/12\/21\/kaufmann-cracking-the-brain-code-creating-true-artificial-intelligence\/\" target=\"_blank\" rel=\"noopener noreferrer\">human-like problem solving<\/a>.<\/p>\n<p>\u201cDeep learning is not likely to disappear, nor should it,\u201d Marcus says. \u201cBut five years into the field\u2019s resurgence seems like a good moment for a critical reflection, on what deep learning has and has not been able to achieve.\u201d<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Deep learning, the spearhead of artificial intelligence, is perhaps one of the most exciting technologies of the decade.&nbsp;There&rsquo;s no doubt that machine learning and deep learning are super-efficient for many tasks. Deep learning is often compared to the mechanisms that underlie the human mind. However, they&rsquo;re not a silver bullet that will solve all problems and override all previous technologies. 
Beyond the hype surrounding deep learning, in many cases, its distinct limits and challenges prevent it from competing with the mind of a human child.<\/p>\n","protected":false},"author":109,"featured_media":3196,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"content-type":"","footnotes":""},"categories":[183],"tags":[97],"ppma_author":[1946],"class_list":["post-1380","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-ai-ml","tag-artificial-intelligence"],"authors":[{"term_id":1946,"user_id":109,"is_guest":0,"slug":"ben-dickson","display_name":"Ben Dickson","avatar_url":"https:\/\/www.experfy.com\/blog\/wp-content\/uploads\/2020\/04\/medium_8aaf6bea-c4c1-455f-8156-8007d70910f8-150x150.jpg","user_url":"https:\/\/bdtechtalks.com\/","last_name":"Dickson","first_name":"Ben","job_title":"","description":"Ben Dickson is an experienced software engineer and tech blogger. He contributes regularly to major tech websites such as the Next Web, the Daily Dot, PCMag.com, Cointelegraph, VentureBeat, International Business Times UK, and The Huffington 
Post."}],"_links":{"self":[{"href":"https:\/\/www.experfy.com\/blog\/wp-json\/wp\/v2\/posts\/1380","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.experfy.com\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.experfy.com\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.experfy.com\/blog\/wp-json\/wp\/v2\/users\/109"}],"replies":[{"embeddable":true,"href":"https:\/\/www.experfy.com\/blog\/wp-json\/wp\/v2\/comments?post=1380"}],"version-history":[{"count":3,"href":"https:\/\/www.experfy.com\/blog\/wp-json\/wp\/v2\/posts\/1380\/revisions"}],"predecessor-version":[{"id":29724,"href":"https:\/\/www.experfy.com\/blog\/wp-json\/wp\/v2\/posts\/1380\/revisions\/29724"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.experfy.com\/blog\/wp-json\/wp\/v2\/media\/3196"}],"wp:attachment":[{"href":"https:\/\/www.experfy.com\/blog\/wp-json\/wp\/v2\/media?parent=1380"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.experfy.com\/blog\/wp-json\/wp\/v2\/categories?post=1380"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.experfy.com\/blog\/wp-json\/wp\/v2\/tags?post=1380"},{"taxonomy":"author","embeddable":true,"href":"https:\/\/www.experfy.com\/blog\/wp-json\/wp\/v2\/ppma_author?post=1380"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}