# The GPT-3 Model: What Does It Mean For Chatbots And Customer Service?

*By Mark van Rijmenam · April 19, 2021*

## What is GPT-3?
In February 2019, the [artificial intelligence](https://vanrijmenam.nl/30-ways-how-ai-will-change-your-business/) research lab OpenAI sent shockwaves through the world of computing by releasing the [GPT-2 language model](https://openai.com/blog/better-language-models/). Short for "Generative Pretrained Transformer 2," GPT-2 can generate several paragraphs of natural-language text, often impressively realistic and internally coherent, from a short prompt.

Scarcely a year later, OpenAI has already outdone itself with **GPT-3**, a new generative language model that is bigger than GPT-2 by orders of magnitude. The largest version of GPT-3 has 175 billion parameters, more than 100 times the 1.5 billion of GPT-2. (For reference, the number of neurons in the human brain is usually estimated at [85 billion to 120 billion](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2776484/), and the number of synapses at roughly [150 trillion](https://www.dana.org/article/qa-neurotransmission-the-synapse/).)

Just like its predecessor, GPT-3 was trained on a simple task: given the previous words in a text, predict the next word. This required the model to consume very large datasets of Internet text, such as [Common Crawl](https://commoncrawl.org/) and [Wikipedia](https://dumps.wikimedia.org/), totalling 499 billion tokens (i.e. words and numbers).

But how does GPT-3 work under the hood? Is it really a major step up from GPT-2? And what are the possible implications and applications of the GPT-3 model?

## How Does GPT-3 Work?

Building GPT-3 required a monumental effort from OpenAI researchers. The details of the model are discussed in the May 2020 paper ["Language Models are Few-Shot Learners,"](https://arxiv.org/abs/2005.14165) which runs to 74 pages and has more than 30 authors. Chuan Li, chief science officer at Lambda Labs, estimates that a single training run of GPT-3 would cost [$4.6 million](https://lambdalabs.com/blog/demystifying-gpt-3/) and take 355 years on a single NVIDIA Tesla V100 GPU.

Perhaps the most impressive feat of GPT-3 is its ability to generate highly realistic sentences and paragraphs of text.
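As an aside on the mechanics: the next-word training objective described in the previous section can be sketched in miniature. The bigram counter below is a toy stand-in used purely for illustration; GPT-3 itself is a 175-billion-parameter Transformer, and the tiny corpus here is invented:

```python
from collections import Counter, defaultdict

# Toy corpus; GPT-3's real training data totals roughly 499 billion tokens.
corpus = "the cat sat on the mat and the cat slept".split()

# Count how often each word follows each preceding word.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word` in the corpus."""
    counts = following[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # "cat" (follows "the" twice, vs. "mat" once)
```

A Transformer replaces these raw counts with probabilities learned from the entire preceding context, but the task it is trained on is exactly this one: predict the next word.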
GPT-3's text generation surpasses even that of GPT-2, which was already a major advancement in the field.

Given a prompt containing a brief title and subtitle, the model can "write" a roughly 200-word news article that appears at least superficially plausible. Below is the GPT-3-generated article that human judges had the most difficulty distinguishing from a real one:

> ***Title:*** *United Methodists Agree to Historic Split*
>
> ***Subtitle:*** *Those who oppose gay marriage will form their own denomination*
>
> ***Article:*** *After two days of intense debate, the United Methodist Church has agreed to a historic split — one that is expected to end in the creation of a new denomination, one that will be "theologically and socially conservative," according to The Washington Post. The majority of delegates attending the church's annual General Conference in May voted to strengthen a ban on the ordination of LGBTQ clergy and to write new rules that will "discipline" clergy who officiate at same-sex weddings. But those who opposed these measures have a new plan: They say they will form a separate denomination by 2020, calling their church the Christian Methodist denomination.*

Note that this article is based on a kernel of truth: in January 2020, the United Methodist Church [proposed a split](https://www.cnn.com/2020/01/17/us/united-methodist-church-split-christianity/index.html) as a result of disagreements over LGBT issues such as same-sex marriage. This verisimilitude was likely key to convincing so many judges. The generated article nonetheless gets a few notable facts wrong: the name of the new denomination has not been suggested, the proposal was not made at the church's General Conference, and the Washington Post citation is not based on a real quote.

Perhaps even more impressive, though, is GPT-3's performance on a number of common natural language processing tasks. Even compared with GPT-2, GPT-3 represents a significant step forward for the NLP field. Remarkably, it achieves very high performance without any special training or fine-tuning for these tasks.

For one, GPT-3 achieves very strong performance on ["cloze" tests](https://www.clozemaster.com/blog/cloze-test/), in which the model must fill in a blanked-out word in a sentence. Given the sentence below, for example, most people would insert a word such as "bat" in the blank:

*George bought some baseball equipment: a ball, a glove, and a _____.*

GPT-3 can also easily adapt to new words introduced into its vocabulary. The example below shows how, given a prompt that defines a new word, GPT-3 can generate a plausible sentence that even uses the word in the past tense:

> ***Prompt:*** *To "screeg" something is to swing a sword at it. An example of a sentence that uses the word screeg is:*
>
> ***Answer:*** *We screeghed at each other for several minutes and then we went outside and ate ice cream.*

Surprisingly, GPT-3 can also perform simple arithmetic with a high degree of accuracy, even without being trained for this task.
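Such arithmetic questions reach the model as ordinary text. Below is a hypothetical sketch of how a few-shot prompt might be assembled; the Q/A format is illustrative only, not OpenAI's exact benchmark format:

```python
# Assemble a few-shot arithmetic prompt: a handful of worked examples
# followed by the query. The model continues the text after the final
# "A:", ideally with the correct sum. Format is illustrative only.
def arithmetic_prompt(a, b, examples=((13, 29), (52, 31))):
    lines = [f"Q: What is {x} plus {y}?\nA: {x + y}" for x, y in examples]
    lines.append(f"Q: What is {a} plus {b}?\nA:")
    return "\n\n".join(lines)

print(arithmetic_prompt(48, 76))
```

The worked examples show the model the task; no weights are updated. This is the "few-shot" setting the GPT-3 paper takes its title from.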
With a simple question such as "What is 48 plus 76?", GPT-3 supplies the correct answer almost 100 per cent of the time for two-digit numbers, and roughly 80 per cent of the time for three-digit numbers.

## What Does GPT-3 Mean, in General?

In the weeks since the release of GPT-3, many experts have discussed the impact the model might have on the state of deep learning, artificial intelligence, and NLP.

First, GPT-3 demonstrates that it is not necessary to have a task-specific dataset, or to fine-tune the model's architecture, to achieve very good performance on specific tasks. You don't need to train the model on millions of addition and subtraction problems, for example, for it to answer a math question correctly. Essentially, GPT-3 achieved its strong results primarily through brute force, scaling the model up to an incredible size.

This approach has earned mixed reviews from analysts. According to UCLA assistant professor of computer science Guy Van den Broeck, the GPT-3 model is analogous to ["some oil-rich country being able to build a very tall skyscraper."](https://venturebeat.com/2020/06/01/ai-machine-learning-openai-gpt-3-size-isnt-everything/) While acknowledging the knowledge, skill, and effort required to build GPT-3, Van den Broeck claims that "there is no scientific advancement per se," and that the model will not "fundamentally change progress in AI."

One issue is that the raw computing power required to train models like GPT-3 is simply out of reach for smaller companies and academia. [Deep learning researcher Denny Britz](https://venturebeat.com/2020/06/01/ai-machine-learning-openai-gpt-3-size-isnt-everything/) compares GPT-3 to a particle collider in physics: a cutting-edge tool accessible only to a small group of people. However, Britz also suggests that these computing limitations may be a net positive for AI research, forcing less well-funded researchers to think about *why* the model works and about alternative techniques for achieving the same effects.

Despite the impressive results, it's not entirely clear what is going on under GPT-3's hood. Has the model actually "learned" anything, or is it simply doing very high-level pattern matching on certain problems? The authors note that GPT-3 still exhibits notable weaknesses in tasks such as text synthesis and reading comprehension.

What's more, is there a natural limit to the performance of models like GPT-3, no matter how large we scale them?
The authors briefly discuss this concern as well, mentioning the possibility that the model "may eventually run into (or could already be running into) the limits of the pretraining objective." In other words, brute force can only get you so far.

Unless you have a few hundred spare GPUs lying around, the answers to these questions will have to wait, perhaps until the presumed release of GPT-4 sometime next year.

## What Does GPT-3 Mean for Customer Service?

Although there is still much more to learn about how GPT-3 works, the release of the model has wide-ranging implications for a number of industries, in particular chatbots and customer service. The ability of GPT-3 to generate paragraphs of seemingly realistic text should appeal to anyone interested in creating more convincing, "human-like" AIs.

Tech companies have tried for years to build [chatbots](https://www.experfy.com/blog/ai-ml/how-chatbots-created-storm-tech-world/) that can effectively simulate conversations with their human interlocutors. Yet despite their best efforts, chatbots still cannot match the conversational fluency and knowledge of a real human being over a sustained period. According to a 2019 survey, [86 per cent of people](https://www.cgsinc.com/en/resources/2019-CGS-Customer-Service-Chatbots-Channels-Survey) prefer to speak with humans instead of chatbots, and 71 per cent say they would be less likely to use a brand that had no human agents available.

Of course, GPT-3 was trained to generate articles and text, not to hold a lifelike conversation. But there are indications that models like GPT-3 are approaching human-like language abilities, at least for shallow interactions such as a [chatbot conversation](https://vanrijmenam.nl/how-develop-conversational-ai-business/). The GPT-3 authors found that human judges could identify the model's fake articles only 52 per cent of the time, little better than chance.

It is not only the realism of GPT-3, but also the advanced tasks it can perform, that differentiate it from the current field of chatbots. Many chatbots on companies' websites are intended simply as a [customer service](https://vanrijmenam.nl/conversational-ai-change-customer-service/) quality filter, suggesting some common solutions before transferring users to a human agent if necessary.

In terms of natural language processing, meanwhile, GPT-3 is much closer to an ["artificial general intelligence"](https://www.mckinsey.com/business-functions/operations/our-insights/an-executive-primer-on-artificial-general-intelligence) than any chatbot built thus far (although it is still far from a true AGI). It is conceivable that one day, highly advanced models like GPT-3 could parse users' complex queries and solve their problems automatically, without a human agent ever needing to step in.

Furthermore, [groundbreaking conversational AIs](https://discover.bot/bot-talk/human-like-chatbots-benefits-dangers-and-possibilities/) such as Google's [Meena](https://ai.googleblog.com/2020/01/towards-conversational-agent-that-can.html) and Facebook's [BlenderBot](https://ai.facebook.com/blog/state-of-the-art-open-source-chatbot/), both released in 2020, have demonstrated that the "brute force" approach is effective when applied specifically to chatbots. Meena and BlenderBot have 2.6 billion and 9.4 billion parameters respectively, only tiny fractions of GPT-3's 175 billion. It may be only a matter of time before such models, expanded to the scale of GPT-3, pass the Turing test and become virtually indistinguishable from humans in short text conversations.

OpenAI has not yet released the full model or source code for GPT-3, as it did gradually with GPT-2 last year. This puts GPT-3 out of reach for companies interested in its practical applications, at least for now. But this isn't the last we'll hear of GPT-3 by a long shot. We live in exciting times, and whatever research comes next down the pipeline is sure to advance our understanding of the capabilities (and limits) of AI.
---

**About the author:** Dr Mark van Rijmenam is the founder of [Datafloq](https://datafloq.com/) and [Imagjn](https://imagjn.com/). He is a globally recognised speaker on big data, blockchain and AI, a strategist, influencer and author of three management books. His latest book, [The Organisation of Tomorrow](https://vanrijmenam.nl/the-organisation-of-tomorrow/), discusses how AI, blockchain and analytics turn every business into a data organisation. He has been named a global thought leader on [big data](https://onalytica.com/blog/posts/big-data-2016-top-100-influencers-and-brands/), [blockchain](https://www.thinkers360.com/top-50-global-thought-leaders-and-influencers-on-blockchain-november-2019/) and [artificial intelligence](https://www.thinkers360.com/top-20-global-thought-leaders-and-influencers-on-artificial-intelligence-september-2019/). He holds a PhD in Management from the University of Technology, Sydney, is a strategic advisor to several blockchain startups, and publishes [the "f(x) = eˣ" newsletter](https://vanrijmenam.nl/subscribe-to-newsletter/).