{"id":26494,"date":"2021-09-21T17:46:54","date_gmt":"2021-09-21T17:46:54","guid":{"rendered":"https:\/\/www.experfy.com\/blog\/?p=26494"},"modified":"2023-08-16T11:22:38","modified_gmt":"2023-08-16T11:22:38","slug":"review-of-attention-1","status":"publish","type":"post","link":"https:\/\/www.experfy.com\/blog\/ai-ml\/review-of-attention-1\/","title":{"rendered":"Review of Attention &#8211; 1"},"content":{"rendered":"\t\t<div data-elementor-type=\"wp-post\" data-elementor-id=\"26494\" class=\"elementor elementor-26494\" data-elementor-post-type=\"post\">\n\t\t\t\t\t\t<section class=\"has_eae_slider elementor-section elementor-top-section elementor-element elementor-element-9f13906 elementor-section-boxed elementor-section-height-default elementor-section-height-default\" data-id=\"9f13906\" data-element_type=\"section\" data-e-type=\"section\">\n\t\t\t\t\t\t<div class=\"elementor-container elementor-column-gap-default\">\n\t\t\t\t\t<div class=\"has_eae_slider elementor-column elementor-col-100 elementor-top-column elementor-element elementor-element-476d9aa\" data-id=\"476d9aa\" data-element_type=\"column\" data-e-type=\"column\">\n\t\t\t<div class=\"elementor-widget-wrap elementor-element-populated\">\n\t\t\t\t\t\t<div class=\"elementor-element elementor-element-dcb138f elementor-widget elementor-widget-text-editor\" data-id=\"dcb138f\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\t<p>Attention mechanisms have greatly helped accelerate the progress in deep learning [1] [2]. Although initially developed for language, Attention has become a popular technique in several domains such as Vision and RL. In this series of articles, we will briefly review the concept of attention and discuss some of the works that have applied attention in visual tasks.<\/p>\n\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-aff6d64 elementor-widget elementor-widget-text-editor\" data-id=\"aff6d64\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\t<p><strong>Intuition:<\/strong><\/p>\n\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-6f9ac37 elementor-widget elementor-widget-text-editor\" data-id=\"6f9ac37\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\t<p>The ability of a species to quickly process information and respond to environmental changes greatly determines the chances of its survival [3] [4] [5]. Although our brain can perform complex tasks, it is limited by the amount of computation it can perform. In fact, our vision system is capable of processing less than 1% of the input data (<a href=\"https:\/\/en.wikipedia.org\/wiki\/Attention#cite_note-:0-3\" rel=\"noopener\">Wikipedia<\/a>). 
Our ability to pay attention to the important aspects of the input compensates for this bottleneck and helps us perform complex tasks efficiently.

Inspired by the biological concept of attention, a similar technique is used in [deep learning models](http://www.experfy.com/blog/ai-ml/the-5-step-recipe-to-make-your-deep-learning-models-bug-free/). It enables a model to assign higher importance to certain parts of the input signal while generating the output. Attention has played an important role in designing models that have achieved, and sometimes surpassed, human-level performance on complex learning tasks in vision and language. In this article, we review the concept of attention, specifically self-attention, and its adoption in visual tasks.

**Attention:**

[Bahdanau et al.](https://arxiv.org/abs/1409.0473) introduced the attention mechanism for Neural Machine Translation (NMT) [6]. It enabled RNN models to better represent long-range dependencies in the input sequence. Inspired by its success, variants of the attention technique were developed and successfully applied to language and image tasks. Refer to this excellent [blog post](https://lilianweng.github.io/lil-log/2018/06/24/attention-attention.html) by Lilian Weng for more details on the evolution of attention mechanisms [7].

Attention models usually learn the attention weights by aligning representations of the input and output sequences. The technique is called self-attention when the alignment scores are computed against the same input sequence rather than the output sequence.
<a href=\"https:\/\/arxiv.org\/abs\/1706.03762\" rel=\"noopener\">Vaswani et al.<\/a> used self-attention in the encoder of the transformer model to generate useful representations of the input sequences, which was later used for downstream tasks such as translation [8].<\/p>\n\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-8aa5551 elementor-widget elementor-widget-text-editor\" data-id=\"8aa5551\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\t<p><strong>How does self-attention work?<\/strong><\/p>\n\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-18bb394 elementor-widget elementor-widget-text-editor\" data-id=\"18bb394\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\t<p>The attention mechanisms is analogous to how an information retrieval system works. The data in a database usually exists in the form of Key-Value pairs to enable faster retrieval. A query is compared against each key in the database and the value corresponding to the matching key is returned.\u00a0<\/p>\n\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-1353666 elementor-widget elementor-widget-text-editor\" data-id=\"1353666\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\t<p>In self-attention, Query(Q), Key(K), and Value(V) are vectors obtained by applying transformations on the input sequence. The output of the attention layer is calculated as the weighted sum of the values. The weights(W) assigned to the values are calculated by comparing the query and the key vectors. The concept of self-attention is discussed in great detail <a href=\"https:\/\/jalammar.github.io\/illustrated-transformer\/\" rel=\"noopener\">here<\/a> [9].<\/p>\n\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-811a892 elementor-widget elementor-widget-text-editor\" data-id=\"811a892\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\t<p><img decoding=\"async\" class=\"wp-image-26734 aligncenter\" src=\"http:\/\/www.experfy.com\/blog\/wp-content\/uploads\/2021\/09\/Screen-Shot-2021-09-30-at-5.14.34-PM.png\" alt=\"\" width=\"313\" height=\"126\" srcset=\"https:\/\/www.experfy.com\/blog\/wp-content\/uploads\/2021\/09\/Screen-Shot-2021-09-30-at-5.14.34-PM.png 362w, https:\/\/www.experfy.com\/blog\/wp-content\/uploads\/2021\/09\/Screen-Shot-2021-09-30-at-5.14.34-PM-300x121.png 300w, https:\/\/www.experfy.com\/blog\/wp-content\/uploads\/2021\/09\/Screen-Shot-2021-09-30-at-5.14.34-PM-360x146.png 360w\" sizes=\"(max-width: 313px) 100vw, 313px\" \/><\/p>\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-8e364e0 elementor-widget elementor-widget-text-editor\" data-id=\"8e364e0\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\t<p>The compatibility scores are calculated as a dot-product between Query and Key vectors. 
The compatibility scores are calculated as the dot product between the query and key vectors. The result is scaled by a factor of 1/√d<sub>k</sub> to keep the magnitude of the dot products from growing very large for high-dimensional inputs, and a softmax is then applied to normalize the scores. In matrix form:

$$\mathrm{Attention}(Q, K, V) = \mathrm{softmax}\!\left(\frac{QK^{T}}{\sqrt{d_k}}\right)V$$

(A runnable sketch of this computation follows the list below.)

Vaswani et al. discussed three major advantages of self-attention layers over traditional sequential networks:

1. The computational complexity per layer is drastically reduced compared to traditional sequential networks.
2. Unlike in RNNs, many of the computations in an attention layer can be parallelized, offering a way to scale to larger datasets.
3. The path length between positions in the input and output sequences is shorter. This makes it easier for the model to learn long-range dependencies than in sequential networks, where signals and cues can easily be lost while passing through many steps.
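Below is the promised minimal, self-contained NumPy sketch of the scaled dot-product attention formula above. The softmax helper and the toy dimensions are our own illustrative choices, not part of any library API:

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax: shift by the max before exponentiating
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(Q, K, V):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)      # compatibility scores, scaled
    weights = softmax(scores, axis=-1)   # each row sums to 1
    return weights @ V                   # weighted sum of the values

# Toy usage: 4 positions with 8-dimensional Q/K/V (illustrative sizes)
rng = np.random.default_rng(0)
Q = rng.normal(size=(4, 8))
K = rng.normal(size=(4, 8))
V = rng.normal(size=(4, 8))
out = scaled_dot_product_attention(Q, K, V)
print(out.shape)  # (4, 8): one output vector per input position
```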
Due to these advantages, attention-based models such as Transformers have clearly outperformed architectures such as RNNs on various language tasks. Although they are particularly effective at learning from sequences (text), they have also been successful in the image domain. In the following article, we will review some of these approaches and discuss the challenges in applying attention to vision.

[1] S. Chaudhari and R. Ramanath, "An Attentive Survey of Attention Models," ACM Trans. Intell. Syst. Technol., 2021, doi: 10.1145/3465055.

[2] A. de S. Correia and E. L. Colombini, "Attention, please! A survey of neural attention models in deep learning," Mar. 2021. [Online]. Available: https://arxiv.org/abs/2103.16775v1.

[3] E. M. Macphail and J. J. Bolhuis, "The evolution of intelligence: adaptive specializations versus general process," Biol. Rev. Camb. Philos. Soc., vol. 76, no. 3, pp. 341–364, 2001, doi: 10.1017/S146479310100570X.

[4] G. Roth and U. Dicke, "Evolution of the brain and intelligence in primates," Prog. Brain Res., vol. 195, pp. 413–430, 2012, doi: 10.1016/B978-0-444-53860-4.00020-9.

[5] M. A. Hofman, "Evolution of the human brain: when bigger is better," Front. Neuroanat., vol. 8, Mar. 2014, doi: 10.3389/fnana.2014.00015.
[6] D. Bahdanau, K. Cho, and Y. Bengio, "Neural Machine Translation by Jointly Learning to Align and Translate," in 3rd Int. Conf. Learn. Represent. (ICLR), 2015. [Online]. Available: https://arxiv.org/abs/1409.0473v7.

[7] L. Weng, "Attention? Attention!" https://lilianweng.github.io/lil-log/2018/06/24/attention-attention.html (accessed Sep. 19, 2021).

[8] A. Vaswani et al., "Attention is all you need," in Advances in Neural Information Processing Systems, 2017, pp. 5999–6009.

[9] J. Alammar, "The Illustrated Transformer – Visualizing machine learning one concept at a time." https://jalammar.github.io/illustrated-transformer/ (accessed Sep. 19, 2021).