{"id":24850,"date":"2021-06-15T20:32:10","date_gmt":"2021-06-15T20:32:10","guid":{"rendered":"https:\/\/www.experfy.com\/blog\/?p=24850"},"modified":"2023-08-19T12:16:01","modified_gmt":"2023-08-19T12:16:01","slug":"using-responsible-ai-to-teach-the-golden-rule","status":"publish","type":"post","link":"https:\/\/www.experfy.com\/blog\/ai-ml\/using-responsible-ai-to-teach-the-golden-rule\/","title":{"rendered":"Using Responsible AI To Teach The Golden Rule"},"content":{"rendered":"\t\t<div data-elementor-type=\"wp-post\" data-elementor-id=\"24850\" class=\"elementor elementor-24850\" data-elementor-post-type=\"post\">\n\t\t\t\t\t\t<section class=\"has_eae_slider elementor-section elementor-top-section elementor-element elementor-element-86ee62d elementor-section-boxed elementor-section-height-default elementor-section-height-default\" data-id=\"86ee62d\" data-element_type=\"section\" data-e-type=\"section\">\n\t\t\t\t\t\t<div class=\"elementor-container elementor-column-gap-default\">\n\t\t\t\t\t<div class=\"has_eae_slider elementor-column elementor-col-100 elementor-top-column elementor-element elementor-element-6f80caa\" data-id=\"6f80caa\" data-element_type=\"column\" data-e-type=\"column\">\n\t\t\t<div class=\"elementor-widget-wrap elementor-element-populated\">\n\t\t\t\t\t\t<div class=\"elementor-element elementor-element-5957adf elementor-widget elementor-widget-text-editor\" data-id=\"5957adf\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\t<p>Business leaders have a delicate balancing act when it comes to AI. On one hand, <a href=\"https:\/\/get.oreilly.com\/ind_ai-adoption-in-the-enterprise-2020.html\" target=\"_blank\" rel=\"noreferrer noopener\">according to O\u2019Reilly, 85% of executives across 25 industries are tasked with either evaluating or deploying AI<\/a>. On the other hand, risks and unintended consequences continue to grow, from <a href=\"https:\/\/nyupress.org\/9781479837243\/algorithms-of-oppression\/\" target=\"_blank\" rel=\"noreferrer noopener\">Google search results showing offensively skewed results for \u201cblack girls\u201d<\/a>, to questions about the insurance startup Lemonade\u2019s <a href=\"https:\/\/www.vox.com\/recode\/22455140\/lemonade-insurance-ai-twitter\" target=\"_blank\" rel=\"noreferrer noopener\">use of data to make predictions against specific religious groups<\/a>. AI is hard enough from the dimensions of business value, technical feasibility, and cybersecurity. Knowing how to navigate all those challenges and still build responsible AI systems would seem to be an impossible balancing act!<\/p>\n\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-01e5cf5 elementor-widget elementor-widget-text-editor\" data-id=\"01e5cf5\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\t<p>Luckily, there\u2019s an easy way forward: focusing on the human angle. In this article we\u2019ll give business leaders a helpful primer on Responsible AI, through a combination of individual stories, technical risks, and best-practices moving forward. 
Looking at Responsible AI through one person's journey

An easy place to start is the film "Coded Bias", directed by Shalini Kantayya. The documentary had a limited original release at the Sundance Film Festival in 2020 but has since reached a much wider audience on Netflix. It opens with the story of Joy Buolamwini, then a graduate student at MIT, who discovered that facial recognition applications struggled to recognize faces of color. From there, it follows Joy's work and meets several leading voices calling attention to the risks of the misuse of AI, including Cathy O'Neil, Meredith Broussard, Deb Raji, Timnit Gebru, Silkie Carlo, and many others. It is a 90-minute introduction to the key issue at the heart of Responsible AI: innovation without regulation can reinforce existing racial, economic, and social disparities.
For a more technical view, extensive papers have been published on the risks of language models reinforcing harmful associations and stereotypes. A good place to start is the paper "On the Dangers of Stochastic Parrots" (https://dl.acm.org/doi/10.1145/3442188.3445922), which highlights not only the risks in the language models themselves but also the tendency of people to put undue faith in machine-produced output, a phenomenon known as automation bias. The risk is that people using language models in areas such as healthcare and cognitive therapy may give the programs a level of trust they have not earned.
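To make the "harmful associations" point concrete, here is a minimal sketch of how a practitioner might probe a masked language model for gendered occupational associations. It assumes the Hugging Face transformers library and the public bert-base-uncased checkpoint; the probe sentences are illustrative, not a standard benchmark.

```python
# Minimal sketch: probe a masked language model for gendered completions.
# Assumes the Hugging Face `transformers` library and the public
# `bert-base-uncased` checkpoint; probe sentences are illustrative only.
from transformers import pipeline

unmasker = pipeline("fill-mask", model="bert-base-uncased")

probes = [
    "The doctor said [MASK] would be late for the surgery.",
    "The nurse said [MASK] would be late for the shift.",
]

for sentence in probes:
    print(sentence)
    # The top-ranked tokens show which pronouns the model considers
    # likely -- a rough signal of the occupational stereotypes it absorbed.
    for result in unmasker(sentence, top_k=3):
        print(f"  {result['token_str']!r}: {result['score']:.3f}")
```

If the model consistently fills in "he" for one occupation and "she" for the other, that is exactly the kind of learned association the Stochastic Parrots paper warns about.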
Technical risks: Prediction vs Predetermination

At a more practical level, business managers need to understand a few structural limitations that AI faces at this stage of its development. When used to make decisions in a business context, AI will only be as good as its training data, which creates the risk that AI predicts a future that looks like the past. The most publicized example is an Amazon resume-screening model that down-weighted female candidates because engineers had historically been predominantly male (https://www.reuters.com/article/us-amazon-com-jobs-automation-insight/amazon-scraps-secret-ai-recruiting-tool-that-showed-bias-against-women-idUSKCN1MK08G): the model misidentified gender as a predictive factor in what makes a "good" engineering candidate and was eventually scrapped. The same dynamic drives facial recognition models trained on predominantly white faces that fail to recognize faces of color, as documented in "Coded Bias".
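To see the mechanism, here is a small, self-contained sketch using synthetic data (not Amazon's) in which historical hiring labels are skewed against one group, so an off-the-shelf scikit-learn model learns the skew as if it were signal.

```python
# Toy illustration of "predicting a future that looks like the past":
# synthetic hiring data (not Amazon's) with historically skewed labels.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

skill = rng.normal(size=n)             # the factor that *should* matter
group = rng.integers(0, 2, size=n)     # 1 = historically under-hired group

# Historical decisions: driven by skill, but biased against group 1.
hired = (skill - 0.8 * group + rng.normal(scale=0.5, size=n)) > 0

model = LogisticRegression().fit(np.column_stack([skill, group]), hired)
print("skill coefficient:", round(model.coef_[0][0], 2))  # large, positive
print("group coefficient:", round(model.coef_[0][1], 2))  # negative: the
# model has learned the historical bias and will reproduce it going forward.
```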
Another facet of this issue is that patterns in the data can predetermine the outcome. A well-known example is the controversy around the COMPAS model (https://en.wikipedia.org/wiki/COMPAS_(software)), used in several US states to predict the risk that a prison inmate who is up for parole will commit another crime and return to prison. While the model did not include race as a variable, it included several socio-economic factors that were highly correlated with race and weighted toward higher risk scores. An investigation of the model by ProPublica found that, as a result, "the model was nearly twice as likely to categorize black inmates who did not commit another crime as being high risk, while it was much more likely to categorize white inmates as low risk, even those who went on to commit additional crimes." Since then, a widespread debate has kicked off (https://hdsr.mitpress.mit.edu/pub/hzwo7ax4/release/4) regarding the methodology around COMPAS and how the concept of "fairness" can be applied to such models (http://www.experfy.com/blog/ai-ml/programming-fairness-in-algorithms/).
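The ProPublica finding is, at bottom, a comparison of error rates across groups. Here is a minimal sketch of that comparison; the numbers below are invented for illustration, not the actual COMPAS data.

```python
# Sketch of the error-rate comparison behind the ProPublica analysis:
# per-group false positive rates. The data below are invented, not COMPAS's.
import numpy as np

def false_positive_rate(y_true, y_pred):
    """Share of actual non-reoffenders that the model flagged as high risk."""
    negatives = (y_true == 0)
    return (y_pred[negatives] == 1).mean()

y_true = np.array([0, 0, 1, 0, 1, 0, 0, 1, 0, 0])  # 1 = reoffended
y_pred = np.array([1, 0, 1, 1, 1, 0, 0, 0, 0, 1])  # 1 = scored high risk
group  = np.array(["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"])

for g in ("A", "B"):
    mask = (group == g)
    print(g, false_positive_rate(y_true[mask], y_pred[mask]))
# A large gap between the two rates is the disparity ProPublica reported.
```

Equalizing false positive rates is only one of several competing fairness criteria; much of the debate linked above stems from the fact that these criteria generally cannot all be satisfied at once.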
Best practices for the Responsible AI future

Today, the industry stands at a crossroads, with a growing set of tools, processes, and datasets emerging to help AI practitioners build systems with responsibility as a core component.

- Frameworks: Numerous frameworks have been created around the key themes of Fair, Accountable, Transparent, and Ethical (FATE) (https://www.microsoft.com/en-us/ai/responsible-ai?activetab=pivot1%3aprimaryr6). Industry leaders from Microsoft to Google to Salesforce to NVIDIA are publishing their own frameworks for applying these principles (https://blogs.nvidia.com/blog/2020/03/25/salesforce-ethical-ai/). On the business side, companies such as PwC (https://www.pwc.com/gx/en/issues/data-and-analytics/artificial-intelligence/what-is-responsible-ai.html) and BCG (https://www.bcg.com/publications/2020/six-steps-for-socially-responsible-artificial-intelligence) have created their own frameworks to help decision-makers navigate this complex landscape.

- Explainability and reproducibility: On a technical level, a range of tools has emerged to ensure AI systems are built transparently, from Python libraries that use pixel attribution to explain deep-learning computer vision models (https://christophm.github.io/interpretable-ml-book/pixel-attribution.html) to ML platforms like Weights & Biases (https://www.wandb.com/) that help practitioners see where the "signal" in their experiments comes from across datasets, models, and hyperparameters.

- Datasets: There are also labelled datasets and methods to help train models for equitable outcomes, from the Bolukbasi framework for identifying gender bias in word embeddings (https://papers.nips.cc/paper/2016/file/a486cd07e4ac3d270571622f4f316ec5-Paper.pdf) to Facebook's recent release of a dataset aiming to correct skin-tone bias in facial recognition by including racially diverse audiences in the training set (https://ai.facebook.com/blog/new-way-to-assess-ai-bias-in-object-recognition-systems/). A small sketch of the word-embedding idea follows this list.
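As noted in the last bullet, here is a minimal sketch of the idea behind the Bolukbasi work: derive a gender direction from a gendered word pair and project occupation words onto it. The tiny four-dimensional vectors are invented for illustration; a real analysis would use trained embeddings such as word2vec or GloVe.

```python
# Minimal sketch of the Bolukbasi idea: define a gender direction from a
# gendered word pair and project occupation words onto it. The hand-made
# 4-d vectors are illustrative; real work uses trained embeddings
# (word2vec, GloVe, etc.).
import numpy as np

embeddings = {  # invented vectors, for illustration only
    "he":       np.array([ 1.0, 0.1, 0.3, 0.0]),
    "she":      np.array([-1.0, 0.1, 0.3, 0.0]),
    "engineer": np.array([ 0.4, 0.9, 0.2, 0.1]),
    "nurse":    np.array([-0.5, 0.8, 0.1, 0.2]),
}

# Gender direction: normalized difference of a gendered pair.
gender_dir = embeddings["he"] - embeddings["she"]
gender_dir /= np.linalg.norm(gender_dir)

for word in ("engineer", "nurse"):
    projection = embeddings[word] @ gender_dir
    # A projection far from zero means the occupation leans toward one
    # end of the he-she axis -- the bias the Bolukbasi method removes.
    print(f"{word}: {projection:+.2f}")
```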
elementor-size-default\">In Conclusion<\/h2>\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t\t<\/div>\n\t\t<\/div>\n\t\t\t\t\t<\/div>\n\t\t<\/section>\n\t\t\t\t<section class=\"has_eae_slider elementor-section elementor-top-section elementor-element elementor-element-0da968d elementor-section-boxed elementor-section-height-default elementor-section-height-default\" data-id=\"0da968d\" data-element_type=\"section\" data-e-type=\"section\">\n\t\t\t\t\t\t<div class=\"elementor-container elementor-column-gap-default\">\n\t\t\t\t\t<div class=\"has_eae_slider elementor-column elementor-col-100 elementor-top-column elementor-element elementor-element-fc46c3d\" data-id=\"fc46c3d\" data-element_type=\"column\" data-e-type=\"column\">\n\t\t\t<div class=\"elementor-widget-wrap elementor-element-populated\">\n\t\t\t\t\t\t<div class=\"elementor-element elementor-element-57cc740 elementor-widget elementor-widget-text-editor\" data-id=\"57cc740\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\t<p>We should all remember that Responsible AI is more than a set of technical best practices; it\u2019s a commitment to human principles of empathy, responsibility, and equitable treatment. AI is still in its adolescence. But, just as we would do with a human child, we have the opportunity to teach it the golden rule: do unto others as you would have them do unto you.<\/p>\n\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t\t<\/div>\n\t\t<\/div>\n\t\t\t\t\t<\/div>\n\t\t<\/section>\n\t\t\t\t<\/div>\n\t\t","protected":false},"excerpt":{"rendered":"<p>Business leaders have a delicate balancing act when it comes to AI. On one hand, according to O\u2019Reilly, 85% of executives across 25 industries are tasked with either evaluating or deploying AI. On the other hand, risks and unintended consequences continue to grow, from Google search results showing offensively skewed results for \u201cblack girls\u201d, to<\/p>\n","protected":false},"author":1163,"featured_media":24921,"comment_status":"open","ping_status":"open","sticky":false,"template":"single-post-2.php","format":"standard","meta":{"content-type":"","footnotes":""},"categories":[183],"tags":[97],"ppma_author":[3664],"class_list":["post-24850","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-ai-ml","tag-artificial-intelligence"],"authors":[{"term_id":3664,"user_id":1163,"is_guest":0,"slug":"alexander-lissstern-nyu-edu","display_name":"Alexander Liss","avatar_url":"https:\/\/www.experfy.com\/blog\/wp-content\/uploads\/2021\/06\/Alexamnder-Liss-150x150.png","user_url":"","last_name":"Liss","first_name":"Alexander","job_title":"","description":"Alexander Liss is Vice President of Decision Science for HLK. On a day-to-day basis, he uses ML and analytics to turn business problems into opportunities for innovation and growth, and his long-term vision is to use AI &amp; ML for double-bottom line impact on both social good and business profit.\r\n\r\nAlex has a diverse background including majoring in Japanese literature, living in Japan for a few years, working in brand strategy, primary research, digital analytics, and now marketing sciences for large-scale, enterprise clients. 
He also serves as an advisor in the health-tech space, helping early-stage startups develop solutions that use technology in responsible and socially positive ways. He is currently refreshing his computer science fundamentals through Codecademy and preparing to pursue a PhD in quantum physics after his children go to college.