{"id":1688,"date":"2019-05-09T03:23:29","date_gmt":"2019-05-09T03:23:29","guid":{"rendered":"http:\/\/kusuaks7\/?p=1293"},"modified":"2023-07-05T11:19:43","modified_gmt":"2023-07-05T11:19:43","slug":"why-ai-assistants-cant-be-robots-for-now","status":"publish","type":"post","link":"https:\/\/www.experfy.com\/blog\/ai-ml\/why-ai-assistants-cant-be-robots-for-now\/","title":{"rendered":"Why AI assistants can\u2019t be robots (for now)"},"content":{"rendered":"<p>\u201cAlexa, are you ready to have a body?\u201d<\/p>\n<p>Steady advances in artificial intelligence and\u00a0<a href=\"https:\/\/bdtechtalks.com\/2018\/02\/20\/ai-machine-learning-nlg-nlp\/\" rel=\"noopener\">natural language <\/a>processing have\u00a0made digital assistants such as Amazon\u2019s Alexa increasingly capable of performing complicated voice commands under different circumstances.<\/p>\n<p>But does it mean that our digital assistants are ready to escape the confines of smartphones, smart speakers and computers (and a\u00a0<a href=\"https:\/\/www.theverge.com\/2019\/1\/6\/18170575\/kohler-konnect-bathroom-smart-gadgets-numi-intelligent-toilet-ces-2019\" target=\"_blank\" rel=\"noopener noreferrer\">bunch of weird gadgets<\/a>)?<\/p>\n<p>\u201cThe only way to make smart assistants really smart is to give it eyes and let it explore the world,\u201d Rohit Prasad, head scientist of the Alexa artificial intelligence group at Amazon, recently said at the MIT Technology Review\u2019s\u00a0<a href=\"https:\/\/www.technologyreview.com\/s\/613199\/alexa-needs-a-robot-body-to-escape-the-confines-of-todays-ai\/\" target=\"_blank\" rel=\"noopener noreferrer\">EmTech Digital conference<\/a>.<\/p>\n<p>Prasad didn\u2019t explicitly say what it means to \u201cgive [Alexa] eyes and let it explore the world,\u201d the statement strongly hints at an Alexa-powered robot (at least that\u2019s how MIT Tech Review has interpreted his words). 
While the idea of putting a face on the voices of Alexa, Siri and Cortana sounds appealing, the truth is that with today\u2019s AI technology, such an idea is doomed to fail.<\/p>\n<h2>The failure of robot projects<\/h2>\n<p>Jibo, the \u201cfirst social robot for the home,\u201d\u00a0<a href=\"http:\/\/he%20world%27s%20first%20social%20robot%20for%20the%20home\/\" target=\"_blank\" rel=\"noopener noreferrer\">recently shut down<\/a>. Mayfield Robotics, the manufacturer of the Kuri home robot,\u00a0<a href=\"https:\/\/www.theverge.com\/circuitbreaker\/2018\/8\/21\/17765330\/mayfield-robotics-kuri-robot-shutting-down\" target=\"_blank\" rel=\"noopener noreferrer\">shut down in August<\/a>. In October, Boston-based Rethink Robotics had to\u00a0<a href=\"https:\/\/www.bostonglobe.com\/business\/2018\/10\/03\/robot-pioneer-rethink-shuts-down\/NlzXXX6NimgyDZYao0TlfO\/story.html\" target=\"_blank\" rel=\"noopener noreferrer\">close shop<\/a>\u00a0because they couldn\u2019t find a working business model for their famous Baxter and Sawyer robots.<\/p>\n<p>Boston Dynamics, the company that became famous with the YouTube videos of its robots performing incredible feats, rarely shows the human operators who are controlling and guiding its robots.\u00a0<a href=\"https:\/\/d.docs.live.net\/70124ca7b3f5654b\/Articles\/google%20acquires%20boston%20dynamics\" target=\"_blank\" rel=\"noopener noreferrer\">Google acquired Boston Dynamics in 2013<\/a>, but then\u00a0<a href=\"https:\/\/www.businessinsider.com\/why-softbank-bought-boston-dynamics-google-alphabet-robots-2017-6\" target=\"_blank\" rel=\"noopener noreferrer\">sold it to Japanese tech giant <\/a>SoftBank in\u00a02017 because it didn\u2019t fit in its strategy. 
Boston Dynamics is still\u00a0<a href=\"https:\/\/www.nytimes.com\/2018\/09\/22\/technology\/boston-dynamics-robots.html\" target=\"_blank\" rel=\"noopener noreferrer\">struggling to find real-world problems to solve<\/a>\u00a0with its robots.<\/p>\n<p>These are just a few of a string of failed robot projects, with probably more to come. To be clear, Alexa is backed by one of the largest and richest tech companies in the world. Amazon sits on a wealth of data, money, and experience in creating tech products. But will Amazon\u2019s virtually limitless resources be enough to overcome the challenges of creating an Alexa-backed robot?<\/p>\n<h2>The navigation challenges of robots<\/h2>\n<figure id=\"attachment_4144\" aria-describedby=\"caption-attachment-4144\"><img fetchpriority=\"high\" decoding=\"async\" src=\"https:\/\/i2.wp.com\/bdtechtalks.com\/wp-content\/uploads\/2019\/01\/explaianble-AI-robot-arm.jpg?resize=696%2C464&amp;ssl=1\" sizes=\"(max-width: 696px) 100vw, 696px\" srcset=\"https:\/\/i2.wp.com\/bdtechtalks.com\/wp-content\/uploads\/2019\/01\/explaianble-AI-robot-arm.jpg?w=3000&amp;ssl=1 3000w, https:\/\/i2.wp.com\/bdtechtalks.com\/wp-content\/uploads\/2019\/01\/explaianble-AI-robot-arm.jpg?resize=300%2C200&amp;ssl=1 300w, https:\/\/i2.wp.com\/bdtechtalks.com\/wp-content\/uploads\/2019\/01\/explaianble-AI-robot-arm.jpg?resize=768%2C512&amp;ssl=1 768w, https:\/\/i2.wp.com\/bdtechtalks.com\/wp-content\/uploads\/2019\/01\/explaianble-AI-robot-arm.jpg?resize=1024%2C683&amp;ssl=1 1024w, https:\/\/i2.wp.com\/bdtechtalks.com\/wp-content\/uploads\/2019\/01\/explaianble-AI-robot-arm.jpg?resize=696%2C464&amp;ssl=1 696w, https:\/\/i2.wp.com\/bdtechtalks.com\/wp-content\/uploads\/2019\/01\/explaianble-AI-robot-arm.jpg?resize=1068%2C712&amp;ssl=1 1068w, https:\/\/i2.wp.com\/bdtechtalks.com\/wp-content\/uploads\/2019\/01\/explaianble-AI-robot-arm.jpg?resize=630%2C420&amp;ssl=1 630w, 
https:\/\/i2.wp.com\/bdtechtalks.com\/wp-content\/uploads\/2019\/01\/explaianble-AI-robot-arm.jpg?w=1392&amp;ssl=1 1392w, https:\/\/i2.wp.com\/bdtechtalks.com\/wp-content\/uploads\/2019\/01\/explaianble-AI-robot-arm.jpg?w=2088&amp;ssl=1 2088w\" alt=\"Robot arm explainable AI\" width=\"696\" height=\"464\" data-attachment-id=\"4144\" data-comments-opened=\"1\" data-image-description=\"\" data-image-meta=\"{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;Close up of robotic arm with red question mark against modern examing room 3d&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;Composite image of close up of robotic arm with red question mark 3d&quot;,&quot;orientation&quot;:&quot;1&quot;}\" data-image-title=\"Robot arm explainable AI\" data-large-file=\"https:\/\/i2.wp.com\/bdtechtalks.com\/wp-content\/uploads\/2019\/01\/explaianble-AI-robot-arm.jpg?fit=696%2C464&amp;ssl=1\" data-lazy-loaded=\"1\" data-medium-file=\"https:\/\/i2.wp.com\/bdtechtalks.com\/wp-content\/uploads\/2019\/01\/explaianble-AI-robot-arm.jpg?fit=300%2C200&amp;ssl=1\" data-orig-file=\"https:\/\/i2.wp.com\/bdtechtalks.com\/wp-content\/uploads\/2019\/01\/explaianble-AI-robot-arm.jpg?fit=3000%2C2000&amp;ssl=1\" data-orig-size=\"3000,2000\" data-permalink=\"https:\/\/bdtechtalks.com\/composite-image-of-close-up-of-robotic-arm-with-red-question-mark-3d\/\" data-recalc-dims=\"1\" \/><\/figure>\n<p style=\"text-align: center;\">Source:\u00a0Depositphotos<\/p>\n<p>Teaching robots to navigate open environments is very difficult, even when equipped with the most advanced AI technologies. 
Any number of things can happen, and unless the AI powering the robot has an abstract and high-level knowledge of the world, it won\u2019t be able to carry out its tasks without the help of humans.<\/p>\n<p>That is\u00a0<a href=\"https:\/\/bdtechtalks.com\/2018\/02\/27\/limits-challenges-deep-learning-gary-marcus\/\" rel=\"noopener\">exactly what contemporary AI lacks<\/a>.<\/p>\n<p>Robots and self-driving cars use\u00a0<a href=\"https:\/\/bdtechtalks.com\/2019\/01\/14\/what-is-computer-vision\/\" rel=\"noopener\">computer vision<\/a>\u00a0to analyze their surroundings and navigate the world. Computer vision is the science that tries to replicate the workings of the human vision system and helps software make sense of the content of images and video.<\/p>\n<p>At the moment, the most popular AI technique used in computer vision is\u00a0<a href=\"https:\/\/bdtechtalks.com\/2019\/02\/15\/what-is-deep-learning-neural-networks\/\" rel=\"noopener\">deep learning<\/a>. Deep learning algorithms ingest a huge number of examples to develop their behavior. For instance, a deep learning model meant to help a robot navigate homes will have to see videos and pictures of different room types, different decorations, furniture, tables, carpets\u2026 to know how to find its way around different obstacles.<\/p>\n<p>Even when trained with millions of samples, a deep learning model will not have a general understanding of what a room is, why there\u2019s a table in the kitchen, why there are chairs around tables, etc. It will just have statistical knowledge of the type of images it should see around a house, which ones it can go over, which ones it needs to avoid, and so on.<\/p>\n<p>If the robot faces a new setting, a new object, or a color composition it has never seen before, its AI will not know what to do and will act in an erratic manner. 
A short-term fix is to just throw more data at the problem and continue to train the AI models with all sorts of new kinds of samples.<\/p>\n<p>Amazon sits on a vast sea of data that might be able to help train the Alexa robot\u2019s AI algorithms. It can also tap into the vast resources of its Mechanical Turk platform to crowdsource some of the training work. But that will not solve the problem of giving AI a general understanding of the world, objects and relations between them.<\/p>\n<p>Without that general understanding, even the most sophisticated AI models run into \u201cedge cases,\u201d scenarios that the AI has not been trained for. This is why it\u2019s so hard to design\u00a0<a href=\"https:\/\/bdtechtalks.com\/2018\/09\/17\/self-driving-cars-ai-computer-vision\/\" rel=\"noopener\">robots and self-driving cars that can navigate open environments<\/a>.<\/p>\n<p>Some companies use complementary technologies such as sensors, radars and lidars to enable robots to map their surroundings. These hardware additions reduce error rates (and raise the costs). But even a perfect 3D mapping of its surroundings can cause errors if the AI doesn\u2019t have a logical understanding of its environment.<\/p>\n<p>Alexa will be facing an even bigger problem if it wants to handle objects as well as navigate environments. Robots have historically been bad at handling objects, except in very controlled environments. In recent years, companies have used advanced AI techniques such as reinforcement learning to train robot hands to carry out different tasks by themselves. 
But such methods require\u00a0<a href=\"https:\/\/www.technologyreview.com\/s\/611724\/artificial-intelligence-driven-robot-hand-spends-a-hundred-years-teaching-itself-to-rotate\/\" target=\"_blank\" rel=\"noopener noreferrer\">massive amounts of data and compute resources<\/a>\u00a0(again something that Amazon has in abundance) and have yet to fulfill real-world use cases.<\/p>\n<h2>The challenges of interacting with AI assistants<\/h2>\n<p style=\"text-align: center;\"><img decoding=\"async\" src=\"https:\/\/i2.wp.com\/bdtechtalks.com\/wp-content\/uploads\/2019\/04\/voice-command.jpg?resize=696%2C464&amp;ssl=1\" sizes=\"(max-width: 696px) 100vw, 696px\" srcset=\"https:\/\/i2.wp.com\/bdtechtalks.com\/wp-content\/uploads\/2019\/04\/voice-command.jpg?w=4800&amp;ssl=1 4800w, https:\/\/i2.wp.com\/bdtechtalks.com\/wp-content\/uploads\/2019\/04\/voice-command.jpg?resize=300%2C200&amp;ssl=1 300w, https:\/\/i2.wp.com\/bdtechtalks.com\/wp-content\/uploads\/2019\/04\/voice-command.jpg?resize=768%2C512&amp;ssl=1 768w, https:\/\/i2.wp.com\/bdtechtalks.com\/wp-content\/uploads\/2019\/04\/voice-command.jpg?resize=1024%2C683&amp;ssl=1 1024w, https:\/\/i2.wp.com\/bdtechtalks.com\/wp-content\/uploads\/2019\/04\/voice-command.jpg?resize=696%2C464&amp;ssl=1 696w, https:\/\/i2.wp.com\/bdtechtalks.com\/wp-content\/uploads\/2019\/04\/voice-command.jpg?resize=1068%2C712&amp;ssl=1 1068w, https:\/\/i2.wp.com\/bdtechtalks.com\/wp-content\/uploads\/2019\/04\/voice-command.jpg?resize=630%2C420&amp;ssl=1 630w, https:\/\/i2.wp.com\/bdtechtalks.com\/wp-content\/uploads\/2019\/04\/voice-command.jpg?w=1392&amp;ssl=1 1392w, https:\/\/i2.wp.com\/bdtechtalks.com\/wp-content\/uploads\/2019\/04\/voice-command.jpg?w=2088&amp;ssl=1 2088w\" alt=\"Sound wave illustration\" width=\"696\" height=\"464\" data-attachment-id=\"4646\" data-comments-opened=\"1\" data-image-description=\"\" 
data-image-meta=\"{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;Equalizer sound wave background theme. Colour illustration.&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;Sound wave illustration&quot;,&quot;orientation&quot;:&quot;1&quot;}\" data-image-title=\"Sound wave illustration\" data-large-file=\"https:\/\/i2.wp.com\/bdtechtalks.com\/wp-content\/uploads\/2019\/04\/voice-command.jpg?fit=696%2C464&amp;ssl=1\" data-lazy-loaded=\"1\" data-medium-file=\"https:\/\/i2.wp.com\/bdtechtalks.com\/wp-content\/uploads\/2019\/04\/voice-command.jpg?fit=300%2C200&amp;ssl=1\" data-orig-file=\"https:\/\/i2.wp.com\/bdtechtalks.com\/wp-content\/uploads\/2019\/04\/voice-command.jpg?fit=4800%2C3200&amp;ssl=1\" data-orig-size=\"4800,3200\" data-permalink=\"https:\/\/bdtechtalks.com\/2019\/04\/04\/vui-consumer-experience\/sound-wave-illustration\/\" data-recalc-dims=\"1\" \/><\/p>\n<p>Now let\u2019s say Amazon manages to create an Alexa robot that can \u201cexplore the world\u201d and has an AI that can navigate different environments with acceptable accuracy most of the time, and only makes stupid mistakes every now and then.<\/p>\n<p>The next question will be, what should this robot do?<\/p>\n<p>Right now, Alexa has tens of thousands of skills, but most of them are simple tasks such as playing music, answering queries, and interacting with smart home devices. These are the kind of things you could expect from an inanimate object sitting on your table.<\/p>\n<p>But our expectations will certainly change when Alexa escapes the shell of the Echo smart speaker and finds its own body. We will expect our AI assistant to manifest human-like behavior and intelligence. 
We will expect it to have many of the cognitive skills that we take for granted.<\/p>\n<p>To be clear,\u00a0<a href=\"https:\/\/bdtechtalks.com\/2018\/09\/03\/challenges-of-smart-speakers-ai-assistants\/\" rel=\"noopener\">AI assistants are already struggling<\/a>\u00a0to perform tasks that require multiple steps. Some of those problems are due to the limits of a voice-only interface. For example, smart speakers are very limited in helping users browse and choose between different options. They\u2019re also not very good at going back and forth between multiple steps. That\u2019s why tasks like playing music and setting timers remain the more popular use cases for smart speakers.<\/p>\n<p>But the bigger problem for digital assistants is\u00a0<a href=\"https:\/\/bdtechtalks.com\/2018\/10\/22\/ai-deep-learning-human-language\/\" rel=\"noopener\">the limits of contemporary AI in understanding and processing human language<\/a>. Advances in deep learning and neural networks have created breakthroughs in automated speech recognition and natural language processing. AI is now better than ever at transforming speech to text and mapping text to commands.<\/p>\n<p>But AI is still struggling to understand the context and meaning of words. At the heart of the most complicated language processing AI algorithms is still statistics. Your smart speaker will be able to respond to different variations of \u201cWhat is the weather tomorrow?\u201d \u201cHow\u2019s the weather on Monday?\u201d and \u201cWill it rain next week?\u201d But that is only because it has seen thousands of similar sentences and the corresponding function they must perform. 
It has no understanding of the concepts of weather, rain and weekday.<\/p>\n<p>That\u2019s why if you suddenly become distracted in the middle of a voice command to your AI assistant and say, \u201cAlexa, how\u2019s the weather on\u2026 umm\u2026 let me see\u2026 Monday\u2014no wait, Tuesday?\u201d your smart speaker will not be able to respond. But for a human, it would be a no-brainer.<\/p>\n<p>Give Alexa a body, limbs and eyes to \u201cexperience\u201d the world, and maybe it\u2019ll be able to remove some of the confusion from the user experience. But the language understanding problem will not go away. Meanwhile, we have a tendency to\u00a0<a href=\"https:\/\/bdtechtalks.com\/2019\/01\/02\/humanizing-ai-deep-learning-alphazero\/\" rel=\"noopener\">anthropomorphize anything that remotely behaves or looks like humans<\/a>. That means our expectations of AI assistants will only increase when they enter their robot shells, especially since we\u2019ll be forking over a larger sum to purchase them.<\/p>\n<p>But what\u2019s clear is that there\u2019s a\u00a0<a href=\"https:\/\/bdtechtalks.com\/2018\/08\/21\/artificial-intelligence-vs-human-mind-brain\/\" rel=\"noopener\">stark difference between AI and human intelligence<\/a>, and no matter how human-like Alexa becomes, it will not be able to fulfill our expectations.<\/p>\n<h2>What\u2019s the optimal use for AI assistants?<\/h2>\n<p>Maybe someday, scientists will be able to crack the code of\u00a0<a href=\"https:\/\/bdtechtalks.com\/2017\/05\/12\/what-is-narrow-general-and-super-artificial-intelligence\/\" rel=\"noopener\">artificial general intelligence<\/a>\u00a0(AGI), the kind of AI that will be able to think like humans, without requiring huge amounts of examples and a ton of computing power (not everyone is a fan of AGI). 
Deep learning, machine learning and other AI technologies we currently have are considered narrow artificial intelligence, which means they can perform one specific task very well, but aren\u2019t very good at general problem-solving or transferring their knowledge to other domains.<\/p>\n<p>Until such time (if that time ever comes) that humankind manages to create general AI, we\u2019ll have to find ways to put our digital assistants to efficient use. And key to that will be to recognize the limits of artificial intelligence and\u00a0<a href=\"https:\/\/bdtechtalks.com\/2018\/04\/23\/strong-ai-vs-weak-ai-deep-learning\/\" rel=\"noopener\">focus on putting narrow AI to good use<\/a>.<\/p>\n<p>What does this mean for digital assistants like Alexa, Siri and Cortana? Here are two scenarios that work best with current AI technology.<\/p>\n<h2>The narrow AI approach<\/h2>\n<p>The proposition of having an Alexa robot is something that will test the limits of AI. It would be a single AI-powered device expected to perform thousands of tasks. The owner of the robot would have no way of knowing what the device can and can\u2019t do. There\u2019s a lot of room for confusion and errors.<\/p>\n<p>The narrow AI approach is to have multiple Alexa-powered devices that can perform specific tasks. This is something that Amazon has already tested successfully. When speaking to a light bulb or a\u00a0<a href=\"https:\/\/bdtechtalks.com\/2018\/11\/19\/amazon-alexa-natural-language-context\/\" rel=\"noopener\">microwave oven<\/a>, you have a pretty clear idea of what you can and can\u2019t say to it. 
The idea would be to see more AI-powered gadgets in homes, offices and cars.<\/p>\n<p>Instead of a physically present robot, Alexa would be an omnipresent AI assistant that would be incorporated into all of your devices and would be able to receive and execute commands for each specific device.<\/p>\n<p>From a functional standpoint, this approach would work within the boundaries of current AI technology. But it isn\u2019t a perfect solution. At the very least, AI-powered devices would entail\u00a0<a href=\"https:\/\/bdtechtalks.com\/2016\/08\/26\/machine-learning-has-a-privacy-problem\/\" rel=\"noopener\">privacy concerns<\/a>, especially since tech giants don\u2019t have a brilliant record when it comes to making responsible use of customer data.<\/p>\n<h2>The augmented intelligence approach<\/h2>\n<p>An alternative way to think about AI, which has become popular in the past few years, is to consider it as a complement to, rather than a replacement for, human intelligence and cognitive effort. Known as\u00a0<a href=\"https:\/\/bdtechtalks.com\/2017\/12\/04\/what-is-the-difference-between-ai-and-augmented-intelligence\/\" rel=\"noopener\">augmented intelligence<\/a>, this approach looks for ways AI can help humans better perform tasks by automating some of the steps, not the entire process.<\/p>\n<p>One of the areas where AI assistants can provide augmented intelligence is AR headsets. When using augmented reality headsets, users don\u2019t have access to rich user interfaces to interact with applications.\u00a0<a href=\"https:\/\/bdtechtalks.com\/2018\/08\/13\/augmented-reality-artificial-intelligence-assistants\/\" rel=\"noopener\">This is where a voice-enabled AI assistant can help a lot<\/a>\u00a0by relieving some of the user\u2019s cognitive burden. For instance, users can query for information while using the headset.<\/p>\n<p>AR headsets also enable better cooperation between humans and AI. 
Instead of exploring the world for itself, the AI assistant would be able to view it\u00a0<a href=\"https:\/\/bdtechtalks.com\/2017\/01\/05\/what-is-eye-tracking-technology\/\" rel=\"noopener\">through the eyes of the user<\/a>\u00a0and better interact with the surrounding world and respond to commands.<\/p>\n<p>Magic Leap, the company behind the famous namesake mixed reality headset, is contemplating\u00a0<a href=\"https:\/\/www.theverge.com\/2018\/8\/8\/17662040\/magic-leap-one-creator-edition-preview-mixed-reality-glasses-launch\" target=\"_blank\" rel=\"noopener noreferrer\">creating AI assistants to go with its devices<\/a>.<\/p>\n<h2>The robots are not coming\u2014yet<\/h2>\n<p style=\"text-align: center;\"><img decoding=\"async\" src=\"https:\/\/i2.wp.com\/bdtechtalks.com\/wp-content\/uploads\/2019\/04\/Robots.jpg?resize=696%2C467&amp;ssl=1\" sizes=\"(max-width: 696px) 100vw, 696px\" srcset=\"https:\/\/i2.wp.com\/bdtechtalks.com\/wp-content\/uploads\/2019\/04\/Robots.jpg?w=2181&amp;ssl=1 2181w, https:\/\/i2.wp.com\/bdtechtalks.com\/wp-content\/uploads\/2019\/04\/Robots.jpg?resize=300%2C201&amp;ssl=1 300w, https:\/\/i2.wp.com\/bdtechtalks.com\/wp-content\/uploads\/2019\/04\/Robots.jpg?resize=768%2C516&amp;ssl=1 768w, https:\/\/i2.wp.com\/bdtechtalks.com\/wp-content\/uploads\/2019\/04\/Robots.jpg?resize=1024%2C687&amp;ssl=1 1024w, https:\/\/i2.wp.com\/bdtechtalks.com\/wp-content\/uploads\/2019\/04\/Robots.jpg?resize=696%2C467&amp;ssl=1 696w, https:\/\/i2.wp.com\/bdtechtalks.com\/wp-content\/uploads\/2019\/04\/Robots.jpg?resize=1068%2C717&amp;ssl=1 1068w, https:\/\/i2.wp.com\/bdtechtalks.com\/wp-content\/uploads\/2019\/04\/Robots.jpg?resize=626%2C420&amp;ssl=1 626w, https:\/\/i2.wp.com\/bdtechtalks.com\/wp-content\/uploads\/2019\/04\/Robots.jpg?w=1392&amp;ssl=1 1392w, https:\/\/i2.wp.com\/bdtechtalks.com\/wp-content\/uploads\/2019\/04\/Robots.jpg?w=2088&amp;ssl=1 2088w\" alt=\"robot\" width=\"696\" height=\"467\" data-attachment-id=\"4668\" 
data-comments-opened=\"1\" data-image-description=\"\" data-image-meta=\"{&quot;aperture&quot;:&quot;8&quot;,&quot;credit&quot;:&quot;charles taylor&quot;,&quot;camera&quot;:&quot;Canon EOS 7D&quot;,&quot;caption&quot;:&quot;retro robot toy group&quot;,&quot;created_timestamp&quot;:&quot;1312313189&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;20&quot;,&quot;iso&quot;:&quot;100&quot;,&quot;shutter_speed&quot;:&quot;0.025&quot;,&quot;title&quot;:&quot;robot&quot;,&quot;orientation&quot;:&quot;1&quot;}\" data-image-title=\"robot\" data-large-file=\"https:\/\/i2.wp.com\/bdtechtalks.com\/wp-content\/uploads\/2019\/04\/Robots.jpg?fit=696%2C467&amp;ssl=1\" data-lazy-loaded=\"1\" data-medium-file=\"https:\/\/i2.wp.com\/bdtechtalks.com\/wp-content\/uploads\/2019\/04\/Robots.jpg?fit=300%2C201&amp;ssl=1\" data-orig-file=\"https:\/\/i2.wp.com\/bdtechtalks.com\/wp-content\/uploads\/2019\/04\/Robots.jpg?fit=2181%2C1464&amp;ssl=1\" data-orig-size=\"2181,1464\" data-permalink=\"https:\/\/bdtechtalks.com\/robot\/\" data-recalc-dims=\"1\" \/><\/p>\n<p>We humans like to take cues from nature when we want to invent new things. But experience and history show that we usually end up taking a different course: Planes fly, but they don\u2019t flap their wings, and cars look nothing like horses.<\/p>\n<p>Thinking about human-like robots is nice, but we must also acknowledge that replicating all the functionalities of the human brain, which is perhaps the most complex creation of nature,\u00a0<a href=\"https:\/\/bdtechtalks.com\/2019\/03\/25\/richard-sutton-artificial-intelligence-research\/\" rel=\"noopener\">is all but impossible<\/a>. 
So Alexa and other digital assistants will find new ways to make our lives easier, but they may never have their own human-like bodies.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Steady advances in artificial intelligence and&nbsp;natural language processing have made digital assistants increasingly capable of performing complicated voice commands under different circumstances. But does it mean that our digital assistants are ready to escape the confines of smartphones, smart speakers and computers and a&nbsp;bunch of weird gadgets? The only way to make smart assistants really smart is to give it eyes and let it explore the world. While the idea of putting a face on the voices of digital assistants sounds appealing, the truth is that with today&rsquo;s AI technology, such an idea is doomed to fail.<\/p>\n","protected":false},"author":109,"featured_media":2688,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"content-type":"","footnotes":""},"categories":[183],"tags":[97],"ppma_author":[1946],"class_list":["post-1688","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-ai-ml","tag-artificial-intelligence"],"authors":[{"term_id":1946,"user_id":109,"is_guest":0,"slug":"ben-dickson","display_name":"Ben Dickson","avatar_url":"https:\/\/www.experfy.com\/blog\/wp-content\/uploads\/2020\/04\/medium_8aaf6bea-c4c1-455f-8156-8007d70910f8-150x150.jpg","user_url":"https:\/\/bdtechtalks.com\/","last_name":"Dickson","first_name":"Ben","job_title":"","description":"Ben Dickson is an experienced software engineer and tech blogger. 
He contributes regularly to major tech websites such as the Next Web, the Daily Dot, PCMag.com, Cointelegraph, VentureBeat, International Business Times UK, and The Huffington Post."}],"_links":{"self":[{"href":"https:\/\/www.experfy.com\/blog\/wp-json\/wp\/v2\/posts\/1688","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.experfy.com\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.experfy.com\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.experfy.com\/blog\/wp-json\/wp\/v2\/users\/109"}],"replies":[{"embeddable":true,"href":"https:\/\/www.experfy.com\/blog\/wp-json\/wp\/v2\/comments?post=1688"}],"version-history":[{"count":3,"href":"https:\/\/www.experfy.com\/blog\/wp-json\/wp\/v2\/posts\/1688\/revisions"}],"predecessor-version":[{"id":29011,"href":"https:\/\/www.experfy.com\/blog\/wp-json\/wp\/v2\/posts\/1688\/revisions\/29011"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.experfy.com\/blog\/wp-json\/wp\/v2\/media\/2688"}],"wp:attachment":[{"href":"https:\/\/www.experfy.com\/blog\/wp-json\/wp\/v2\/media?parent=1688"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.experfy.com\/blog\/wp-json\/wp\/v2\/categories?post=1688"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.experfy.com\/blog\/wp-json\/wp\/v2\/tags?post=1688"},{"taxonomy":"author","embeddable":true,"href":"https:\/\/www.experfy.com\/blog\/wp-json\/wp\/v2\/ppma_author?post=1688"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}