{"id":1962,"date":"2019-09-19T04:55:23","date_gmt":"2019-09-19T04:55:23","guid":{"rendered":"http:\/\/kusuaks7\/?p=1567"},"modified":"2024-04-03T11:34:44","modified_gmt":"2024-04-03T11:34:44","slug":"how-randomness-can-protect-neural-networks-against-adversarial-attacks","status":"publish","type":"post","link":"https:\/\/www.experfy.com\/blog\/ai-ml\/how-randomness-can-protect-neural-networks-against-adversarial-attacks\/","title":{"rendered":"How randomness can protect neural networks against adversarial attacks"},"content":{"rendered":"\t\t<div data-elementor-type=\"wp-post\" data-elementor-id=\"1962\" class=\"elementor elementor-1962\" data-elementor-post-type=\"post\">\n\t\t\t\t\t\t<section class=\"has_eae_slider elementor-section elementor-top-section elementor-element elementor-element-6ded5031 elementor-section-boxed elementor-section-height-default elementor-section-height-default\" data-id=\"6ded5031\" data-element_type=\"section\" data-e-type=\"section\">\n\t\t\t\t\t\t<div class=\"elementor-container elementor-column-gap-default\">\n\t\t\t\t\t<div class=\"has_eae_slider elementor-column elementor-col-100 elementor-top-column elementor-element elementor-element-12ba8189\" data-id=\"12ba8189\" data-element_type=\"column\" data-e-type=\"column\">\n\t\t\t<div class=\"elementor-widget-wrap elementor-element-populated\">\n\t\t\t\t\t\t<div class=\"elementor-element elementor-element-4b4a7980 elementor-widget elementor-widget-text-editor\" data-id=\"4b4a7980\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\tAs\u00a0<a href=\"https:\/\/bdtechtalks.com\/2019\/02\/15\/what-is-deep-learning-neural-networks\/\" rel=\"noopener\">deep learning and neural networks<\/a>\u00a0become more and more prominent in important tasks, there\u2019s increasing concern over how they might be compromised for evil purposes. It\u2019s one thing for an attacker to hack your Netflix content recommendation algorithm, but a totally different problem when it\u2019s your self-driving car that\u2019s being fooled to bypass a stop sign or miss to detect a pedestrian.\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-fd00d00 elementor-widget elementor-widget-text-editor\" data-id=\"fd00d00\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\tAs we continue to learn about the unique\u00a0<a href=\"https:\/\/bdtechtalks.com\/2018\/12\/27\/deep-learning-adversarial-attacks-ai-malware\/\" rel=\"noopener\">security threats of deep learning algorithms<\/a>\u00a0entail, one of the areas of focus are adversarial attacks, perturbation in input data that cause artificial intelligence algorithms to behave in unexpected (and perhaps dangerous) ways.\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-86bca99 elementor-widget elementor-widget-text-editor\" data-id=\"86bca99\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\tIn the past few years, there have been several efforts to raise awareness on the threat of adversarial attacks against deep learning algorithms. In parallel, researchers are working on ways to build robust AI models that are more resilient against adversarial examples. 
Protecting deep learning algorithms against adversarial perturbations will be key to deploying AI in more sensitive settings.

In a paper presented at the 2019 International Joint Conference on Artificial Intelligence (IJCAI 2019), researchers from IBM, Northeastern University and Boston University introduced a method that protects neural networks against adversarial perturbations by introducing randomness into the way the AI models work.

Titled "[Protecting Neural Networks with Hierarchical Random Switching](https://www.ijcai.org/proceedings/2019/0833.pdf)," the technique is not the first effort to address the threat of adversarial attacks. But it contains some novel concepts and methods that reduce the costs and complexities of developing robust AI models.

## A primer on AI adversarial attacks

Deep learning algorithms can perform remarkable feats thanks to [artificial neural networks](https://bdtechtalks.com/2019/08/05/what-is-artificial-neural-network-ann/), an AI software structure inspired by the human brain. Neural networks develop their behavior by reviewing numerous samples and discovering statistical regularities among them. For instance, when you train a neural network with labeled examples of stop signs, it compares the images and, based on their similarities, develops a highly complex mathematical function with thousands of parameters that can extract familiar patterns from other images. The AI will then be able to detect stop signs in new photos and videos.
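As a concrete (if toy) illustration of that process, here is a minimal training sketch, assuming PyTorch; the random tensors stand in for a real dataset of labeled road-sign images, and the layer sizes are made up for illustration:

```python
# Minimal supervised-training sketch: a tiny classifier learns to map
# 32x32 RGB crops to two classes (stop sign / not a stop sign).
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Flatten(),
    nn.Linear(3 * 32 * 32, 128),
    nn.ReLU(),
    nn.Linear(128, 2),  # two classes: stop sign / other
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Stand-in data; a real pipeline would load labeled images instead.
images = torch.randn(64, 3, 32, 32)
labels = torch.randint(0, 2, (64,))

for epoch in range(10):
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()   # statistical regularities get absorbed into
    optimizer.step()  # the model's thousands of parameters
```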
The problem with neural networks, however, is that the way they develop their pattern-recognition behavior is [very complex and opaque](https://bdtechtalks.com/2018/09/25/explainable-interpretable-ai/). And despite their name, neural networks work in ways that are [very different from the human brain](https://bdtechtalks.com/2018/08/21/artificial-intelligence-vs-human-mind-brain/). That's why they can be fooled in ways that go unnoticed by humans.

Adversarial examples are input data manipulated in ways that force a neural network to change its behavior while maintaining the same meaning to a human observer. For instance, in the case of an image-classifier neural network, adding a carefully crafted layer of noise to an image will cause the AI to assign a different classification to it.

![Adversarial example: a panda misclassified as a gibbon after adding imperceptible noise](https://i1.wp.com/bdtechtalks.com/wp-content/uploads/2019/02/ai-adversarial-example-panda-gibbon.png?fit=696%2C271&ssl=1)

*Adversarial examples involve adding carefully crafted layers of noise to images to force neural networks to change their classification (source: [arXiv](https://arxiv.org/pdf/1412.6572.pdf))*

While most of the work done in the field has focused on image classification AI, adversarial examples also apply to neural networks that process other kinds of information. For instance, a well-crafted [audio adversarial example](https://bdtechtalks.com/2019/04/29/ai-audio-adversarial-examples/) can hide a command in a song that will activate an AI-powered voice assistant without being heard by humans. Likewise, [text adversarial attacks](https://bdtechtalks.com/2019/04/02/ai-nlp-paraphrasing-adversarial-attacks/) can bypass AI-powered spam filters and sentiment analysis systems while remaining inconspicuous to human readers.
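The panda/gibbon figure above comes from the paper that introduced the fast gradient sign method (FGSM). Here is a rough, hedged sketch of how such a perturbation can be computed; it reuses the hypothetical `model` from the training sketch earlier, and `epsilon` is an illustrative value:

```python
# FGSM sketch: nudge every pixel slightly in the direction that
# increases the classification loss. `image` is a (1, 3, 32, 32)
# tensor and `label` a (1,) long tensor.
import torch
import torch.nn as nn

def fgsm_example(model, image, label, epsilon=0.01):
    image = image.detach().clone().requires_grad_(True)
    loss = nn.CrossEntropyLoss()(model(image), label)
    loss.backward()
    # epsilon keeps the change small enough to look unchanged to a human,
    # yet it can be enough to flip the model's prediction.
    return (image + epsilon * image.grad.sign()).detach()
```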
## Adversarial training

The traditional method of making AI models robust against adversarial examples is "adversarial training." When performing adversarial training, AI engineers use tools to probe their models for adversarial vulnerabilities. They then use all the adversarial examples they have discovered to retrain their model and make it more robust.
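Continuing the sketches above, one round of this probe-and-retrain loop might look roughly like the following; a production pipeline would use stronger attack methods than plain FGSM and far more data:

```python
# Adversarial-training sketch, reusing model, optimizer, loss_fn,
# images, labels, and fgsm_example from the earlier sketches.
for epoch in range(10):
    # Probe the current model for adversarial examples.
    adv_images = fgsm_example(model, images, labels, epsilon=0.01)
    optimizer.zero_grad()  # discard gradients left over from the probe
    # Penalize mistakes on both the clean and the adversarial batch.
    loss = loss_fn(model(images), labels) + loss_fn(model(adv_images), labels)
    loss.backward()
    optimizer.step()
```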
There are several factors that make adversarial training unfavorable.

Adversarial training is a costly process. Its effectiveness depends on the complexity of your model and on how much time and resources you can allocate to investigating it. There are several tools that can [reduce the costs of discovering adversarial vulnerabilities](https://bdtechtalks.com/2019/02/20/mit-ibm-ai-robustness-adversarial-examples/), but you still need to do the training.

Also, there are several types of adversarial attacks, and protecting an AI model against each attack method requires a separate adversarial training process. And adversarial training is often incompatible with other training procedures, requiring engineers to modify their AI models.

## Fending off adversarial attacks through randomness

![Neural network illustration](https://i1.wp.com/bdtechtalks.com/wp-content/uploads/2019/05/neural-network-deep-learning.jpg?fit=696%2C409&ssl=1)

Another method of making AI models robust against adversarial examples is stochastic defense. The idea behind stochastic defense is to introduce randomness into the behavior of neural networks. Adding randomness to a neural network is a strong defense because it raises the cost for an attacker to stage a successful attack against the AI model.

"Stochastic defense is a promising branch among all techniques that have been proposed," says Pin-Yu Chen, researcher at the MIT-IBM Watson AI Lab and co-author of the hierarchical random switching (HRS) paper. "In a deterministic AI model, attackers search until they find an adversarial example that fools that particular neural network. But if you have a random model, the attacker will need to find a better attack that can work on all the random variations of the model."

Another benefit of stochastic defense is that, unlike adversarial training, it is independent of the attack method. "For adversarial training you need to specify what type of attacks you're going to train on to improve your AI model's robustness," Chen says. "For the randomized approach, you don't need to do that. You just add a level of randomness to the AI model and it becomes inherently more robust rather than robust against a specific type of adversarial attack."
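As a minimal sketch of what a classic stochastic defense can look like (not the HRS method itself), consider a hypothetical activation layer that injects Gaussian noise at inference time, so each forward pass behaves like a different random variation of the model; the noise scale is an illustrative value:

```python
# Stochastic-defense sketch: a noise-injecting activation layer.
import torch
import torch.nn as nn

class NoisyReLU(nn.Module):
    def __init__(self, sigma=0.1):
        super().__init__()
        self.sigma = sigma

    def forward(self, x):
        # An attacker who optimizes against one forward pass faces a
        # different random variation of the network on the next pass.
        return torch.relu(x + self.sigma * torch.randn_like(x))

stochastic_model = nn.Sequential(
    nn.Flatten(),
    nn.Linear(3 * 32 * 32, 128),
    NoisyReLU(),
    nn.Linear(128, 2),
)
```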
However, the robustness provided by traditional stochastic defense methods doesn't come for free. "Because you add noise to the weights and activation functions and the inputs and outputs of the neural network, you lose some of your accuracy," Chen says.

The goal of the HRS method is to benefit from the advantages of random defense methods while minimizing the tradeoffs. "We want to make sure our defense methods are general enough in the sense that they're compatible with current machine learning training pipelines, so developers only need to add a few more lines of code to make their models more robust," Chen says, adding that the aim is also to achieve a better accuracy-robustness tradeoff. "If you allow a one percent drop in your accuracy, you would want the robustness to be as high as possible."

## Hierarchical random switching

To understand how the hierarchical random switching method works, consider a neural network with many layers. When applying the HRS technique, the network is first divided into several blocks, each containing multiple layers of the network. Next, each block is populated with several parallel channels.
*In the HRS defense method, the layers of the neural network are split into several blocks, and the layers of each block are replicated across several parallel channels (source: [IJCAI](https://www.ijcai.org/proceedings/2019/0833.pdf))*

When doing inference, HRS selects a random channel for each of the blocks in the neural network and connects them together. Each combination of channels across the blocks yields a unique AI model. HRS uses a special hierarchical training method to make sure each channel of the neural network has its own unique weights while maintaining the maximum possible accuracy of the AI model.

"Each model should work like the base model, but it must also have randomness," Chen says. "This is why we do hierarchical training. Because we train all the channels from bottom to top and enumerate each path, we make sure that no matter which channel is chosen, the selected path provides a good AI model."
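A minimal sketch of the switching mechanism might look like the following; the block sizes and channel counts are made up, and the paper's hierarchical, bottom-up, path-enumerating training procedure is omitted, so this only illustrates the random path assembly at inference:

```python
# HRS-style switching sketch: each block holds parallel channels, and
# every forward pass assembles a random path through the blocks.
import random
import torch
import torch.nn as nn

class SwitchingBlock(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.channels = nn.ModuleList(channels)

    def forward(self, x):
        # Pick one parallel channel at random for this forward pass.
        return random.choice(self.channels)(x)

def make_channel(in_dim, out_dim):
    return nn.Sequential(nn.Linear(in_dim, out_dim), nn.ReLU())

hrs_model = nn.Sequential(
    nn.Flatten(),
    SwitchingBlock([make_channel(3 * 32 * 32, 128) for _ in range(4)]),
    SwitchingBlock([make_channel(128, 64) for _ in range(4)]),
    nn.Linear(64, 2),
)

# Each call samples one of 4 x 4 = 16 distinct channel combinations,
# so an attack must transfer across all of them to succeed reliably.
logits = hrs_model(torch.randn(1, 3, 32, 32))
```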
Compared to classic stochastic defense methods, HRS provides an improved robustness-accuracy tradeoff. This means the accuracy penalty your AI model suffers in exchange for its protection against adversarial attacks is minimized. At the same time, HRS is even more robust against the types of adversarial attacks that can break classic stochastic methods.

Because HRS expands the number of channels a neural network contains, it comes at a memory cost: AI models strengthened with HRS become larger. In this regard, however, it is not much different from other adversarial defense methods.

"During adversarial training, engineers expand neural networks. For each layer, they add more neurons to memorize the mistakes and make the AI model more robust. The same applies to the HRS method," Chen says. "So basically, to make your model more robust, you need a larger capacity for your network."

But unlike many existing defense proposals, HRS is compatible with current training methods, so developers don't need to add layers or change their model's architecture.

HRS is just one of several efforts to make AI models more robust against the growing threat of adversarial attacks. As deep learning and neural networks take over more and more important functions in our daily lives, we need all the help we can get to make sure they're secure and robust.
*Ben Dickson is an experienced software engineer and tech blogger. He contributes regularly to major tech websites such as The Next Web, the Daily Dot, PCMag.com, Cointelegraph, VentureBeat, International Business Times UK, and The Huffington Post.*