{"id":1000,"date":"2018-11-23T03:57:51","date_gmt":"2018-11-23T03:57:51","guid":{"rendered":"http:\/\/kusuaks7\/?p=605"},"modified":"2023-09-13T11:31:09","modified_gmt":"2023-09-13T11:31:09","slug":"learning-ai-if-you-suck-at-math-part3-building-an-ai-dream-machine","status":"publish","type":"post","link":"https:\/\/www.experfy.com\/blog\/ai-ml\/learning-ai-if-you-suck-at-math-part3-building-an-ai-dream-machine\/","title":{"rendered":"Learning AI if You Suck at Math\u200a-\u200aPart 3\u200a-Building an AI Dream Machine"},"content":{"rendered":"<p>Welcome to the third installment of\u00a0Learning AI if You Suck at Math. If you missed the earlier articles, be sure to check out\u00a0<a href=\"https:\/\/www.experfy.com\/blog\/learning-ai-if-you-suck-at-math-part-1\">part 1<\/a>\u00a0and\u00a0<a href=\"https:\/\/www.experfy.com\/blog\/learning-ai-if-you-suck-at-math-part-two-practical-projects\">part 2<\/a>.<\/p>\n<p id=\"857d\">Today we\u2019re going to build our own\u00a0<strong>Deep Learning Dream Machine<\/strong>.<\/p>\n<ul>\n<li id=\"ff62\">We\u2019ll source the best parts and put them together into a number-smashing monster.<\/li>\n<li id=\"aa85\">We\u2019ll also walk through installing all the latest deep learning frameworks step by step on Ubuntu Linux 16.04.<\/li>\n<\/ul>\n<p id=\"f956\">This machine will slice through neural networks like a hot laser through butter. 
Other than forking over $129,000 for\u00a0<a href=\"http:\/\/www.nvidia.com\/object\/deep-learning-system.html\" target=\"_blank\" rel=\"noopener noreferrer\" data-href=\"http:\/\/www.nvidia.com\/object\/deep-learning-system.html\" data->Nvidia\u2019s DGX-1<\/a>, the AI supercomputer in a box, you simply can\u2019t get better performance than what I\u2019ll show you right here.<\/p>\n<ul>\n<li id=\"5766\">Lastly, if you\u2019re working with a tighter budget, don\u2019t despair; I\u2019ll also outline very budget-friendly alternatives.<\/li>\n<\/ul>\n<h3 id=\"8af3\"><strong>First, a TL;DR, Ultracheap Upgrade\u00a0Option<\/strong><\/h3>\n<p id=\"c218\">Before we dig into building a DL beast, I want to give you the easiest upgrade path.<\/p>\n<p id=\"1620\"><strong>If you don\u2019t want to build an entirely new machine, you still have one perfectly awesome option.<\/strong><\/p>\n<figure id=\"8711\" data-scroll=\"native\"><canvas width=\"75\" height=\"47\"><\/canvas><img decoding=\"async\" src=\"https:\/\/cdn-images-1.medium.com\/max\/600\/1*zlVw4lkpJSd236axzH6VCQ.jpeg\" data-src=\"https:\/\/cdn-images-1.medium.com\/max\/600\/1*zlVw4lkpJSd236axzH6VCQ.jpeg\" \/><\/figure>\n<p id=\"f450\"><strong>Simply upgrade your GPU (with either a\u00a0<\/strong><a href=\"http:\/\/amzn.to\/2kucHoB\" target=\"_blank\" rel=\"noopener noreferrer\" data-href=\"http:\/\/amzn.to\/2kucHoB\" data-><strong>Titan X<\/strong><\/a><strong>\u00a0or a\u00a0<\/strong><a href=\"http:\/\/amzn.to\/2kXYpJZ\" target=\"_blank\" rel=\"noopener noreferrer\" data-href=\"http:\/\/amzn.to\/2kXYpJZ\" data-><strong>GTX 1080<\/strong><\/a><strong>) and get\u00a0<\/strong><a href=\"https:\/\/my.vmware.com\/en\/web\/vmware\/info\/slug\/desktop_end_user_computing\/vmware_workstation_pro\/12_0\" target=\"_blank\" rel=\"noopener noreferrer\" class=\"broken_link\"><strong>VMware Workstation<\/strong><\/a><strong>\u00a0or use another virtualization software that supports\u00a0<\/strong><a 
href=\"https:\/\/pubs.vmware.com\/workstation-12\/index.jsp?topic=%2Fcom.vmware.ws.using.doc%2FGUID-F5186526-2382-4F4A-8009-3D07773A1404.html\" target=\"_blank\" rel=\"noopener noreferrer\" class=\"broken_link\"><strong>GPU acceleration<\/strong><\/a><strong>!\u00a0<\/strong>Or you could simply install Ubuntu on bare metal and, if you need Windows, run it in a VM, so you get maximum deep learning performance.<\/p>\n<p id=\"a1e1\">Install Ubuntu and the DL frameworks using the tutorial at the end of the article and bam! You just bought yourself a deep learning superstar on the cheap!<\/p>\n<p id=\"5068\">All right, let\u2019s get to it.<\/p>\n<p id=\"9fb1\">I\u2019ll mark dream machine parts and budget parts like so:<\/p>\n<ul>\n<li id=\"1cfd\"><strong>MINO<\/strong>\u00a0(Money is No Object) =\u00a0<strong>Dream Machine<\/strong><\/li>\n<li id=\"480a\"><strong>ADAD<\/strong>\u00a0(A Dollar and a Dream) =\u00a0<strong>Budget Alternative<\/strong><\/li>\n<\/ul>\n<h3 id=\"b949\"><strong>Dream Machine Parts Extravaganza<\/strong><\/h3>\n<h4 id=\"9136\"><strong>GPUs First<\/strong><\/h4>\n<p id=\"2fa9\">CPUs are no longer the center of the universe.\u00a0<a href=\"https:\/\/hackernoon.com\/tagged\/ai\" target=\"_blank\" rel=\"noopener noreferrer\" data-href=\"https:\/\/hackernoon.com\/tagged\/ai\" data->AI<\/a>\u00a0applications have flipped the script. 
If you\u2019ve ever built a custom rig for gaming, you probably pumped it up with the baddest Intel chips you could get your hands on.<\/p>\n<p id=\"a2fe\">But times change.<\/p>\n<p id=\"52b1\"><a href=\"http:\/\/www.forbes.com\/sites\/aarontilley\/2016\/11\/30\/nvidia-deep-learning-ai-intel\/\" target=\"_blank\" rel=\"noopener noreferrer\" class=\"broken_link\"><strong>Nvidia is the new Intel<\/strong><\/a><strong>.<\/strong><\/p>\n<p id=\"c9b8\">The most important component of any deep learning world destroyer is the GPU(s).<\/p>\n<p id=\"3e12\">While AMD has made headway in cryptocoin mining in the last few years, it has yet to make its mark on AI. That will change soon, as it races to capture a piece of this exploding field, but for now Nvidia is king. And don\u2019t sleep on Intel either. They purchased\u00a0<a href=\"http:\/\/www.yaabot.com\/26356\/intels-deep-learning-based-chips-launching-2017\/\" target=\"_blank\" rel=\"noopener noreferrer\" data-href=\"http:\/\/www.yaabot.com\/26356\/intels-deep-learning-based-chips-launching-2017\/\" data->Nervana Systems and plan to put out their own deep learning ASICs in 2017<\/a>.<\/p>\n<figure id=\"647a\" data-scroll=\"native\"><canvas width=\"75\" height=\"40\"><\/canvas><img decoding=\"async\" src=\"https:\/\/cdn-images-1.medium.com\/max\/600\/1*7KFTXyr-M7z16W7r6BdkhQ.jpeg\" data-src=\"https:\/\/cdn-images-1.medium.com\/max\/600\/1*7KFTXyr-M7z16W7r6BdkhQ.jpeg\" \/><figcaption>\u00a0<\/figcaption><\/figure>\n<p style=\"text-align: center;\">The king of DL\u00a0GPUs<\/p>\n<p id=\"3cc0\"><strong>Let\u2019s start with MINO. The ultimate GPU is the Titan X. It has no competition.<\/strong><\/p>\n<p id=\"c655\">It\u2019s packed with 3584 CUDA cores at 1531 MHz, 12GB of G5X and it boasts a memory speed of 10 Gbps.<\/p>\n<p id=\"91b0\">In DL, cores matter and so does more memory close to those cores.<\/p>\n<p id=\"6a31\">DL is really nothing but a lot of linear algebra. Think of it as an insanely large Excel sheet. 
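Under the hood that "insanely large Excel sheet" is just matrix multiplies. Here's a minimal sketch in plain NumPy (layer sizes are purely illustrative) of the single dense-layer forward pass a GPU parallelizes:

```python
import numpy as np

# One dense layer's forward pass: multiply the input batch by a weight
# matrix, add a bias, and clamp negatives to zero (ReLU).
# Sizes are illustrative: 64 samples, 784 features, 256 hidden units.
np.random.seed(0)
x = np.random.randn(64, 784)   # input batch
W = np.random.randn(784, 256)  # weight matrix
b = np.zeros(256)              # bias vector

h = np.maximum(x @ W + b, 0)   # ReLU(xW + b)
print(h.shape)                 # -> (64, 256)
```

A real network stacks dozens of these multiplies per forward pass, and training repeats them millions of times. That's exactly the kind of embarrassingly parallel arithmetic thousands of CUDA cores chew through.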
Crunching all those numbers would slaughter a standard 4 or 8 core Intel CPU.<\/p>\n<p id=\"2446\">Moving data in and out of memory is a massive bottleneck, so more memory on the card makes all the difference, which is why the Titan X is the king of the world.<\/p>\n<p id=\"99de\"><strong>You can\u00a0<\/strong><a href=\"https:\/\/www.nvidia.com\/en-us\/geforce\/products\/10series\/titan-x-pascal\/\" target=\"_blank\" rel=\"noopener noreferrer\" data-href=\"https:\/\/www.nvidia.com\/en-us\/geforce\/products\/10series\/titan-x-pascal\/\" data-><strong>get Titan X directly from Nvidia for $1,200 MSRP.<\/strong><\/a><strong>\u00a0Unfortunately, you\u2019re limited to two. But this is a Dream Machine and we\u2019re buying four. That\u2019s right, quad SLI!<\/strong><\/p>\n<p id=\"93aa\"><strong>For that\u00a0<\/strong><a href=\"http:\/\/amzn.to\/2kucHoB\" target=\"_blank\" rel=\"noopener noreferrer\" data-href=\"http:\/\/amzn.to\/2kucHoB\" data-><strong>you\u2019ll need to pay a slight premium from a third party seller<\/strong><\/a><strong>. Feel free to get two from Nvidia and two from Amazon. 
That will bring you to $5300, by far the bulk of the cost for this workstation.<\/strong><\/p>\n<p id=\"47cf\">Now if you\u2019re just planning to run Minecraft, it\u2019ll still look blocky but if\u00a0<a href=\"https:\/\/www.kaggle.com\/c\/data-science-bowl-2017\" target=\"_blank\" rel=\"noopener noreferrer\" data-href=\"https:\/\/www.kaggle.com\/c\/data-science-bowl-2017\" data->you want to train a model to beat cancer<\/a>, these are your cards.\u00a0\ud83d\ude42<\/p>\n<p id=\"9238\">Gaming hardware benchmark sites will tell you\u00a0<a href=\"https:\/\/us.hardware.info\/reviews\/6033\/13\/nvidia-geforce-gtx-titan-x-sli--3-way-sli--4-way-sli-review-insane-performance-conclusion\" target=\"_blank\" rel=\"noopener noreferrer\" data-href=\"https:\/\/us.hardware.info\/reviews\/6033\/13\/nvidia-geforce-gtx-titan-x-sli--3-way-sli--4-way-sli-review-insane-performance-conclusion\" data->that anything more than two cards is well past the point of diminishing returns<\/a>\u00a0<strong><em>but that\u2019s just for gaming!<\/em><\/strong> When it comes to AI, you\u2019ll want to hurl as many cards at it as you can. Of course, AI has its point of diminishing returns too but it\u2019s closer to dozens or hundreds of cards (depending on the algo), not four. So stack up, my friend.<\/p>\n<p id=\"9b4a\">Please note you will NOT need an SLI bridge, unless you\u2019re also planning to use this machine for gaming. 
That\u2019s strictly for graphics rendering and we\u2019re doing very little graphics here, other than plotting a few graphs in matplotlib.<\/p>\n<h3 id=\"c686\">Budget-Friendly Alternative GPUs<\/h3>\n<figure id=\"e435\" data-scroll=\"native\"><canvas width=\"75\" height=\"75\"><\/canvas><img decoding=\"async\" src=\"https:\/\/cdn-images-1.medium.com\/max\/600\/1*6PdR-FcBez4xGU6BQQhSGQ.jpeg\" data-src=\"https:\/\/cdn-images-1.medium.com\/max\/600\/1*6PdR-FcBez4xGU6BQQhSGQ.jpeg\" \/><\/figure>\n<p id=\"5100\"><strong>Your ADAD card is the GeForce GTX 1080 Founders Edition. The 1080 packs 2560 CUDA cores, a lot less than the Titan X, but it rings in at half the price, with an MSRP of $699.<\/strong><\/p>\n<p id=\"0eea\">It also boasts less RAM, at 8GB versus 12.<\/p>\n<p id=\"aeb9\"><a href=\"http:\/\/amzn.to\/2kXYpJZ\" target=\"_blank\" rel=\"noopener noreferrer\" data-href=\"http:\/\/amzn.to\/2kXYpJZ\" data->EVGA has always served me well so grab four of them for your machine<\/a>. At $2796 vs $5300, that\u2019s a lot of savings for nearly equivalent performance.<\/p>\n<p id=\"4388\">The second best choice for ADAD is the GeForce GTX 1070. It packs 1920 CUDA cores so it\u2019s still a great choice. It comes in at around $499 MSRP but\u00a0<a href=\"http:\/\/amzn.to\/2kukDpR\" target=\"_blank\" rel=\"noopener noreferrer\" data-href=\"http:\/\/amzn.to\/2kukDpR\" data-><strong>superclocked EVGA 1070s will run you only $389 bucks<\/strong><\/a>\u00a0so that brings the price to a more budget-friendly $1556. Very doable.<\/p>\n<p id=\"6114\">Of course if you don\u2019t have as much money to spend you can always get two or three cards. 
Even one will get you moving in the right direction.<\/p>\n<p id=\"da7d\">Let\u2019s do the math on best bang for the buck with two or three cards:<\/p>\n<ul>\n<li id=\"6aa3\">3 x Titan X = 10,752 CUDA cores, 36GB of GPU RAM = $3800<\/li>\n<li id=\"85b0\">2 x Titan X = 7,168 CUDA cores, 24GB of GPU RAM = $2400<\/li>\n<li id=\"5482\"><strong>3 x GTX 1080 = 7,680 CUDA cores, 24GB of GPU RAM = $2097<\/strong><\/li>\n<li id=\"8332\">2 x GTX 1080 = 5,120 CUDA cores, 16GB of GPU RAM = $1398<\/li>\n<li id=\"c4f1\">3 x GTX 1070 = 5,760 CUDA cores, 24GB of GPU RAM = $1167<\/li>\n<li id=\"620d\">2 x GTX 1070 = 3,840 CUDA cores, 16GB of GPU RAM = $778<\/li>\n<\/ul>\n<p id=\"3efb\"><strong>The sweet spot is 3 GTX 1080s. For half the price you\u2019re only down 3072 cores. Full disclosure: That\u2019s how I built my workstation.<\/strong><\/p>\n<h3 id=\"d263\"><strong>SSD and Spinning\u00a0Drive<\/strong><\/h3>\n<figure id=\"4803\" data-scroll=\"native\"><canvas width=\"75\" height=\"52\"><\/canvas><img decoding=\"async\" src=\"https:\/\/cdn-images-1.medium.com\/max\/600\/1*HIz-1Zbnbn9KHNjLkDPbog.jpeg\" data-src=\"https:\/\/cdn-images-1.medium.com\/max\/600\/1*HIz-1Zbnbn9KHNjLkDPbog.jpeg\" \/><\/figure>\n<p id=\"fd05\">You\u2019ll want an SSD, especially if you\u2019re building Convolutional Neural Nets and working with lots of image data.\u00a0<a href=\"http:\/\/amzn.to\/2kY5gDf\" target=\"_blank\" rel=\"noopener noreferrer\" data-href=\"http:\/\/amzn.to\/2kY5gDf\" data-><strong>The Samsung 850 EVO 1 TB<\/strong><\/a><strong>\u00a0is the best of the best right now.<\/strong> Even better, SSD prices have plummeted in the last year, so it won\u2019t break the bank. The 850 1 TB currently comes in at about $319 bucks.<\/p>\n<p id=\"4ae5\"><strong>The ADAD version of the\u00a0<\/strong><a href=\"http:\/\/amzn.to\/2kXKTpQ\" target=\"_blank\" rel=\"noopener noreferrer\" data-href=\"http:\/\/amzn.to\/2kXKTpQ\" data-><strong>850 is the 250GB version<\/strong><\/a><strong>. 
It\u2019s very easy on the wallet at $98.<\/strong><\/p>\n<p id=\"77ea\">You\u2019ll also want a spindle drive for storing downloads. Datasets can be massive in DL. A\u00a0<a href=\"http:\/\/amzn.to\/2jyrKhN\" target=\"_blank\" rel=\"noopener noreferrer\" data-href=\"http:\/\/amzn.to\/2jyrKhN\" data->4 TB Seagate Barracuda<\/a>\u00a0will do the trick.<\/p>\n<h3 id=\"6675\"><strong>Motherboard<\/strong><\/h3>\n<figure id=\"be7b\" data-scroll=\"native\"><canvas width=\"75\" height=\"57\"><\/canvas><img decoding=\"async\" src=\"https:\/\/cdn-images-1.medium.com\/max\/600\/1*OvOmPIvuiN5sh393vMSMsA.png\" data-src=\"https:\/\/cdn-images-1.medium.com\/max\/600\/1*OvOmPIvuiN5sh393vMSMsA.png\" \/><\/figure>\n<p id=\"cd13\">Because we want to stuff four GPUs into this box, your motherboard options narrow to a very small set of choices.\u00a0<strong>To support four cards at full bus speeds we want the\u00a0<\/strong><a href=\"http:\/\/amzn.to\/2kUJGjY\" target=\"_blank\" rel=\"noopener noreferrer\" data-href=\"http:\/\/amzn.to\/2kUJGjY\" data-><strong>MSI Extreme Gaming X99A SLI Plus<\/strong><\/a><strong>.<\/strong><\/p>\n<p id=\"84fe\">You can also go with the\u00a0<a href=\"http:\/\/amzn.to\/2kY2cHa\" target=\"_blank\" rel=\"noopener noreferrer\" data-href=\"http:\/\/amzn.to\/2kY2cHa\" data->ASUS X99 Deluxe II<\/a>.<\/p>\n<p id=\"7d88\">If you go with fewer than four cards, you have many more options. When it comes to motherboards, I favor stability. I learned this the hard way building cryptocoin mining rigs. If you run your GPUs constantly they\u2019ll burn your machine to the ground in no time.\u00a0<strong>Gigabyte makes an excellent line of very durable motherboards. 
The\u00a0<\/strong><a href=\"http:\/\/amzn.to\/2jyhWnT\" target=\"_blank\" rel=\"noopener noreferrer\" data-href=\"http:\/\/amzn.to\/2jyhWnT\" data-><strong>X99 Ultra Gaming is absolutely rock solid<\/strong><\/a><strong>\u00a0and comes in at $237.<\/strong><\/p>\n<h3 id=\"a46a\"><strong>Case<\/strong><\/h3>\n<figure id=\"e58b\" data-scroll=\"native\"><canvas width=\"62\" height=\"75\"><\/canvas><img decoding=\"async\" src=\"https:\/\/cdn-images-1.medium.com\/max\/600\/1*_suoqINRPa1K-woIo5ziFQ.jpeg\" data-src=\"https:\/\/cdn-images-1.medium.com\/max\/600\/1*_suoqINRPa1K-woIo5ziFQ.jpeg\" \/><\/figure>\n<p id=\"19b1\"><strong>The\u00a0<\/strong><a href=\"http:\/\/amzn.to\/2jFikMF\" target=\"_blank\" rel=\"noopener noreferrer\" data-href=\"http:\/\/amzn.to\/2jFikMF\" data-><strong>Cooler Master Cosmos II<\/strong><\/a><strong>\u00a0is the ultimate full tower case.<\/strong> Its sleek and stylish racecar design of brushed aluminum and steel makes for one beautiful machine.<\/p>\n<p id=\"ed0f\">If you want a mid-tower case, you can\u2019t go wrong with the\u00a0<a href=\"http:\/\/amzn.to\/2ku9ZQ3\" target=\"_blank\" rel=\"noopener noreferrer\" data-href=\"http:\/\/amzn.to\/2ku9ZQ3\" data->Cooler Master Maker 5T<\/a>.<\/p>\n<p id=\"b922\">I never favor getting a cheap-ass case for any machine. As soon as you have to open it to troubleshoot it, your mistake becomes glaringly clear. Tool-less cases are ideal. But there are plenty of decent budget cases out there so do your homework.<\/p>\n<h3 id=\"94b5\"><strong>CPU<\/strong><\/h3>\n<p id=\"6203\">Your deep learning machine doesn\u2019t need much CPU power. 
Most apps are single-threaded as they load the data into the GPUs where they do multicore work, so don\u2019t bother spending a lot of capital here.<\/p>\n<figure id=\"af5e\" data-scroll=\"native\"><canvas width=\"66\" height=\"75\"><\/canvas><img decoding=\"async\" src=\"https:\/\/cdn-images-1.medium.com\/max\/600\/1*P28-O6Rr4qLoy7p94zQ3kQ.jpeg\" data-src=\"https:\/\/cdn-images-1.medium.com\/max\/600\/1*P28-O6Rr4qLoy7p94zQ3kQ.jpeg\" \/><\/figure>\n<p id=\"0ef8\">That said, you might as well get the fastest clock speed for your processor, which is 4GHz on the i7-6700K.\u00a0<a href=\"http:\/\/amzn.to\/2kjhmaO\" target=\"_blank\" rel=\"noopener noreferrer\" data-href=\"http:\/\/amzn.to\/2kjhmaO\" data->You can snag it here with a fan<\/a>. Frankly, it\u2019s ridiculous overkill here but prices have dropped drastically and I was looking for single-threaded performance. This is the CPU to beat.<\/p>\n<p id=\"e052\">If you want to go quieter then\u00a0<a href=\"http:\/\/amzn.to\/2k12NKv\" target=\"_blank\" rel=\"noopener noreferrer\" data-href=\"http:\/\/amzn.to\/2k12NKv\" data->you can go with watercooling<\/a>, but you won\u2019t be running the CPU that hard. Most of the fan noise will come from the GPUs.<\/p>\n<p id=\"8284\">There\u2019s no great ADAD alternative here. The\u00a0<a href=\"http:\/\/amzn.to\/2kY31Qo\" target=\"_blank\" rel=\"noopener noreferrer\" data-href=\"http:\/\/amzn.to\/2kY31Qo\" data->i5 at 3.5GHz with a water cooler<\/a>\u00a0runs about the same cost as the 4GHz so why bother?<\/p>\n<h3 id=\"8518\"><strong>Power<\/strong><\/h3>\n<p id=\"2a7b\">The\u00a0<a href=\"http:\/\/amzn.to\/2kuhXsz\" target=\"_blank\" rel=\"noopener noreferrer\" data-href=\"http:\/\/amzn.to\/2kuhXsz\" data->EVGA Modular 1600W Supernova G2 power supply<\/a>\u00a0is your best bet for a quad SLI setup. It will run you about $305 bucks.<\/p>\n<p id=\"297c\">Titan Xs pull about 250 watts each, which brings you to 1000W easy. 
That doesn\u2019t leave much overhead for CPU, memory, and system power, so go with the biggest supply to leave some headroom.<\/p>\n<p id=\"5088\">If you\u2019re rocking fewer cards, go with the 1300W version, which drops the price to a more manageable $184.<\/p>\n<h3 id=\"7445\"><strong>Software Setup<\/strong><\/h3>\n<p id=\"fadf\">Now that we\u2019re done with the hardware, let\u2019s get to the software setup.<\/p>\n<p id=\"948d\">You have three options:<\/p>\n<ul>\n<li id=\"72c0\"><strong>Docker Container<\/strong><\/li>\n<li id=\"2cbd\"><strong>Virtual Machine<\/strong><\/li>\n<li id=\"52cc\"><strong>Bare Metal install<\/strong><\/li>\n<\/ul>\n<h4 id=\"4e92\"><strong>Docker<\/strong><\/h4>\n<p id=\"4a33\">If you want to go with the Docker option, you\u2019ll want to start with\u00a0<a href=\"https:\/\/github.com\/NVIDIA\/nvidia-docker\" target=\"_blank\" rel=\"noopener noreferrer\" data-href=\"https:\/\/github.com\/NVIDIA\/nvidia-docker\" data->the official Nvidia-Docker<\/a>\u00a0project as a foundation. 
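Assuming you have Docker plus the nvidia-docker wrapper from that repo installed, its usual smoke test is a quick way to confirm containers can actually see your GPUs:

```shell
# Run nvidia-smi inside Nvidia's CUDA base image.
# If your cards show up in the output, GPU passthrough to Docker works.
nvidia-docker run --rm nvidia/cuda nvidia-smi
```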
However, to really get all of the frameworks, libraries, and languages, you\u2019ll have to do a lot of installation on top of this image.<\/p>\n<p id=\"6b9f\"><strong>You can go with an all-in-one deep learning container, like\u00a0<\/strong><a href=\"https:\/\/github.com\/floydhub\/dl-docker\" target=\"_blank\" rel=\"noopener noreferrer\" data-href=\"https:\/\/github.com\/floydhub\/dl-docker\" data-><strong>this one on GitHub<\/strong><\/a><strong>.<\/strong><\/p>\n<p id=\"02f3\">I wanted to love the all-in-one Docker image, but it has a few issues, no surprise considering the complexity of the setup.<\/p>\n<p id=\"822d\">I\u00a0<a href=\"https:\/\/github.com\/floydhub\/dl-docker\/issues\/36\" target=\"_blank\" rel=\"noopener noreferrer\" data-href=\"https:\/\/github.com\/floydhub\/dl-docker\/issues\/36\" data->found the answer to one issue<\/a>\u00a0(libopenjpeg2 is now libopenjpeg5 on Ubuntu 16.04 LTS) but I got tired of\u00a0<a href=\"https:\/\/github.com\/floydhub\/dl-docker\/issues\/38\" target=\"_blank\" rel=\"noopener noreferrer\" data-href=\"https:\/\/github.com\/floydhub\/dl-docker\/issues\/38\" data->troubleshooting a second one<\/a>. I\u2019m still waiting on fixes. If you\u2019re the type of person who likes fixing Dockerfiles and submitting fixes on GitHub, I encourage you to support the all-in-one project.<\/p>\n<p id=\"49e6\">A second major challenge is that it\u2019s a very, very big image, so it won\u2019t fit on Docker Hub due to timeouts. 
That means you\u2019ll have to build it yourself and that can take several hours of compiling and pulling layers and debugging, which is about as much time as you need to do it bare metal.<\/p>\n<p id=\"3a80\">Lastly, it doesn\u2019t include everything I wanted, including Anaconda Python.<\/p>\n<p id=\"f78f\">In the end I decided to use\u00a0<a href=\"https:\/\/github.com\/floydhub\/dl-setup\" target=\"_blank\" rel=\"noopener noreferrer\" data-href=\"https:\/\/github.com\/floydhub\/dl-setup\" data->the all-in-one bare metal tutorial<\/a>\u00a0as a guide, while updating it and adding my own special sauce.<\/p>\n<h4 id=\"2f40\">Virtual Machine<\/h4>\n<p id=\"20b7\">As I noted in the TL;DR section at the beginning of the doc, you can absolutely upgrade a current gaming machine, add VMware Workstation Pro, which supports GPU passthrough, and have a nice way to get started on a shoestring. This is a strong budget-friendly strategy. It also has several advantages, in that you can easily back up the virtual machine, snapshot it, and roll it back. It doesn\u2019t start as fast as a Docker container, but VM tech is very mature at this point and that gives you a lot of tools and best practices.<\/p>\n<h4 id=\"d747\"><strong>Bare Metal<\/strong><\/h4>\n<p id=\"f32e\">This is the option I ended up going with on my machine. It\u2019s a little old school, but as a long-time sysadmin it made the most sense to me, as it gave me the ultimate level of control.<\/p>\n<p id=\"2e07\">A few things of note about the software for\u00a0<a href=\"https:\/\/hackernoon.com\/tagged\/deep-learning\" target=\"_blank\" rel=\"noopener noreferrer\" data-href=\"https:\/\/hackernoon.com\/tagged\/deep-learning\" data->deep learning<\/a>\u00a0before we get started.<\/p>\n<p id=\"8057\">You\u2019ll find that the vast majority of AI research is done in Python. That\u2019s because it\u2019s an easy language to learn and set up. 
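To see what that ease looks like in practice: with the scientific stack we install below, a trainable neural net is only a few lines. A toy sketch using scikit-learn's bundled 8x8 digit images (dataset choice and layer size are just for illustration):

```python
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Load the small digit-image dataset that ships with scikit-learn.
X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A one-hidden-layer neural net, trained in a single call.
clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
clf.fit(X_train, y_train)
print(clf.score(X_test, y_test))  # typically well above 0.9
```

That brevity, more than raw speed, is why the research community standardized on it.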
I\u2019m not sure that Python will end up as the primary language once AI moves into production but for now Python is the way to go. A number of the major frameworks run on top of it and its scientific libraries are second to none.<\/p>\n<p id=\"a5a7\">The R language gets a lot of love too, as well as Scala, so we will add those to the equation.<\/p>\n<p id=\"35f6\">Here is a list of the major packages we\u2019ll set up in this tutorial:<\/p>\n<h4 id=\"f2b5\"><strong>Languages<\/strong><\/h4>\n<ul>\n<li id=\"0598\"><strong>Python 2.x<\/strong><\/li>\n<li id=\"e709\"><strong>Anaconda<\/strong>\u00a0(and by extension Python 3.6) \u2014 Anaconda is a high-performance distribution of Python and includes over 100 of the most popular Python, R and Scala packages for data science.<\/li>\n<li id=\"74dc\"><a href=\"https:\/\/www.r-project.org\/about.html\" target=\"_blank\" rel=\"noopener noreferrer\" data-href=\"https:\/\/www.r-project.org\/about.html\" data-><strong>R<\/strong><\/a> \u2014 A language and environment for statistical computing and graphics.<\/li>\n<li id=\"fad8\"><a href=\"https:\/\/www.scala-lang.org\/\" target=\"_blank\" rel=\"noopener noreferrer\" data-href=\"https:\/\/www.scala-lang.org\/\" data-><strong>Scala<\/strong><\/a> \u2014 Scala is short for \u201cScalable Language.\u201d It\u2019s similar to Java but super high performance and modular.<\/li>\n<\/ul>\n<h4 id=\"abbc\"><strong>Drivers and\u00a0APIs<\/strong><\/h4>\n<ul>\n<li id=\"fb28\"><a href=\"http:\/\/www.nvidia.com\/Download\/index.aspx?lang=en-us\" target=\"_blank\" rel=\"noopener noreferrer\" data-href=\"http:\/\/www.nvidia.com\/Download\/index.aspx?lang=en-us\" data-><strong>Nvidia drivers<\/strong><\/a><\/li>\n<li id=\"b14d\"><a href=\"https:\/\/developer.nvidia.com\/cuda-downloads\" target=\"_blank\" rel=\"noopener noreferrer\" data-href=\"https:\/\/developer.nvidia.com\/cuda-downloads\" data-><strong>CUDA<\/strong><\/a> \u2014 A proprietary parallel computing platform and 
application programming interface (API) model created by Nvidia.<\/li>\n<li id=\"1adb\"><a href=\"https:\/\/developer.nvidia.com\/cudnn\" target=\"_blank\" rel=\"noopener noreferrer\" data-href=\"https:\/\/developer.nvidia.com\/cudnn\" data-><strong>cuDNN<\/strong><\/a> \u2014 Deep Neural Network accelerated library of primitives for Nvidia GPUs.<\/li>\n<\/ul>\n<h4 id=\"a5f4\"><strong>Helper apps<\/strong><\/h4>\n<ul>\n<li id=\"354f\"><a href=\"http:\/\/jupyter.org\/\" target=\"_blank\" rel=\"noopener noreferrer\" data-href=\"http:\/\/jupyter.org\/\" data-><strong>Jupyter<\/strong><\/a> \u2014 This is an awesome web app that lets you share documentation and live code in a single file.<\/li>\n<\/ul>\n<h4 id=\"1c95\"><strong>Frameworks\/Libraries<\/strong><\/h4>\n<ul>\n<li id=\"3d9c\"><a href=\"https:\/\/www.tensorflow.org\/\" target=\"_blank\" rel=\"noopener noreferrer\" data-href=\"https:\/\/www.tensorflow.org\/\" data-><strong>TensorFlow<\/strong><\/a> \u2014 Google\u2019s open-source DL framework that powers things like Google Translate.<\/li>\n<li id=\"1dbc\"><a href=\"http:\/\/deeplearning.net\/software\/theano\/\" target=\"_blank\" rel=\"noopener noreferrer\" class=\"broken_link\"><strong>Theano<\/strong><\/a> \u2014 A robust and popular machine learning framework.<\/li>\n<li id=\"a612\"><a href=\"http:\/\/caffe.berkeleyvision.org\/\" target=\"_blank\" rel=\"noopener noreferrer\" data-href=\"http:\/\/caffe.berkeleyvision.org\/\" data-><strong>Caffe<\/strong><\/a> \u2014 A deep learning framework that comes out of Berkeley.<\/li>\n<li id=\"6b7c\"><a href=\"http:\/\/torch.ch\/\" target=\"_blank\" rel=\"noopener noreferrer\" data-href=\"http:\/\/torch.ch\/\" data-><strong>Torch<\/strong><\/a> \u2014 A scientific computing framework with wide support for machine learning algorithms that puts GPUs first.<\/li>\n<li id=\"e81e\"><a href=\"http:\/\/mxnet.io\/\" target=\"_blank\" rel=\"noopener noreferrer\" data-href=\"http:\/\/mxnet.io\/\" 
data-><strong>MXNET<\/strong><\/a> \u2014 Highly scalable DL system backed by Amazon and several universities.<\/li>\n<\/ul>\n<h4 id=\"61c4\"><strong>High Level Abstraction Libraries<\/strong><\/h4>\n<ul>\n<li id=\"f575\"><a href=\"https:\/\/keras.io\/\" target=\"_blank\" rel=\"noopener noreferrer\" data-href=\"https:\/\/keras.io\/\" data-><strong>Keras<\/strong><\/a> \u2014 A high-level neural networks library, written in Python, that runs on top of either TensorFlow or Theano.<\/li>\n<li id=\"3f8c\"><a href=\"https:\/\/github.com\/Lasagne\/Lasagne\" target=\"_blank\" rel=\"noopener noreferrer\" data-href=\"https:\/\/github.com\/Lasagne\/Lasagne\" data-><strong>Lasagne<\/strong><\/a> \u2014 A lightweight library to build and train neural networks.<\/li>\n<\/ul>\n<h4 id=\"ae6f\"><strong>Python Libraries<\/strong><\/h4>\n<p id=\"f94f\">There are a whole host of libraries that pretty much any scientific computing system will need to run effectively. So let\u2019s install the most common ones off the bat.<\/p>\n<ul>\n<li id=\"6164\"><strong>Pip<\/strong>\u00a0= an installer and packaging system for Python<\/li>\n<li id=\"5ba7\"><strong>Pandas<\/strong>\u00a0= high-performance data analysis<\/li>\n<li id=\"74ea\"><strong>Scikit-learn<\/strong>\u00a0= a popular and powerful machine learning library<\/li>\n<li id=\"3f04\"><strong>NumPy<\/strong>\u00a0= numerical Python<\/li>\n<li id=\"6ead\"><strong>Matplotlib<\/strong>\u00a0= visualization library<\/li>\n<li id=\"b8e6\"><strong>Scipy<\/strong>\u00a0= math and scientific computing<\/li>\n<li id=\"b3ab\"><strong>IPython<\/strong>\u00a0= interactive Python<\/li>\n<li id=\"606f\"><strong>Scrapy<\/strong>\u00a0= web crawling framework<\/li>\n<li id=\"8588\"><strong>NLTK<\/strong>\u00a0= natural language toolkit<\/li>\n<li id=\"77e9\"><strong>Pattern<\/strong>\u00a0= a web mining library<\/li>\n<li id=\"d09b\"><strong>Seaborn<\/strong>\u00a0= statistical visualization<\/li>\n<li id=\"20fa\"><strong>OpenCV<\/strong>\u00a0= a 
computer vision library<\/li>\n<li id=\"54fc\"><strong>Rpy2\u00a0<\/strong>= an R interface<\/li>\n<li id=\"6e31\"><strong>Py-graphviz<\/strong>\u00a0= graph visualization (a Graphviz interface)<\/li>\n<li id=\"de33\"><strong>OpenBLAS<\/strong>\u00a0= linear algebra<\/li>\n<\/ul>\n<h3 id=\"adaa\"><strong>Linux Workstation Setup<\/strong><\/h3>\n<p id=\"9677\">For cutting-edge work, you\u2019ll want to\u00a0<a href=\"https:\/\/www.ubuntu.com\/download\/server\" target=\"_blank\" rel=\"noopener noreferrer\" data-href=\"https:\/\/www.ubuntu.com\/download\/server\" data->get the latest version of Ubuntu LTS<\/a>, which is 16.04 at the time of writing. I\u2019m looking forward to the days when more of the tutorials cover Red Hat and Red Hat derivatives like CentOS and Scientific Linux but as of now Ubuntu is where it\u2019s at for deep learning. I may follow up with an RH-centric build as well.<\/p>\n<p id=\"51d5\">Get Ubuntu\u00a0<a href=\"https:\/\/rufus.akeo.ie\/\" target=\"_blank\" rel=\"noopener noreferrer\" data-href=\"https:\/\/rufus.akeo.ie\/\" data->burned to a USB stick via Rufus<\/a>.<\/p>\n<p id=\"4457\">Get it installed in UEFI mode.<\/p>\n<h3 id=\"ca89\"><strong>First Boot<\/strong><\/h3>\n<p id=\"94a8\">Your first boot will go to a black screen. That\u2019s because the open source drivers are not up to date with the latest and greatest chipsets. 
To fix that you\u2019ll need to do the following:<\/p>\n<p id=\"c9fc\">As the machine boots, get to a TTY:<\/p>\n<pre id=\"0654\">Ctrl + Alt + F1<\/pre>\n<p id=\"fc2a\">Get the latest Nvidia drivers and reboot:<\/p>\n<ul>\n<li id=\"0a27\">Log into your root account in the TTY.<\/li>\n<li id=\"b21b\">Run\u00a0<code>sudo apt-get purge nvidia-*<\/code><\/li>\n<li id=\"4a73\">Run\u00a0<code>sudo add-apt-repository ppa:graphics-drivers\/ppa<\/code>\u00a0and then\u00a0<code>sudo apt-get update<\/code><\/li>\n<li id=\"53ec\">Run\u00a0<code>sudo apt-get install nvidia-375<\/code><\/li>\n<li id=\"1219\">Reboot and your graphics issue should be fixed.<\/li>\n<\/ul>\n<h3 id=\"64f2\"><strong>Update the\u00a0machine<\/strong><\/h3>\n<p id=\"573b\">Open a terminal and type the following:<\/p>\n<pre id=\"8df7\">sudo apt-get update -y\nsudo apt-get upgrade -y\nsudo apt-get install -y build-essential cmake g++ gfortran git pkg-config python-dev software-properties-common wget\nsudo apt-get autoremove\nsudo rm -rf \/var\/lib\/apt\/lists\/*<\/pre>\n<h3 
id=\"9629\"><strong>CUDA<\/strong><\/h3>\n<p id=\"ea88\">Download CUDA 8 from\u00a0<a href=\"https:\/\/developer.nvidia.com\/cuda-toolkit\" target=\"_blank\" rel=\"noopener noreferrer\" data-href=\"https:\/\/developer.nvidia.com\/cuda-toolkit\" data->Nvidia<\/a>. Go to the downloads directory and install CUDA:<\/p>\n<div id=\"2af2\"><span style=\"font-family: courier new,courier,monospace;\">sudo<\/span> <span style=\"font-family: courier new,courier,monospace;\">dpkg<\/span><span style=\"font-family: courier new,courier,monospace;\"> -i <code>cuda-repo-ubuntu1604-8-0-local.deb<\/code><\/span><\/div>\n<div><span style=\"font-family: courier new,courier,monospace;\">sudo apt-get update -y<\/span><\/div>\n<div><span style=\"font-family: courier new,courier,monospace;\">sudo apt-get install -y <\/span><span style=\"font-family: courier new,courier,monospace;\">cuda<\/span><\/div>\n<div><\/div>\n<p id=\"aab6\">Add CUDA to the environment variables (note the straight quotes):<\/p>\n<div id=\"c4f0\"><span style=\"font-family: courier new,courier,monospace;\">echo 'export PATH=\/usr\/local\/<\/span><span style=\"font-family: courier new,courier,monospace;\">cuda<\/span><span style=\"font-family: courier new,courier,monospace;\">\/bin:$PATH' &gt;&gt; ~\/.bashrc<\/span><\/div>\n<div><span style=\"font-family: courier new,courier,monospace;\">echo 'export LD_LIBRARY_PATH=\/usr\/local\/<\/span><span style=\"font-family: courier new,courier,monospace;\">cuda<\/span><span style=\"font-family: courier new,courier,monospace;\">\/lib64:$LD_LIBRARY_PATH' &gt;&gt; ~\/.bashrc<\/span><\/div>\n<div><span style=\"font-family: courier new,courier,monospace;\">source ~\/.bashrc<\/span><\/div>\n<div><\/div>\n<p id=\"c767\">Check to make sure the correct version of CUDA is installed:<\/p>\n<pre id=\"860b\">nvcc -V<\/pre>\n<p id=\"268a\">Restart your computer:<\/p>\n<pre id=\"e3b9\">sudo shutdown -r now<\/pre>\n<p id=\"43a4\"><strong>Check your CUDA Installation<\/strong><\/p>\n<p id=\"20ce\">First 
install the CUDA samples:<\/p>\n<div id=\"4a85\"><span style=\"font-family: courier new,courier,monospace;\">\/<\/span><span style=\"font-family: courier new,courier,monospace;\">usr<\/span><span style=\"font-family: courier new,courier,monospace;\">\/local\/<\/span><span style=\"font-family: courier new,courier,monospace;\">cuda<\/span><span style=\"font-family: courier new,courier,monospace;\">\/bin\/<\/span><span style=\"font-family: courier new,courier,monospace;\">cuda<\/span><span style=\"font-family: courier new,courier,monospace;\">-install-samples-*.sh ~\/<\/span><span style=\"font-family: courier new,courier,monospace;\">cuda<\/span><span style=\"font-family: courier new,courier,monospace;\">-samples<\/span><\/div>\n<div><span style=\"font-family: courier new,courier,monospace;\">cd ~\/<\/span><span style=\"font-family: courier new,courier,monospace;\">cuda<\/span><span style=\"font-family: courier new,courier,monospace;\">-samples\/NVIDIA*Samples<\/span><\/div>\n<div><span style=\"font-family: courier new,courier,monospace;\">make -j $(($(<\/span><span style=\"font-family: courier new,courier,monospace;\">nproc<\/span><span style=\"font-family: courier new,courier,monospace;\">) + 1))<\/span><\/div>\n<div><\/div>\n<p id=\"c24d\">Note that <code>-j $(($(nproc) + 1))<\/code> sets the number of parallel build jobs to your CPU core count plus one, so the compile uses every core and finishes a lot faster.<\/p>\n<p id=\"6251\">Run deviceQuery and ensure that it detects your graphics card and that the tests pass:<\/p>\n<pre id=\"aa99\">bin\/x86_64\/linux\/release\/deviceQuery<\/pre>\n<h3 id=\"cbc6\">cuDNN<\/h3>\n<p id=\"c8ec\">cuDNN is a GPU-accelerated library for DNNs. 
Unfortunately, you can\u2019t just grab it from a repo.\u00a0<strong>You\u2019ll need to\u00a0<\/strong><a href=\"https:\/\/developer.nvidia.com\/cudnn\" target=\"_blank\" rel=\"noopener noreferrer\" data-href=\"https:\/\/developer.nvidia.com\/cudnn\" data-><strong>register with Nvidia to get access to it, which you can do right here<\/strong><\/a><strong>.\u00a0<\/strong>It can take a few hours or a few days to get approved for access. Grab version 4 and version 5. I installed 5 in this tutorial.<\/p>\n<p id=\"6646\">You will want to wait until you get this installed before moving on, as other frameworks depend on it and may fail to install.<\/p>\n<p id=\"b61f\">Extract and copy the files:<\/p>\n<div id=\"ecd5\"><span style=\"font-family: courier new,courier,monospace;\">cd ~\/Downloads\/<\/span><\/div>\n<div><span style=\"font-family: courier new,courier,monospace;\">tar xvf cudnn*.tgz<\/span><\/div>\n<div><span style=\"font-family: courier new,courier,monospace;\">cd cuda<\/span><\/div>\n<div><span style=\"font-family: courier new,courier,monospace;\">sudo cp *\/*.h \/usr\/local\/cuda\/include\/<\/span><\/div>\n<div><span style=\"font-family: courier new,courier,monospace;\">sudo cp *\/libcudnn* \/usr\/local\/cuda\/lib64\/<\/span><\/div>\n<div><span style=\"font-family: courier new,courier,monospace;\">sudo chmod a+r \/usr\/local\/cuda\/lib64\/libcudnn*<\/span><\/div>\n<div><\/div>\n<p id=\"c39e\">Do a check by typing:<\/p>\n<p id=\"72ca\"><code>nvidia-smi<\/code><\/p>\n<p id=\"851a\">That should output some GPU stats.<\/p>\n<h3 id=\"be6a\"><strong>Python<\/strong><\/h3>\n<div id=\"1a6d\"><span style=\"font-family: courier new,courier,monospace;\">sudo apt-get install -y python-pip python-dev<\/span><\/div>\n<div><span style=\"font-family: courier new,courier,monospace;\">sudo apt-get update &amp;&amp; sudo apt-get install -y python-<\/span><span style=\"font-family: courier new,courier,monospace;\">numpy<\/span><span style=\"font-family: courier 
new,courier,monospace;\"> python-<\/span><span style=\"font-family: courier new,courier,monospace;\">scipy<\/span><span style=\"font-family: courier new,courier,monospace;\"> python-nose python-h5py python-<\/span><span style=\"font-family: courier new,courier,monospace;\">skimage<\/span><span style=\"font-family: courier new,courier,monospace;\"> python-<\/span><span style=\"font-family: courier new,courier,monospace;\">matplotlib<\/span><span style=\"font-family: courier new,courier,monospace;\"> python-pandas python-<\/span><span style=\"font-family: courier new,courier,monospace;\">sklearn<\/span><span style=\"font-family: courier new,courier,monospace;\"> python-<\/span><span style=\"font-family: courier new,courier,monospace;\">sympy<\/span><span style=\"font-family: courier new,courier,monospace;\"> libfreetype6-dev libpng12-dev libopenjpeg5<\/span><\/div>\n<div><span style=\"font-family: courier new,courier,monospace;\">sudo apt-get clean &amp;&amp; <\/span><span style=\"font-family: courier new,courier,monospace;\">sudo<\/span><span style=\"font-family: courier new,courier,monospace;\"> apt-get <\/span><span style=\"font-family: courier new,courier,monospace;\">autoremove<\/span><\/div>\n<div id=\"0da2\"><span style=\"font-family: courier new,courier,monospace;\">sudo rm -rf \/var\/lib\/apt\/lists\/*<\/span><\/div>\n<div><\/div>\n<p id=\"e729\">Now install the rest of the libraries with pip:<\/p>\n<pre id=\"b9af\">pip install seaborn rpy2 opencv-python pygraphviz pattern nltk scrapy<\/pre>\n<h3 id=\"ee35\"><a href=\"https:\/\/hackernoon.com\/tagged\/tensorflow\" target=\"_blank\" rel=\"noopener noreferrer\" data-href=\"https:\/\/hackernoon.com\/tagged\/tensorflow\" data-><strong>Tensorflow<\/strong><\/a><\/h3>\n<pre id=\"617f\">pip install tensorflow-gpu<\/pre>\n<p id=\"54a0\">That\u2019s it. 
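Before moving on, it helps to confirm that Python actually picked up the GPU build. A minimal sketch, assuming the TensorFlow 1.x device_lib API current when this was written; the try/except only exists so the snippet degrades gracefully on a machine without TensorFlow:

```python
# List the devices TensorFlow can see (TF 1.x-era device_lib API assumed).
# On a working GPU install you'd expect '/cpu:0' plus one '/gpu:N' per card.
try:
    from tensorflow.python.client import device_lib
    names = [d.name for d in device_lib.list_local_devices()]
except ImportError:
    names = []  # TensorFlow isn't installed in this environment
print(names)
```

If no GPU entry shows up, TensorFlow either fell back to a CPU-only wheel or can't find CUDA/cuDNN, so revisit the steps above.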
Awesome!<\/p>\n<p id=\"619f\"><strong>Test Tensorflow<\/strong><\/p>\n<p id=\"d75c\"><code>$ python<br \/>\n...<br \/>\n&gt;&gt;&gt; import tensorflow as tf<br \/>\n&gt;&gt;&gt; hello = tf.constant('Hello, TensorFlow!')<br \/>\n&gt;&gt;&gt; sess = tf.Session()<br \/>\n&gt;&gt;&gt; print(sess.run(hello))<br \/>\nHello, TensorFlow!<br \/>\n&gt;&gt;&gt; a = tf.constant(10)<br \/>\n&gt;&gt;&gt; b = tf.constant(32)<br \/>\n&gt;&gt;&gt; print(sess.run(a + b))<br \/>\n42<br \/>\n&gt;&gt;&gt;<\/code><\/p>\n<h3 id=\"564b\"><strong>OpenBLAS<\/strong><\/h3>\n<pre id=\"3f2f\">sudo apt-get install -y <code>libblas-test libopenblas-base libopenblas-dev<\/code><\/pre>\n<h3 id=\"d32b\">Jupyter<\/h3>\n<p id=\"0fb1\">Jupyter is an awesome code-sharing format that lets you easily share \u201cnotebooks\u201d with code and tutorials. I will detail using it in the next post.<\/p>\n<pre id=\"f3ee\">pip install -U ipython[all] jupyter<\/pre>\n<h3 id=\"cbf1\">Theano<\/h3>\n<p id=\"5f3f\">Install the prerequisites, then install Theano:<\/p>\n<div id=\"1f4c\"><span style=\"font-family: courier new,courier,monospace;\">sudo apt-get install -y python-<\/span><span style=\"font-family: courier new,courier,monospace;\">numpy<\/span><span style=\"font-family: courier new,courier,monospace;\"> python-<\/span><span style=\"font-family: courier new,courier,monospace;\">scipy<\/span><span style=\"font-family: courier new,courier,monospace;\"> python-dev python-pip python-nose g++ python-<\/span><span style=\"font-family: courier new,courier,monospace;\">pygments<\/span><span style=\"font-family: courier new,courier,monospace;\"> python-sphinx<\/span><\/div>\n<div><span style=\"font-family: courier new,courier,monospace;\">sudo<\/span><span style=\"font-family: courier new,courier,monospace;\"> pip install Theano<\/span><\/div>\n<div><\/div>\n<p id=\"7d78\">Yes, that\u2019s a capital T in Theano.<\/p>\n<p id=\"ceae\">Test your Theano installation. 
There should be no warnings\/errors when the import command is executed.<\/p>\n<pre id=\"c4c0\"><code>python\r\n&gt;&gt;&gt; import theano\r\n&gt;&gt;&gt; exit()<\/code><\/pre>\n<pre id=\"5a54\">nosetests theano<\/pre>\n<h3 id=\"0dc0\">Keras<\/h3>\n<p id=\"6b1d\">Keras is an incredibly popular high-level abstraction wrapper that can surf on top of Theano and Tensorflow. Its installation and usage are so dead simple it\u2019s not even funny.<\/p>\n<pre id=\"6eb8\">sudo pip install keras<\/pre>\n<h3 id=\"5358\">Lasagne<\/h3>\n<p id=\"684d\">Lasagne is another widely used high-level wrapper that\u2019s a bit more flexible than Keras in that you can easily color outside the lines. Think of Keras as deep learning on rails and Lasagne as the next step in your evolution. The instructions for Lasagne install come from\u00a0<a href=\"http:\/\/lasagne.readthedocs.io\/en\/latest\/user\/installation.html\" target=\"_blank\" rel=\"noopener noreferrer\" data-href=\"http:\/\/lasagne.readthedocs.io\/en\/latest\/user\/installation.html\" data->here<\/a>.<\/p>\n<pre id=\"72a7\">pip install -r <a href=\"https:\/\/raw.githubusercontent.com\/Lasagne\/Lasagne\/v0.1\/requirements.txt\" target=\"_blank\" rel=\"noopener noreferrer\" data-href=\"https:\/\/raw.githubusercontent.com\/Lasagne\/Lasagne\/v0.1\/requirements.txt\" data->https:\/\/raw.githubusercontent.com\/Lasagne\/Lasagne\/v0.1\/requirements.txt<\/a><\/pre>\n<h3 id=\"426b\">MXNET<\/h3>\n<p id=\"f0b1\">MXNET is a highly scalable framework\u00a0<a href=\"http:\/\/www.allthingsdistributed.com\/2016\/11\/mxnet-default-framework-deep-learning-aws.html\" target=\"_blank\" rel=\"noopener noreferrer\" data-href=\"http:\/\/www.allthingsdistributed.com\/2016\/11\/mxnet-default-framework-deep-learning-aws.html\" data->backed by Amazon<\/a>.\u00a0Its install instructions can be found here. 
An install script for MXNet for Python can be found right\u00a0here.<\/p>\n<h4 id=\"1b2f\">Installing MXNet on\u00a0Ubuntu<\/h4>\n<p id=\"7736\">From the website:<\/p>\n<blockquote id=\"4255\"><p>MXNet currently supports Python, R, Julia, and Scala. For users of Python and R on Ubuntu operating systems, MXNet provides a set of Git Bash scripts that installs all of the required MXNet dependencies and the MXNet library.<\/p><\/blockquote>\n<blockquote id=\"c78a\"><p>The simple installation scripts set up MXNet for Python and R on computers running Ubuntu 12 or later. The scripts install MXNet in your home folder\u00a0<code>~\/mxnet<\/code>.<\/p><\/blockquote>\n<h4 id=\"d33e\">Install MXNet for\u00a0Python<\/h4>\n<p id=\"55b2\">Clone the MXNet repository. In terminal, run the commands WITHOUT \u201csudo\u201d:<\/p>\n<pre id=\"5ba9\">git clone <a href=\"https:\/\/github.com\/dmlc\/mxnet.git\" target=\"_blank\" rel=\"noopener noreferrer\" data-href=\"https:\/\/github.com\/dmlc\/mxnet.git\" data->https:\/\/github.com\/dmlc\/mxnet.git<\/a> ~\/mxnet --recursive<\/pre>\n<p id=\"737d\">We\u2019re building with GPUs, so add the GPU settings to the config.mk file:<\/p>\n<div id=\"a880\"><span style=\"font-family: courier new,courier,monospace;\">cd ~\/<\/span><span style=\"font-family: courier new,courier,monospace;\">mxnet<\/span><\/div>\n<div><span style=\"font-family: courier new,courier,monospace;\">cp make\/config.mk .<\/span><\/div>\n<div><span style=\"font-family: courier new,courier,monospace;\">echo &quot;USE_CUDA=1&quot; &gt;&gt;config.<\/span><span style=\"font-family: courier new,courier,monospace;\">mk<\/span><\/div>\n<div><span style=\"font-family: courier new,courier,monospace;\">echo &quot;USE_CUDA_PATH=\/usr\/local\/<\/span><span style=\"font-family: courier new,courier,monospace;\">cuda<\/span><span style=\"font-family: courier new,courier,monospace;\">&quot; &gt;&gt;config.<\/span><span style=\"font-family: courier 
new,courier,monospace;\">mk<\/span><\/div>\n<div><span style=\"font-family: courier new,courier,monospace;\">echo &quot;USE_CUDNN=1&quot; &gt;&gt;config.mk<\/span><\/div>\n<p>Install MXNet for Python with all dependencies:<\/p>\n<div><span style=\"font-family: courier new,courier,monospace;\">cd ~\/<\/span><span style=\"font-family: courier new,courier,monospace;\">mxnet<\/span><span style=\"font-family: courier new,courier,monospace;\">\/setup-utils<\/span><\/div>\n<div><span style=\"font-family: courier new,courier,monospace;\">bash install-<\/span><span style=\"font-family: courier new,courier,monospace;\">mxnet<\/span><span style=\"font-family: courier new,courier,monospace;\">-ubuntu-python.sh<\/span><\/div>\n<div><\/div>\n<p id=\"9442\">Add it to your path:<\/p>\n<pre id=\"9bdb\">source ~\/.bashrc<\/pre>\n<h4 id=\"52cf\">Install MXNet for\u00a0R<\/h4>\n<p id=\"e929\">We\u2019ll need R, so let\u2019s do that now. The installation script to install MXNet for R can be found\u00a0here. 
The steps below call that script after setting up the R language.<\/p>\n<p id=\"a6db\">First, add the R repo:<\/p>\n<div id=\"1b14\"><span style=\"font-family: courier new,courier,monospace;\">sudo<\/span><span style=\"font-family: courier new,courier,monospace;\"> echo &quot;deb <a href=\"http:\/\/cran.rstudio.com\/bin\/linux\/ubuntu\" target=\"_blank\" rel=\"noopener noreferrer\" data-href=\"http:\/\/cran.rstudio.com\/bin\/linux\/ubuntu\" data->http:\/\/cran.rstudio.com\/bin\/linux\/ubuntu<\/a> xenial\/&quot; | <\/span><span style=\"font-family: courier new,courier,monospace;\">sudo<\/span><span style=\"font-family: courier new,courier,monospace;\"> tee -a \/etc\/apt\/sources.list<\/span><\/div>\n<p id=\"f8c0\">Add R to the Ubuntu Keyring:<\/p>\n<div id=\"acab\"><span style=\"font-family: courier new,courier,monospace;\">gpg<\/span><span style=\"font-family: courier new,courier,monospace;\"> --keyserver keyserver.ubuntu.com --<\/span><span style=\"font-family: courier new,courier,monospace;\">recv<\/span><span style=\"font-family: courier new,courier,monospace;\">-key E084DAB9<\/span><\/div>\n<div><span style=\"font-family: courier new,courier,monospace;\">gpg<\/span><span style=\"font-family: courier new,courier,monospace;\"> -a --export E084DAB9 | <\/span><span style=\"font-family: courier new,courier,monospace;\">sudo<\/span><span style=\"font-family: courier new,courier,monospace;\"> apt-key add -<\/span><\/div>\n<div><\/div>\n<p id=\"2adc\">Install R-Base:<\/p>\n<pre id=\"8869\">sudo apt-get install r-base r-base-dev<\/pre>\n<p id=\"bf86\">Install R-Studio (altering the command for the correct version number):<\/p>\n<div id=\"8ee3\"><span style=\"font-family: courier new,courier,monospace;\">sudo apt-get install -y <\/span><span style=\"font-family: courier new,courier,monospace;\">gdebi<\/span><span style=\"font-family: courier new,courier,monospace;\">-core<\/span><\/div>\n<div><span style=\"font-family: courier 
new,courier,monospace;\">wget<\/span><span style=\"font-family: courier new,courier,monospace;\"> <a href=\"https:\/\/download1.rstudio.org\/rstudio-0.99.896-amd64.deb\" target=\"_blank\" rel=\"noopener noreferrer\" data-href=\"https:\/\/download1.rstudio.org\/rstudio-0.99.896-amd64.deb\" data->https:\/\/download1.rstudio.org\/rstudio-0.99.896-amd64.deb<\/a><\/span><\/div>\n<div><span style=\"font-family: courier new,courier,monospace;\">sudo<\/span> <span style=\"font-family: courier new,courier,monospace;\">gdebi<\/span><span style=\"font-family: courier new,courier,monospace;\"> -n <\/span><span style=\"font-family: courier new,courier,monospace;\">rstudio<\/span><span style=\"font-family: courier new,courier,monospace;\">-0.99.896-amd64.deb<\/span><\/div>\n<div><span style=\"font-family: courier new,courier,monospace;\">rm <\/span><span style=\"font-family: courier new,courier,monospace;\">rstudio<\/span><span style=\"font-family: courier new,courier,monospace;\">-0.99.896-amd64.deb<\/span><\/div>\n<div><\/div>\n<p id=\"03f6\">Now install MXNet for R:<\/p>\n<div id=\"7de9\"><span style=\"font-family: courier new,courier,monospace;\">cd ~\/<\/span><span style=\"font-family: courier new,courier,monospace;\">mxnet<\/span><span style=\"font-family: courier new,courier,monospace;\">\/setup-utils<\/span><\/div>\n<div><span style=\"font-family: courier new,courier,monospace;\">bash install-<\/span><span style=\"font-family: courier new,courier,monospace;\">mxnet<\/span><span style=\"font-family: courier new,courier,monospace;\">-ubuntu-r.sh<\/span><\/div>\n<h3 id=\"425c\"><strong>Caffe<\/strong><\/h3>\n<p id=\"2c25\">These instructions come from\u00a0<a href=\"http:\/\/caffe.berkeleyvision.org\/install_apt.html\" target=\"_blank\" rel=\"noopener noreferrer\" data-href=\"http:\/\/caffe.berkeleyvision.org\/install_apt.html\" data->the Caffe website<\/a>. I found them to be a little flaky depending on how the wind was blowing that day, but your mileage may vary. 
Frankly, I don\u2019t use Caffe all that much and many of the beginner tutorials out there won\u2019t focus on it, so if this part screws up for you, just skip it for now and come back to it.<\/p>\n<p id=\"2a51\">Install the prerequisites:<\/p>\n<div id=\"3a4b\"><span style=\"font-family: courier new,courier,monospace;\">sudo apt-get install -y <\/span><span style=\"font-family: courier new,courier,monospace;\">libprotobuf<\/span><span style=\"font-family: courier new,courier,monospace;\">-dev <\/span><span style=\"font-family: courier new,courier,monospace;\">libleveldb<\/span><span style=\"font-family: courier new,courier,monospace;\">-dev <\/span><span style=\"font-family: courier new,courier,monospace;\">libsnappy<\/span><span style=\"font-family: courier new,courier,monospace;\">-dev <\/span><span style=\"font-family: courier new,courier,monospace;\">libopencv<\/span><span style=\"font-family: courier new,courier,monospace;\">-dev libhdf5-serial-dev <\/span><span style=\"font-family: courier new,courier,monospace;\">protobuf<\/span><span style=\"font-family: courier new,courier,monospace;\">-compiler<\/span><\/div>\n<div><span style=\"font-family: courier new,courier,monospace;\">sudo apt-get install -y --no-install-recommends <\/span><span style=\"font-family: courier new,courier,monospace;\">libboost<\/span><span style=\"font-family: courier new,courier,monospace;\">-all-dev<\/span><\/div>\n<div><span style=\"font-family: courier new,courier,monospace;\">sudo apt-get install -y <\/span><span style=\"font-family: courier new,courier,monospace;\">libgflags<\/span><span style=\"font-family: courier new,courier,monospace;\">-dev <\/span><span style=\"font-family: courier new,courier,monospace;\">libgoogle<\/span><span style=\"font-family: courier new,courier,monospace;\">-<\/span><span style=\"font-family: courier new,courier,monospace;\">glog<\/span><span style=\"font-family: courier new,courier,monospace;\">-dev <\/span><span style=\"font-family: 
courier new,courier,monospace;\">liblmdb<\/span><span style=\"font-family: courier new,courier,monospace;\">-dev<\/span><\/div>\n<div><\/div>\n<p id=\"e4c0\">Clone the Caffe repo:<\/p>\n<div id=\"62eb\"><span style=\"font-family: courier new,courier,monospace;\">mkdir -p ~\/git &amp;&amp; cd ~\/git<\/span><\/div>\n<div><span style=\"font-family: courier new,courier,monospace;\">git clone <a href=\"https:\/\/github.com\/BVLC\/caffe.git\" target=\"_blank\" rel=\"noopener noreferrer\" data-href=\"https:\/\/github.com\/BVLC\/caffe.git\" data->https:\/\/github.com\/BVLC\/caffe.git<\/a><\/span><\/div>\n<div><span style=\"font-family: courier new,courier,monospace;\">cd <\/span><span style=\"font-family: courier new,courier,monospace;\">caffe<\/span><\/div>\n<div><span style=\"font-family: courier new,courier,monospace;\">cp Makefile.config.example Makefile.config<\/span><\/div>\n<p>To use cuDNN, set the flag\u00a0<code>USE_CUDNN := 1<\/code>\u00a0in Makefile.config:<\/p>\n<div><span style=\"font-family: courier new,courier,monospace;\">sed -i 's\/# USE_CUDNN := 1\/USE_CUDNN := 1\/' Makefile.config<\/span><\/div>\n<div><\/div>\n<p id=\"7aa0\">Set the BLAS parameter to open:<\/p>\n<pre id=\"84dc\"><code>sed -i 's\/BLAS := atlas\/BLAS := open\/' Makefile.config<\/code><\/pre>\n<p id=\"c205\">Install the requirements, then build Caffe, build the tests, run the tests and ensure that all tests pass. Note that all this takes some time. 
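The `-j` value in the make commands here is just shell arithmetic over your CPU core count. The same calculation in Python, to show what `-j` receives on your machine:

```python
# The make commands use -j $(($(nproc) + 1)): CPU core count plus one.
# os.cpu_count() reports the same number as nproc.
import os

cores = os.cpu_count()
jobs = cores + 1  # one extra job keeps the build saturated during I/O stalls
print("make will run %d parallel jobs on this machine" % jobs)
```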
As before, the <code>+ 1<\/code> is added to your CPU core count to set the number of parallel build jobs, so the build uses every core available.<\/p>\n<div id=\"e8f6\"><span style=\"font-family: courier new,courier,monospace;\">sudo<\/span><span style=\"font-family: courier new,courier,monospace;\"> pip install -r python\/requirements.txt<\/span><\/div>\n<div><span style=\"font-family: courier new,courier,monospace;\">make all -j $(($(<\/span><span style=\"font-family: courier new,courier,monospace;\">nproc<\/span><span style=\"font-family: courier new,courier,monospace;\">) + 1))<\/span><\/div>\n<div><span style=\"font-family: courier new,courier,monospace;\">make test -j $(($(<\/span><span style=\"font-family: courier new,courier,monospace;\">nproc<\/span><span style=\"font-family: courier new,courier,monospace;\">) + 1))<\/span><\/div>\n<div><span style=\"font-family: courier new,courier,monospace;\">make <\/span><span style=\"font-family: courier new,courier,monospace;\">runtest<\/span><span style=\"font-family: courier new,courier,monospace;\"> -j $(($(<\/span><span style=\"font-family: courier new,courier,monospace;\">nproc<\/span><span style=\"font-family: courier new,courier,monospace;\">) + 1))<\/span><\/div>\n<div><\/div>\n<p id=\"e840\">Build PyCaffe, the Python interface to Caffe:<\/p>\n<pre id=\"3ef5\">make pycaffe -j $(($(nproc) + 1))<\/pre>\n<p id=\"e2df\">Add Caffe to your environment variables (double quotes so <code>$(pwd)<\/code> expands now, single quotes so <code>$CAFFE_ROOT<\/code> expands later):<\/p>\n<div id=\"849e\"><span style=\"font-family: courier new,courier,monospace;\">echo &quot;export CAFFE_ROOT=$(<\/span><span style=\"font-family: courier new,courier,monospace;\">pwd<\/span><span style=\"font-family: courier new,courier,monospace;\">)&quot; &gt;&gt; ~\/.bashrc<\/span><\/div>\n<div><span style=\"font-family: courier new,courier,monospace;\">echo 'export PYTHONPATH=$CAFFE_ROOT\/python:$PYTHONPATH' &gt;&gt; ~\/.bashrc<\/span><\/div>\n<div><span style=\"font-family: courier new,courier,monospace;\">source ~\/.bashrc<\/span><\/div>\n<div><\/div>\n<p id=\"5ae6\">Test to 
ensure that your Caffe installation is successful. There should be no warnings\/errors when the import command is executed.<\/p>\n<pre id=\"b0ea\"><code>ipython\r\n&gt;&gt;&gt; import caffe\r\n&gt;&gt;&gt; exit()<\/code><\/pre>\n<h3 id=\"be0c\">Torch<\/h3>\n<p id=\"db4e\">Here are the Torch install instructions from the\u00a0<a href=\"http:\/\/torch.ch\/docs\/getting-started.html\" target=\"_blank\" rel=\"noopener noreferrer\" data-href=\"http:\/\/torch.ch\/docs\/getting-started.html\" data->Torch website<\/a>. I\u2019ve had some struggles installing this framework, but this usually works for most people.<\/p>\n<div id=\"444c\"><span style=\"font-family: courier new,courier,monospace;\">git clone <a href=\"https:\/\/github.com\/torch\/distro.git\" target=\"_blank\" rel=\"noopener noreferrer\" data-href=\"https:\/\/github.com\/torch\/distro.git\" data->https:\/\/github.com\/torch\/distro.git<\/a> ~\/git\/torch --recursive<\/span><\/div>\n<div><span style=\"font-family: courier new,courier,monospace;\">cd ~\/git\/torch; bash install-<\/span><span style=\"font-family: courier new,courier,monospace;\">deps<\/span><span style=\"font-family: courier new,courier,monospace;\">;<\/span><\/div>\n<div><span style=\"font-family: courier new,courier,monospace;\">.\/install.sh<\/span><\/div>\n<h3 id=\"7c7d\"><strong>Scala<\/strong><\/h3>\n<pre id=\"6d71\">sudo apt-get -y install scala<\/pre>\n<h3 id=\"9006\"><strong>Anaconda<\/strong><\/h3>\n<p id=\"a003\">Download\u00a0Anaconda for Python 3.6 right here. There\u2019s a 2.7.x version as well.<\/p>\n<p id=\"b6c1\">Install it:<\/p>\n<pre id=\"c643\">sudo bash Anaconda3-4.3.0-Linux-x86_64.sh<\/pre>\n<p id=\"b795\">Do NOT add it to your bashrc, or Python will default to Anaconda when you reboot. It is set to \u201cno\u201d by default in the script but you might be tempted to do it as I was at first. Don\u2019t. 
You\u2019ll want to keep the default pointed to Ubuntu\u2019s Python as a number of things are dependent on it.<\/p>\n<p id=\"7af6\">Besides, Anaconda lets you create environments so you can move back and forth between Python versions.<\/p>\n<p id=\"7ed4\">Let\u2019s create two Anaconda environments:<\/p>\n<div id=\"f168\"><span style=\"font-family: courier new,courier,monospace;\">conda create -n py2 python=2.7<\/span><\/div>\n<div><span style=\"font-family: courier new,courier,monospace;\">conda create -n py3 python=3.6<\/span><\/div>\n<div><\/div>\n<p id=\"375d\">Activate the Python 3 environment:<\/p>\n<pre id=\"e55f\">source activate py3<\/pre>\n<p>Now let\u2019s install all the packages for Anaconda:<\/p>\n<div id=\"f0ff\"><span style=\"font-family: courier new,courier,monospace;\">conda install pip pandas <\/span><span style=\"font-family: courier new,courier,monospace;\">scikit<\/span><span style=\"font-family: courier new,courier,monospace;\">-learn <\/span><span style=\"font-family: courier new,courier,monospace;\">scipy<\/span> <span style=\"font-family: courier new,courier,monospace;\">numpy<\/span> <span style=\"font-family: courier new,courier,monospace;\">matplotlib<\/span> <span style=\"font-family: courier new,courier,monospace;\">ipython<\/span><span style=\"font-family: courier new,courier,monospace;\">-notebook seaborn <\/span><span style=\"font-family: courier new,courier,monospace;\">opencv<\/span><span style=\"font-family: courier new,courier,monospace;\"> scrapy <\/span><span style=\"font-family: courier new,courier,monospace;\">nltk<\/span><span style=\"font-family: courier new,courier,monospace;\"> pattern<\/span><\/div>\n<div><\/div>\n<p id=\"2672\">Now we install pygraphviz and the R bridge with pip, since they aren\u2019t in Conda:<\/p>\n<pre id=\"7f88\">pip install pygraphviz rpy2<\/pre>\n<p id=\"56bb\">Reboot:<\/p>\n<pre id=\"6836\">sudo shutdown -r now<\/pre>\n<h3 id=\"bd17\"><strong>Install Tensorflow, Theano, and Keras 
for\u00a0Anaconda<\/strong><\/h3>\n<p id=\"06d3\">You\u2019ll install these libraries for both the Python 2 and 3 versions of Anaconda. You may get better performance using the Anaconda-backed libraries, as they contain performance optimizations.<\/p>\n<p id=\"de6d\">Let\u2019s do Python 3 first:<\/p>\n<div id=\"fae1\"><span style=\"font-family: courier new,courier,monospace;\">source activate py3<\/span><\/div>\n<div><span style=\"font-family: courier new,courier,monospace;\">pip install <\/span><span style=\"font-family: courier new,courier,monospace;\">tensorflow-gpu<\/span><span style=\"font-family: courier new,courier,monospace;\"> Theano <\/span><span style=\"font-family: courier new,courier,monospace;\">keras<\/span><\/div>\n<div><\/div>\n<p id=\"26fc\">Now deactivate the environment:<\/p>\n<pre id=\"7d70\">source deactivate<\/pre>\n<p id=\"b485\">Activate the Python 2 environment:<\/p>\n<pre id=\"e43d\">source activate py2<\/pre>\n<p id=\"ed68\">Install for py2:<\/p>\n<pre id=\"210f\">pip install tensorflow-gpu Theano keras<\/pre>\n<p id=\"3fac\">Deactivate the environment:<\/p>\n<pre id=\"6cc3\">source deactivate<\/pre>\n<p id=\"465f\">Now you\u2019re back in the standard Ubuntu shell, with the built-in Python 2.7.x and all the frameworks we installed for it.<\/p>\n<h3 id=\"aadb\"><strong>Conclusion<\/strong><\/h3>\n<p id=\"7f71\">There you have it. You\u2019ve purchased a top-notch machine or a budget-friendly alternative. You\u2019ve also got it set up with the latest and greatest software for deep learning.<\/p>\n<p id=\"f22f\">Now get ready to do some heavy number crunching. Dig up a tutorial and get to work! 
Be on the look out for the next article in my series, which dives into my approach to the\u00a0<a href=\"https:\/\/www.kaggle.com\/c\/data-science-bowl-2017\" target=\"_blank\" rel=\"noopener noreferrer\" data-href=\"https:\/\/www.kaggle.com\/c\/data-science-bowl-2017\" data->Kaggle Data Science Bowl 2017<\/a>, which races to beat lung cancer for a chance at prizes totaling one million dollars.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>This article guides you through getting a powerful deep learning machine setup and installed with all the latest and greatest frameworks. We&rsquo;re going to build our own&nbsp;Deep Learning Dream Machine. We&rsquo;ll source the best parts and put them together into a number smashing monster. We&rsquo;ll also walk through installing all the latest deep learning frameworks step by step on Ubuntu Linux 16.04. This machine will slice through neural networks like a hot laser through butter.<\/p>\n","protected":false},"author":393,"featured_media":24215,"comment_status":"open","ping_status":"open","sticky":false,"template":"single-post-2.php","format":"standard","meta":{"content-type":"","footnotes":""},"categories":[183],"tags":[97],"ppma_author":[2209],"class_list":["post-1000","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-ai-ml","tag-artificial-intelligence"],"authors":[{"term_id":2209,"user_id":393,"is_guest":0,"slug":"daniel-jeffries","display_name":"Daniel Jeffries","avatar_url":"https:\/\/secure.gravatar.com\/avatar\/?s=96&d=mm&r=g","user_url":"","last_name":"Jeffries","first_name":"Daniel","job_title":"","description":"Dan Jeffries is an author, engineer and serial entrepreneur. 
During his two decades in the computer industry, he&#039;s covered a broad range of tech from Linux to networks and virtualization.&nbsp;"}],"_links":{"self":[{"href":"https:\/\/www.experfy.com\/blog\/wp-json\/wp\/v2\/posts\/1000","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.experfy.com\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.experfy.com\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.experfy.com\/blog\/wp-json\/wp\/v2\/users\/393"}],"replies":[{"embeddable":true,"href":"https:\/\/www.experfy.com\/blog\/wp-json\/wp\/v2\/comments?post=1000"}],"version-history":[{"count":7,"href":"https:\/\/www.experfy.com\/blog\/wp-json\/wp\/v2\/posts\/1000\/revisions"}],"predecessor-version":[{"id":27953,"href":"https:\/\/www.experfy.com\/blog\/wp-json\/wp\/v2\/posts\/1000\/revisions\/27953"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.experfy.com\/blog\/wp-json\/wp\/v2\/media\/24215"}],"wp:attachment":[{"href":"https:\/\/www.experfy.com\/blog\/wp-json\/wp\/v2\/media?parent=1000"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.experfy.com\/blog\/wp-json\/wp\/v2\/categories?post=1000"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.experfy.com\/blog\/wp-json\/wp\/v2\/tags?post=1000"},{"taxonomy":"author","embeddable":true,"href":"https:\/\/www.experfy.com\/blog\/wp-json\/wp\/v2\/ppma_author?post=1000"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}