  • Artificial Intelligence
  • Ben Dickson
  • MAR 14, 2019

The cloud is becoming AI’s bottleneck


Image credit: Depositphotos

In the past decade, artificial intelligence has escaped the confines of research labs and found its way into many of the things we do every day. From online shopping and content recommendation to healthcare and self-driving cars, we are interacting with AI algorithms, in many cases without even knowing it.

But we’ve barely scratched the surface, many believe, and artificial intelligence has much more to offer. Ironically, one of the things that is preventing AI from realizing its full potential is the cloud, one of the main technologies that helped usher AI into the mainstream.

“The reason we still don’t see AI everywhere is not that the algorithms or technology are not there. The main reason is cloud dependency,” says Ali Farhadi, co-founder and CXO at Xnor, a Seattle-based AI hardware startup.

Edge AI, the collective name for hardware and software that enable the performance of AI tasks without a link to the cloud, has gained much steam in the past few years. And as Farhadi and many other experts believe, the edge (or the fog, as some like to call it) will unlock many new AI applications that weren’t possible before.

The costs of cloud-based artificial intelligence

AI owes its recent rise in popularity to advances in deep learning algorithms and neural networks. But one of the main limits of neural networks is their requirement for vast amounts of compute resources, which are mostly available in public cloud platforms.

“AI algorithms are very demanding on compute, memory and power. Those have been a very limiting factor in scaling AI use cases,” Farhadi says.

There are many ways AI’s dependency on the cloud becomes problematic. “Many use cases of AI will happen at the true edge. They need to run 24/7 and multiple times per second,” Farhadi says, adding that running those many AI inferences in the cloud would be too costly for many use cases and businesses.

Connectivity to the cloud also imposes costs on the devices that will be running the applications at the edge. Every one of those AI-powered devices will need expensive communication hardware modules as well as a network infrastructure that can support its connection to the cloud. For instance, imagine wanting to deploy an array of smart sensors in a large industrial complex. The individual costs of those devices plus all the switches and networking devices that will connect them to the cloud are truly excessive.

Even in smaller and well-connected settings such as smart homes, AI-powered devices impose costs that most people can’t bear. An interesting example is smart security cameras, devices that use AI algorithms to detect intruders or safety incidents.

“These things cost $200 per device,” Farhadi says. “In a true smart home, I actually have to have 20-30 cameras. I would go bankrupt if I wanted to install this many devices in my home. And there will also be a constant running cost of AI computation in the cloud.”

The limits of AI in the cloud


Costs are not the only problem of cloud-based artificial intelligence. In many settings, a connection to the cloud is either nonexistent or unstable. This limits some of the very important use cases of AI such as agriculture, where computer vision algorithms and other AI techniques can help in precision farming. But farms are usually located in areas where getting a stable broadband internet connection is a real challenge.

Another example is automated rescue drones, which need to work in environments where communications infrastructure is weak or has been damaged due to natural disasters. Again, without the AI cloud, the drones won’t be able to function properly.

Latency is another challenge of the cloud. As AI algorithms find their way into the physical world, they need to perform many of their tasks in real time. An example is self-driving cars, which use deep learning algorithms to detect cars, objects and pedestrians. In some situations, they need to make split-second decisions to avoid fatal collisions.
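The split-second argument comes down to simple arithmetic: the distance a vehicle covers while waiting on a cloud round trip. The sketch below uses illustrative numbers of my own (a 150 ms network round trip, a 20 ms on-device inference), not figures from the article.

```python
# Back-of-envelope latency budget: how far a car travels while
# waiting for a detection result. All numbers are assumptions.

speed_kmh = 110             # assumed highway speed
speed_ms = speed_kmh / 3.6  # ~30.6 meters per second

latencies = {
    "cloud round trip": 0.150,   # assumed 150 ms network + inference
    "edge inference":   0.020,   # assumed 20 ms on-device
}

for label, latency_s in latencies.items():
    distance_m = speed_ms * latency_s
    print(f"{label}: {distance_m:.1f} m traveled before a decision arrives")
```

Even with an optimistic network, the cloud path costs several meters of travel per decision, on top of the risk of the connection dropping entirely.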

“I would never get into a car that is going to be driven by an algorithm that runs in the cloud,” Farhadi says, naming several ways that things can go wrong when a car’s AI algorithms are being processed in the cloud.

Another issue that has turned into a pain point for the AI industry is the privacy concerns of sending your data to the cloud. Many people are not comfortable with purchasing devices and applications that are constantly streaming audio and video from their home to the cloud to be processed by an AI algorithm.

“I don’t have any smart security cameras, because I just don’t want a picture of my daughter’s room to be uploaded somewhere in the cloud, even though Google and others will say this is very secure,” Farhadi says. “It’s just basically a major security concern.”

AI’s power consumption problem

Having a constant connection to the internet and streaming data to the cloud consumes a lot of electricity. This has become a pain point for many use cases of artificial intelligence, especially in fields like wearables and always-sensing devices that don’t have a mains power supply and are running on batteries.

“If you have to replace a battery every three or six months, people won’t want to use it,” Farhadi says. This is especially true in settings like smart cities, where the number of devices can reach hundreds of millions.

Power consumption also creates an environmental problem. “If the future we’re depicting is true, we’ll be surrounded with lots of devices that are going to make our lives much easier and simpler. That means we’re going to have billions of AI-powered devices. And if you want to do it with a cloud-based AI solution, the carbon footprint of doing that many inferences in the cloud would damage the planet significantly,” Farhadi explains. “When you think about these problems and scale, power has become one of the biggest issues that we have.”

The power of edge AI


In the past years, AI hardware has become a growing industry, giving rise to an array of startups that are creating specialized chips for performing AI tasks. Many of the efforts are focused on bringing AI closer to the edge and reducing dependencies on the cloud.

Xnor is one of several companies developing edge AI devices, but what makes it different is its focus on costs and energy consumption. Two years ago, the company created an object detection AI that could run on a Raspberry Pi Zero. The model running on the Raspberry Pi was a 16-layer object detection neural network. In other words, it was a very expensive algorithm running on a cheap and weak platform. They called it the five-dollar deep learning machine.

https://youtu.be/Na42U8_Nuxg

More recently, Xnor went a step further and developed a smaller and more efficient device that replaces the $5 Pi Zero with a two-dollar field-programmable gate array (FPGA). Xnor’s new edge AI contraption runs a 30-layer convolutional neural network, a fairly complicated AI model, and can analyze video at 32 frames per second.

However, what makes the new device interesting is that it runs solely on solar power and needs no other power supply. According to Farhadi, with a coin-cell battery, the device could run round the clock for 32 years.
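The decades-on-a-coin-cell claim can be sanity-checked with a rough energy budget. The battery figures below are typical CR2032 datasheet values I am assuming for illustration, not numbers from Xnor; the point is the order of magnitude of the power draw such a lifetime implies.

```python
# Rough energy budget for running 32 years on a coin cell.
# Assumes a typical CR2032: ~225 mAh at 3.0 V.

capacity_mah = 225
voltage_v = 3.0
energy_j = capacity_mah / 1000 * voltage_v * 3600  # ~2430 joules total

years = 32
seconds = years * 365.25 * 24 * 3600

avg_power_w = energy_j / seconds
print(f"Average power budget: {avg_power_w * 1e6:.1f} microwatts")
```

The budget works out to roughly 2.4 microwatts on average. A device keeping a Wi-Fi or cellular radio alive for cloud streaming draws orders of magnitude more than that, which is why cutting the cloud link is what makes solar-plus-coin-cell operation plausible at all.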

“If you want to scale everywhere, you have to be able to run on commodity platforms,” Farhadi says.

Low-cost, low-power AI devices that can run independently of the cloud can open the way for many new use cases. They also solve the privacy issue associated with many current AI solutions. Xnor’s smart camera only needs to send a few bits every time it detects an object.

Since the data transfer requirements of edge AI devices are very low, they can run on communication platforms such as LoRa, which can send data at low rates as far as 20 miles. This again reduces the costs of networking equipment as well as power consumption levels on the device.
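To make "a few bits per detection" concrete, here is one way such an event could be packed for a low-rate link. The field layout (device ID, class ID, quantized confidence) is entirely hypothetical, not Xnor's actual wire format; it simply shows that a detection fits in four bytes, well under LoRa's typical 51-222 byte payload limits.

```python
import struct

def pack_detection(device_id: int, class_id: int, confidence: float) -> bytes:
    """Pack one detection event: 2-byte device ID, 1-byte class ID,
    and a confidence score quantized to one byte (0-255).
    This layout is a hypothetical example, not a real protocol."""
    q_conf = max(0, min(255, round(confidence * 255)))
    return struct.pack(">HBB", device_id, class_id, q_conf)

msg = pack_detection(device_id=42, class_id=3, confidence=0.87)
print(len(msg), msg.hex())  # 4 bytes per detection event
```

Compare that with streaming raw video to the cloud for analysis: the bandwidth, radio power, and privacy exposure differ by many orders of magnitude.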

“We’re going after enabling everything around us to become smarter than it is with literally no cost, on existing compute platforms, maybe no battery or a battery every decade or so,” Farhadi says.

An area where low-power edge AI can help is smart wearables, where privacy and power consumption are both serious issues.

Settings such as smart homes and smart cities, where scale is a challenge, can also benefit from the cost efficiency of innovations such as Xnor’s low-cost deep learning device. Farhadi says that replacing the FPGA with an application-specific integrated circuit (ASIC) would drop the cost from two dollars to a few cents.

Also, in environments where you either have to wire thousands of devices or change their batteries every so often, having self-sufficient AI devices that can run on their own power source for decades saves a lot of manual maintenance effort.

“We want to revolutionize the way people think about AI at the edge, and maybe change the definition of AI at the edge,” Farhadi says.

Finding the balance between cloud and edge AI


Does innovation at the edge mean that cloud AI will disappear? Probably not.

“Cloud was extremely helpful in getting AI out of the labs and into the market. It served AI really well. One of the reasons we saw such a boost in AI was the cloud,” Farhadi says.

Farhadi believes that some artificial intelligence applications will remain native to the cloud, including applications that need a lot of updates and applications that require a lot of data bandwidth to and from the device.

Farhadi also acknowledges that edge AI doesn’t come without trade-offs. For one thing, the flexibility and compute power of the cloud can’t be replicated at the edge. “You can’t analyze video at hundreds of frames per second on a two-dollar device,” he says.

But for most applications, the functionality offered by resource-constrained, edge AI devices is more than enough.

“We’re not advocating that everything has to go to the edge. But there are a lot of AI applications that don’t really need to run in the cloud. That’s what we’re after,” Farhadi says.

In the long run, artificial intelligence will be divided into cloud-native, edge-native and hybrid applications. The point, Farhadi believes, is to understand the opportunities and power that lie in edge AI and to choose solutions that best serve each use case. “Cloud is not the only solution. There’s a spectrum,” he says.
