Moving AI processing to the edge will enable a new class of intelligent machines
Going forward, much of the data from electronic gadgets that flies up to the cloud will stick closer to Earth, processed in hardware that lives at the so-called edge—for example, inside security cameras or drones.
That’s why Nvidia, the processor company whose graphics processing units (GPUs) are powering much of the boom in deep learning, is now focused on the edge. Deepu Talla, Vice President and General Manager of the company’s Tegra business unit, says bringing AI technology to the edge will make a new class of intelligent machines possible.
“These devices will enable intelligent video analytics that keep our cities smarter and safer, new kinds of robots that optimize manufacturing, and new collaboration that makes long-distance work more efficient,” he said in a statement.
Why the move to the edge? At a press event held Tuesday in San Francisco, Talla gave four main reasons: bandwidth, latency, privacy, and availability. Bandwidth is becoming an issue for cloud processing, he indicated, particularly for video, because cameras in applications such as public safety are both moving to 4K resolution and multiplying in number.
“By 2020, there will be 1 billion cameras in the world doing public safety and streaming data,” he said. “There’s not enough upstream bandwidth available to send all this to the cloud.” So, processing at the edge will be an absolute necessity.
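Talla didn’t walk through the math, but a quick back-of-envelope calculation shows why the numbers don’t work. In the sketch below, the per-camera bitrate is my assumption (a plausible figure for compressed 4K video), not an Nvidia number:

```python
# Rough sanity check of the "not enough upstream bandwidth" claim.
CAMERAS = 1_000_000_000   # Talla's 2020 estimate of public-safety cameras
MBPS_PER_CAMERA = 15      # assumed bitrate for H.265-compressed 4K video

total_tbps = CAMERAS * MBPS_PER_CAMERA / 1_000_000  # megabits -> terabits
print(f"Aggregate upstream demand: {total_tbps:,.0f} Tb/s")
# Prints: Aggregate upstream demand: 15,000 Tb/s
```

Even with aggressive compression, a billion cameras would demand thousands of terabits per second of sustained upstream capacity, far beyond what the internet's long-haul links could carry. Filtering and analyzing the video on the camera itself sidesteps the problem.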
Latency, he said, becomes an issue in robotics and self-driving cars, applications in which decisions have to be made with lightning speed. Privacy, of course, is easier to protect when data isn’t moving around. And availability of the cloud, Talla pointed out, is an issue in many parts of the world where communications are limited.
“We will see AI transferring to the edge,” he said, with future intelligent applications using a combination of edge and cloud processing.
Nvidia, of course, wasn’t painting this glowing picture of edge computing without some self-interest. At the event, the company announced its new edge-processing platform, the Nvidia Jetson TX2. This credit card–size module is a plug-in replacement for the company’s Jetson TX1, designed for embedded computing. Depending on how it is configured, it can run at twice the speed of its predecessor or deliver comparable performance at half the power. The developer kit costs $600 ($300 for educators); the production version will sell for $400.
Developers, showing off their work at the launch event, were happy to point out how they are using or intend to use internal AI processing at the edge. A few examples:
- Cisco demonstrated a new collaboration device that will work with its Spark Board system and use AI to recognize the people in the room, automatically frame a field of view that emphasizes participants instead of empty chairs, adjust as people come and go, and zoom in on whoever is speaking (a rough sketch of this kind of framing logic appears after this list).
- Artec, a company with impressive 3D scanning technology that I’ve previously covered, showed a new scanner in development that extracts geometry and color information and stitches together a 3D file in real time; its previous scanners needed to be connected to a computer.
- Teal Drones showed off a $1,300 smart drone, shipping in three months, that can understand and react to what its cameras are seeing. Bob Miles, product and project manager, says putting AI on board will distinguish his drone from competitors. “My father, a farmer in Australia, spends about a third of his time counting cattle,” Miles told me. He thinks a smart drone could do that for him. He also imagines some fun apps—like playing hide and seek with your drone—as well as some more serious ones, like distinguishing aggressors from non-aggressors for law enforcement purposes.
- EnRoute, another drone company, uses onboard AI to navigate and avoid obstacles. Moving from the current Jetson TX1 to the TX2, said Nvidia’s Barrett Williams, will let its Zion drone fly faster without losing that ability.
- Live Planet introduced a $10,000 4K 360-degree 3D camera for live video streaming; it uses AI to encode the 3D video in real time. “The camera produces a stream of 65 gigabytes,” chief strategy officer Khayyam Wakil said, far too much data to transmit to a cloud server. “We couldn’t do this product before Jetson,” he said.
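None of these companies shared code, but the auto-framing behavior Cisco described boils down to detecting the people in a frame and cropping the view to a padded box around them. Here is a minimal sketch of that idea, using OpenCV’s stock face detector as a lightweight stand-in for whatever deep-learning models the product actually runs on the Jetson; the padding factor and overall logic are my illustration, not Cisco’s implementation:

```python
import cv2

# OpenCV's bundled frontal-face Haar cascade, standing in for the
# neural-network detectors such products would run on the Jetson.
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def framing_box(frame, pad=0.25):
    """Return a crop rectangle covering every detected face, with padding."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None  # nobody detected: keep the wide shot
    xs = [x for x, y, w, h in faces] + [x + w for x, y, w, h in faces]
    ys = [y for x, y, w, h in faces] + [y + h for x, y, w, h in faces]
    x0, y0, x1, y1 = min(xs), min(ys), max(xs), max(ys)
    dx, dy = int((x1 - x0) * pad), int((y1 - y0) * pad)
    h, w = frame.shape[:2]
    return (max(0, x0 - dx), max(0, y0 - dy),
            min(w, x1 + dx), min(h, y1 + dy))

cap = cv2.VideoCapture(0)  # the conference-room camera
ok, frame = cap.read()
if ok:
    box = framing_box(frame)
    if box is not None:
        x0, y0, x1, y1 = box
        view = frame[y0:y1, x0:x1]  # the participants-only crop
```

A shipping system would use a GPU-accelerated person detector and smooth the crop over time so the framing doesn’t jitter, but the principle is the same, and the point of the edge pitch is that all of this inference happens on the device, with no video leaving the room.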