Google taps Movidius to bring machine learning to the handset
Movidius, a vision processing partner in Google’s Project Tango 3D sensing handset platform, is expanding its relationship with the search giant into artificial intelligence, aiming to bring machine learning out of the massive server and into the handset.
The Irish company’s Myriad processors are used in Tango, supporting 3D and HD streaming at very low power levels and enabling new vision and motion tracking applications. Developers have used those capabilities to create user experiences including augmented reality and 3D spatial awareness. With the fruits of the latest collaboration, those smartphone and wearable experiences could become significantly more intelligent.
As CEO Remi El-Ouazzane told EETimes, the companies are working to deploy Google’s neural networking engine on a Movidius computer vision platform, so that it can be used locally on a device, even when that device is not connected to the Internet. Google will buy the smaller firm’s vision SoCs and license its software development environment.
Deep learning models will be extracted from Google’s data centers and run on mobile devices. Movidius’s vision processor will be able to use these to “detect, identify, classify and recognize objects, and generate highly accurate data, even when objects are in occlusion,” said El-Ouazzane. Google and its machine intelligence group in Seattle will develop commercial applications based around this deep learning platform, an effort which will challenge IBM’s Watson initiative in some areas.
IBM has also been working on mobile implementations of its famous AI engine, which has a growing developer ecosystem and commercial service portfolio around it, and its own business unit. Big Blue will be highly aware that, while IBM machines have starred in early competitions against humans (Deep Blue in chess, Watson itself in Jeopardy), it was a Google-programmed machine which this week scored the latest, and most challenging, such win, defeating a human champion at the fiendish Chinese game of Go.
These races are not just about computer science kudos, but about being at the forefront of developing the next generation of user experiences and applications, and therefore having a headstart in monetizing them. Google, Facebook, Baidu and others are investing heavily in AI engines and in new device platforms incorporating virtual reality, 3D and enhanced context awareness. Now all these could come together on the mobile gadget (or driverless car), and to ensure it gets there first, Google is likely to expand its range of partnerships, while insisting that those partners optimize their technologies for its software.
While Google’s high profile efforts in autonomous vehicles are obvious targets for client-side AI, the first priority for Movidius is to get its chip into mobile devices. El-Ouazzane said in the interview that his firm’s embedded vision SoC was “to the IoT space, as Mobileye’s chips are to the automotive market.” Mobileye is the market leader in vision chips for ADAS (Advanced Driver Assistance Systems).
With Google’s backing and the neural networking capabilities, Movidius has a significant chance to establish leadership in embedded vision chips, an area which is, for now, fairly under-populated. There are various AI-enabled vision platforms for larger systems such as gaming consoles and cars, but few that are sufficiently low-cost and low-power for consumer devices like wearables. NXP’s CogniVue division, for instance, focuses only on automotive. Qualcomm is an exception with its Zeroth machine intelligence platform, which supports embedded vision and has been shown off in Snapdragon SoCs for ADAS, smartphones and robots. But Google says it chose Movidius because its latest chip, the Myriad 2 MA2450, is the only commercially available product that can do machine learning on the device.