Over the past six months, a growing number of mobile manufacturers and developers have joined the mobile AI race, while AI appeals to ever more consumers. In the meantime, platforms, solutions and cutting-edge technologies for mobile AI have developed rapidly. The history of mobile phones is full of exciting moments, and the introduction of AI is set to be the next breakthrough.
The introduction and development of mobile AI do more than simply improve mobile features. As mobile capabilities, algorithms, development platforms, hardware, software and sensors improve, we will probably witness a lasting transformation in how people coexist with mobile devices. In other words, mobile AI may change everything from fixed handset features such as photography, gaming, and translation to our lifestyle, including travel, business, and family activities.
While we are all curious about what lies ahead, the mobile AI race is highly competitive, even a bit chaotic. Some companies use AI as a marketing gimmick, which raises the concern that "bad money drives out good". We believe it is time to dig into mobile AI and come up with some insights.
Imagine the map of mobile AI unfolding before us. Let's explore this mysterious world together and go through its milestones one by one.
First of all, let's talk about the past – or rather, a few months ago, when AI met mobile phones.
“Preliminary stage”: encounter of AI and mobile phones, and its challenges
Mobile AI only caught wide attention a few months ago, but to understand how AI met mobile phones, we need to go back nearly 70 years. Ever since AI was defined in the 1950s, academia has believed that it could do three things: talk, see, and think like a human being.
To make that happen, computer scientists and mathematicians worked for decades to deliver various solutions, ranging from logic systems and expert systems to machine learning.
What's interesting is that, now that smartphones dominate our lives, those three capabilities map directly onto our needs going forward – dialogue frees us from touchscreens; machine vision enriches photography, filming, and image processing; and machine learning based on multivariate data allows mobile phones to understand users' habits and needs.
Seen that way, the future of mobile phones lies in AI. But it is not easy for the mobile industry to put AI to good use. This may be called the "preliminary stage" of mobile AI.
For example, the iPhone impressed global users with its voice assistant, Siri. Siri evolved thanks to AI-driven voice interaction and semantic understanding: in the early days it was simply a Q&A template, and it was AI that made it smarter and smarter.
Apple also used AI to identify and label images.
In addition to voice and image, AI learned, too. In 2016, Huawei's Honor brand launched the Honor Magic, the first concept phone to use AI to understand user information and provide services.
Beyond that, AI appears elsewhere on mobile phones. For example, AI-based spatial algorithms help many dual-camera handsets produce better photos.
All these applications face a common problem: neural networks and convolution operations are fundamentally different from traditional computing and image-processing workloads. Traditional CPU+GPU-based mobile computing handles them inefficiently and consumes a great deal of energy. Granted, the iPhone classifies images, but image recognition on the phone itself is very slow and relies on cloud computing. That is why classification runs overnight, which makes for a poor user experience: users cannot see their pictures classified until the next day.
Although people appreciated the Honor Magic's recommended services, AI computing made the device extremely power-hungry, which was a big challenge.
Before 2017, the whole smartphone industry agreed that AI, for all its amazing features, was slow and energy-consuming. How could this problem be solved?
Birth and growth: AI duopoly
The history of technology shows that a breakthrough at a key turning point can transform a whole industry – oil-fired engines powered ships and vehicles, while alternating current transformed lighting and electrical devices.
Back to mobile AI. Given that the CPU and GPU were not the best options for complex AI computing, why not develop a dedicated AI processing unit built around deep learning and neural networks?
That dedicated unit is the neural processing unit (NPU). Built for deep learning, the NPU processes convolution, data transfer and other deep learning tasks much faster than other solutions. Data shows that, on the same AI task, Huawei's heterogeneous computing architecture improves energy efficiency by about 50 times and performance by about 25 times; it can recognize 2,000 images per minute.
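To see why convolution dominates this workload, here is a minimal, illustrative NumPy sketch of a 2D convolution (not Huawei's implementation). Every output pixel costs k*k multiply-accumulate operations over the whole image, exactly the kind of regular, parallel arithmetic an NPU accelerates and a general-purpose CPU grinds through slowly:

```python
import numpy as np

def conv2d(image, kernel):
    """Naive 2D convolution: the core workload a dedicated NPU accelerates.

    Each output pixel needs kh*kw multiply-accumulates, so cost scales with
    image size times kernel size -- heavy for a CPU, ideal for parallel hardware.
    """
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A 3x3 Laplacian (edge-detection) kernel applied to a toy 8x8 image.
image = np.arange(64, dtype=float).reshape(8, 8)
kernel = np.array([[0, -1, 0], [-1, 4, -1], [0, -1, 0]], dtype=float)
result = conv2d(image, kernel)
print(result.shape)  # (6, 6)
```

A real network stacks hundreds of such convolutions per frame, which is why the difference between general-purpose and dedicated hardware is measured in multiples, not percentages.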
However, some reporters suggested that Huawei’s AI capability was just for show, and would never be widely accepted.
In October, Huawei launched its flagship HUAWEI Mate 10, which was widely marketed on photography, image recognition, and user services. Huawei Honor followed suit. The latest HUAWEI P20 is no exception and has set a new record in DxOMark's mobile camera ranking.
So it is safe to say that all the flagship models of Huawei, Huawei Honor and Apple to be launched this year will feature AI capabilities, a solid foundation for innovative features.
Mobile AI has flourished in just a few months. It was reported that Apple and Huawei, the only two companies to have developed dedicated AI processing capabilities, are headed for a duel in mobile AI.
Google's Pixel 2 also packed a dedicated image processing unit (IPU) into its camera, while Samsung offered an AI experience through new voice interaction functions. More players have joined the mobile AI race, but one thing remains unchanged: dedicated processing capability is the basis of the AI experience.
Consensus on AI development: Why does AI need on-device computing?
Huawei, Apple and Google all seem to agree that the AI processing unit comes before the AI experience. But the reason behind that remains a mystery to many smartphone reviewers and analysts.
Many of us have used our phones to identify a flower, which is extremely helpful on a spring hike. However, this iconic AI function is slow: it takes the app a while to identify the flower, and much longer on a poor network. This is because image recognition requires enormous computing power, so the photo has to be uploaded to the cloud for matching, and we cannot get the result instantly.
As a matter of fact, the CPU and GPU can handle AI tasks too, just as the CPU can process images. The problem is that, without dedicated computing power, the process consumes too much energy and introduces delays. Lag and high power consumption may be acceptable for flower identification, but certainly not for AI optimization and recognition during a live broadcast.
This is the first reason to complete complex AI tasks on the device with an AI processing unit: it is faster, works in real time, and is energy-efficient.
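A back-of-the-envelope sketch makes the trade-off concrete. All the millisecond figures below are illustrative assumptions, not measurements from any device or network:

```python
# Rough latency comparison: cloud round trip vs. on-device inference.
# Every number here is an assumed, illustrative figure.

def cloud_latency_ms(upload_ms, inference_ms, download_ms):
    # Cloud path: the photo travels over the network both ways.
    return upload_ms + inference_ms + download_ms

def device_latency_ms(inference_ms):
    # On-device path: no network round trip at all.
    return inference_ms

# Assumptions: a 2 MB photo over a mediocre mobile uplink vs. a local NPU
# that runs inference somewhat slower than a cloud server.
cloud = cloud_latency_ms(upload_ms=1500, inference_ms=50, download_ms=100)
device = device_latency_ms(inference_ms=80)

print(f"cloud: {cloud} ms, on-device: {device} ms")
```

Under these assumptions the network dominates the cloud path entirely, which is why a slightly slower local chip still wins on responsiveness, and why the gap grows as the network degrades.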
On the other hand, Facebook recently raised public concern with its data leak scandal, and similar issues have hit Google and Apple. In the AI era, uploading our voices, images and videos for recognition and optimization is an inevitable trend, yet it feels odd to hand that data over to a cloud server thousands of miles away.
Users may not worry much about their landscape pictures, but most will hesitate to upload photos and videos of themselves or their loved ones to the cloud for AI processing at the risk of a data leak.
This is the second reason why mobile phones need an AI processing unit: processing on the device is much more secure. The better arrangement is "cloud-device synchronization", i.e. training models in the cloud and serving users on the device. Either way, on-device AI processing capability is indispensable, and hard to build.
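The division of labor behind "cloud-device synchronization" can be sketched in a toy example. The tiny linear model, the data, and the int8 quantization scheme below are purely illustrative assumptions, not any vendor's actual pipeline; the point is only that heavy training stays in the cloud while a compact model ships to the device:

```python
import numpy as np

def train_on_cloud(x, y, epochs=200, lr=0.05):
    """Cloud side: fit a tiny linear model y = w*x + b by gradient descent."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        pred = w * x + b
        grad_w = 2 * np.mean((pred - y) * x)
        grad_b = 2 * np.mean(pred - y)
        w -= lr * grad_w
        b -= lr * grad_b
    return np.array([w, b])

def quantize_for_device(weights):
    """Shrink float weights to int8 plus one scale factor for shipping."""
    scale = np.max(np.abs(weights)) / 127.0
    return np.round(weights / scale).astype(np.int8), scale

def device_predict(q_weights, scale, x):
    """Device side: run inference using only the compact quantized model."""
    w, b = q_weights.astype(float) * scale
    return w * x + b

x = np.array([0.0, 1.0, 2.0, 3.0])
y = 2.0 * x + 1.0                      # ground truth the cloud learns
weights = train_on_cloud(x, y)         # expensive step, done once, off-device
q, scale = quantize_for_device(weights)
print(device_predict(q, scale, 4.0))   # close to 9.0, computed locally
```

The user's input never leaves the phone at inference time; only the anonymized, pre-trained model travels, which is exactly the privacy argument above.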
One more thing should be noted in the history of mobile AI: what kind of AI experience do we actually expect?
After the HUAWEI Mate 10 unveiled scenario-based photography last year, the Xiaomi Mi MIX 2S presented a similar feature. More features like this will likely emerge in the near future and be widely adopted on mainstream models this year.
But is there anything genuinely new about AI? Steve Jobs' biggest contribution to smartphones was putting multiple apps and experiences into a single device. AI is supposed to offer more options and spark our imagination, so why are we busy copying each other?
It is easy to develop an AI algorithm (and even easier to copy one), but making all AI functions work together is a real challenge. That is exactly the third reason to complete AI computing on the device: to accelerate AI processing through hardware, and to drive healthy growth across the whole ecosystem.
No matter how good an AI idea is, it goes nowhere if mobile phones offer only AI computing power and a few simple AI functions without open interfaces for developers.
Seen that way, the choice is not difficult: launch a few AI functions to test the water, or open the platform and bet on millions of developers' ideas to build an AI ecosystem.
Therefore, the birth and evolution of mobile AI boil down to one simple question: How should we develop AI? The answer is to take the most challenging route, from building AI capabilities, to platforms, to the whole ecosystem.
After all, AI is not magic. It won’t flourish without nutrients.