Apple set the precedent a decade ago; an explosion followed in 2018
In 2011, Apple integrated Siri into the iPhone 4S, letting users perform a range of operations by voice, including searching for information, adding calendar events, and even composing text messages and emails. Since then, Apple has continued to optimize the voice assistant experience and add new functions.
In the beginning, voice assistants in the Android camp were mainly third-party apps with fairly limited functionality.
Then major Android manufacturers began making a push, integrating their own voice assistants into their phones; many niche brands followed suit, and the feature became an obligatory mention at launch events, touted as a major selling point. Voice assistants soon went from standard on flagships to widespread across mid-range and high-end models, opening up a new track.
With Apple's Siri as a lesson to learn from, Android manufacturers had a clear picture of what functions a voice assistant should implement, and many went beyond Siri. For example, at the time Siri could not send red envelopes to WeChat friends by voice, but the voice assistants of Android manufacturers could.
As for why the major phone manufacturers pushed intelligent voice assistants so hard, I think it is because the voice assistant was one of the most visible showcases of a phone's AI capabilities at the time, and it also gave users a new way to interact with the device.
Functions keep growing, but usage scenarios are limited and efficiency is low
Going from zero to something is the easy way to impress consumers. Teasing the voice assistant became a source of amusement for many users, but of course voice assistants are not just for teasing.
They can help users with a great deal: playing songs, checking the weather and date, opening apps, telling jokes, sending WeChat messages or red envelopes to friends, sending text messages, making calls, ordering takeout, setting alarms, turning on the camera, downloading software, searching, playing videos, finding a lost phone, and so on. The feature set is quite rich, and new functions are constantly being added.
An AI voice assistant will also learn the user's habits and optimize the voice interaction experience.
Some manufacturers have added custom instructions ("skill learning") to their voice assistants: a preset keyword triggers the assistant to carry out the associated functions. The premise, of course, is that you first teach it by hand.
There are also combined commands. For example, if I tell the voice assistant "I'm going for a run," it turns on Bluetooth, opens the music app, and sets the volume to 50%, which is quite handy.
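Conceptually, a combined command binds one trigger phrase to an ordered list of actions. Here is a minimal sketch of that idea; all names (the action functions, `run_command`, the trigger phrase) are hypothetical, since real assistants configure this through a settings screen rather than code.

```python
# Sketch: one trigger phrase -> an ordered list of actions.
# Everything here is illustrative, not a real assistant API.
from typing import Callable, Dict, List

def enable_bluetooth() -> str:
    return "bluetooth on"

def open_music_app() -> str:
    return "music app opened"

def set_volume(percent: int) -> Callable[[], str]:
    # Returns an action preconfigured with the desired volume.
    def action() -> str:
        return f"volume set to {percent}%"
    return action

# The combined-command table: trigger phrase -> actions, run in order.
COMBINED_COMMANDS: Dict[str, List[Callable[[], str]]] = {
    "i'm going for a run": [enable_bluetooth, open_music_app, set_volume(50)],
}

def run_command(utterance: str) -> List[str]:
    """Execute every action bound to the recognized trigger phrase."""
    actions = COMBINED_COMMANDS.get(utterance.strip().lower(), [])
    return [act() for act in actions]

print(run_command("I'm going for a run"))
# → ['bluetooth on', 'music app opened', 'volume set to 50%']
```

The key design point is that the user, not the manufacturer, fills in the table, which is exactly why the feature only helps after you have "taught it by hand."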
In recent years, the AI computing power of phone chips has steadily improved, and so has speech recognition technology. Voice assistants' semantic understanding, recognition accuracy, and response speed are all markedly better than before.
Yet although the voice assistant's functions keep growing richer and the experience keeps improving, it still struggles to become a feature most users rely on frequently. I see two main reasons: limited usage scenarios and low efficiency.
First, the usage scenarios.
For most people, a large part of the day is spent commuting or at the office, and these are public places. Calling out "Hi XX" on the subway, on a bus, or in the office to have it read out the weather, place a call, or send a voice message is inevitably a bit embarrassing. Most people simply do not do it; at most they ask about the weather at home or when few people are around. Even users willing to use voice control have to consider privacy. Compared with devices like smart speakers at home, phones are mostly used in public, which makes it hard to exercise the voice assistant's functions.
Of course, some scenarios suit voice assistants very well.
For example, when it is inconvenient to operate the phone at home, you can use the voice assistant to send a message or make a call; or while driving, you can have it send messages or set up navigation. In relatively private scenarios like these, the voice assistant can genuinely help users solve problems.
But these are not high-frequency scenarios, so it is hard to cultivate a usage habit in most users.
As for efficiency, the first question is whether the voice assistant can understand what we are saying. Semantic understanding and speech recognition accuracy have improved markedly over previous years.
But Chinese is broad and profound, and the same word can mean different things in different contexts. Even when the assistant transcribes the speech correctly, it may not understand which command is intended, so it cannot respond or give feedback correctly. That forces us to repeat the request, or to phrase our commands in words it can understand. We wanted to save effort by just opening our mouths, but in the end it is more troublesome than operating by hand, and efficiency is low. Used this way, users will not be willing to rely on the voice assistant.
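The "correct transcription, wrong understanding" failure described above can be illustrated with a toy keyword-based intent matcher; the intent names and phrases below are invented for illustration and do not reflect any real assistant's internals, which use far more sophisticated models.

```python
# Toy keyword-based intent matcher, illustrating why rigid matching
# forces users to phrase commands the assistant's way.
# All intents and phrases here are hypothetical.
from typing import Optional

INTENTS = {
    "set_alarm": ["set an alarm", "wake me up"],
    "play_music": ["play music", "play a song"],
}

def match_intent(utterance: str) -> Optional[str]:
    """Return the matched intent, or None if no known phrase appears."""
    text = utterance.lower()
    for intent, phrases in INTENTS.items():
        if any(p in text for p in phrases):
            return intent
    # Transcription may be perfect, yet no command was understood.
    return None

print(match_intent("wake me up at 7"))         # matches set_alarm
print(match_intent("i could use some tunes"))  # None: user must rephrase
```

The second utterance is transcribed flawlessly but matches nothing, so the user has to fall back to the assistant's preferred wording, which is precisely the efficiency loss the text describes.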
There is also a learning cost to using a voice assistant. Young people may pick it up easily, but for many older users the barrier is hard to cross, so they prefer the traditional way of operating the phone. A mature, familiar interaction method already exists; why spend effort learning another one that is less efficient?
Voice assistants do adapt to users through learning, but in many cases the user still has to cater to the assistant to finally get the desired result.
In the final analysis, today's AI voice assistant merely executes the user's instructions; it cannot truly converse with and serve the user the way a human would. The lack of usage scenarios and the low operating efficiency mean that although voice assistants have become a standard feature of smartphones, not many people actually use them.
Voice assistants still need work, but the future looks promising
Although the phone voice assistant is in a somewhat awkward position right now, it should not be written off entirely. It plays an important role in many scenarios; if it were truly useless, manufacturers would have abandoned it long ago.
And although manufacturers no longer over-promote voice assistants, the functions and experience are still being continuously updated and upgraded; the work goes on quietly.
Exploring more usage scenarios, improving recognition accuracy and semantic understanding, streamlining operation flows, and building the surrounding ecosystem are what manufacturers still need to keep doing.
I am still looking forward to the future of phone voice assistants. Although voice assistants have been on phones for nearly a decade, their rapid development has come only in recent years. They are still a relatively young feature and need time to mature. I believe that as the technology develops, voice assistants will eventually be able to communicate with users like a human, understanding and fulfilling their needs.
They could become an intelligent butler like Jarvis in Iron Man, providing any information we want, making communication and search easier still, and serving as the hub for all the devices at home, responding the moment we speak.