
Machine Learning Changes the Way You Work With a Smartphone

This article examines the various stages of machine learning development in smartphones and predicts their future.

The smartphone chip has come a long way since its early days. Until a few years ago, the vast majority of low-cost phones did not have much processing power, but today’s mid-range smartphones perform as well as the flagships of a year or two ago.

According to Android Authority, now that the average smartphone can handle common everyday tasks, chipmakers and developers have set their sights higher.

With this in mind, it is clear why supporting technologies such as artificial intelligence and machine learning (ML) are in the spotlight; but what does machine learning on smart devices actually mean, especially for end users like you and me?

In the past, data for machine learning tasks had to be sent to the cloud for processing.

That approach had serious downsides, from slow response times to privacy concerns and bandwidth constraints. Today, thanks to advances in chip design and machine learning research, modern smartphones can run these workloads completely offline.
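To make the difference concrete, here is a minimal sketch of on-device inference with TensorFlow Lite in Kotlin. The model file name, input shape, and output size are all hypothetical placeholders; the point is that `Interpreter.run()` executes locally, with no network round trip.

```kotlin
import org.tensorflow.lite.Interpreter
import java.io.FileInputStream
import java.nio.MappedByteBuffer
import java.nio.channels.FileChannel

// Memory-map a .tflite model file so the interpreter can read it directly.
fun loadModel(path: String): MappedByteBuffer =
    FileInputStream(path).channel.use { channel ->
        channel.map(FileChannel.MapMode.READ_ONLY, 0, channel.size())
    }

fun main() {
    // Hypothetical classifier: 4 float features in, 3 class scores out.
    val interpreter = Interpreter(loadModel("model.tflite"))
    val input = arrayOf(floatArrayOf(0.1f, 0.2f, 0.3f, 0.4f))
    val output = arrayOf(FloatArray(3))
    interpreter.run(input, output) // inference happens entirely on the device
    println("Scores: ${output[0].joinToString()}")
    interpreter.close()
}
```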

To understand the implications of this development, let us examine how machine learning has changed the way we use smartphones daily.

The introduction of machine learning into smart devices: improved photography and text prediction

In the mid-2010s, an industry-wide race to improve camera image quality intensified year by year, and that race, in turn, became the main motivator for adopting machine learning.

Manufacturers found that this technology could help close the gap between smartphones and dedicated cameras, even with weaker hardware.

To this end, almost all large technology companies improved their chips’ performance on machine learning tasks. By 2017, Qualcomm, Google, Apple, and Huawei had all introduced smartphones with their own machine learning accelerators.

Over the years, smartphone cameras have improved greatly, especially in dynamic range, noise reduction, and low-light photography.

Recently, manufacturers such as Samsung and Xiaomi have found new uses for this technology.

Samsung’s Single Take feature, for example, now uses machine learning to automatically create a high-quality album from a 15-second video clip, while Xiaomi’s use of the technology has evolved from recognizing objects in a frame to replacing the entire sky.

Many Android smartphone makers now use on-device machine learning to automatically tag faces and objects in the phone gallery, a feature that was previously only available through cloud-based services such as Google Photos.
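Developers can add this same kind of on-device tagging through Google’s ML Kit image labeling API. The sketch below assumes a valid Android `Context` and a gallery photo `Uri`; the default labeler runs entirely on the phone.

```kotlin
import android.content.Context
import android.net.Uri
import com.google.mlkit.vision.common.InputImage
import com.google.mlkit.vision.label.ImageLabeling
import com.google.mlkit.vision.label.defaults.ImageLabelerOptions

fun labelPhoto(context: Context, photoUri: Uri) {
    // The default options use a general-purpose model that runs on the device.
    val labeler = ImageLabeling.getClient(ImageLabelerOptions.DEFAULT_OPTIONS)
    val image = InputImage.fromFilePath(context, photoUri)
    labeler.process(image)
        .addOnSuccessListener { labels ->
            labels.forEach { println("${it.text}: ${it.confidence}") }
        }
        .addOnFailureListener { e -> println("Labeling failed: $e") }
}
```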

Of course, machine learning on smartphones goes far beyond photography; text-input applications have been using the technology for years.

SwiftKey was perhaps the first keyboard app to use a neural network for better word prediction, back in 2015.

The company claims to have trained its model on millions of sentences so it could better understand the relationships between words.

A few years later, Android Wear 2.0 (now known as Wear OS) gained the ability to predict responses to incoming messages, and another prominent machine learning feature emerged.

Google later branded this feature Smart Reply and built it into Android 10. You have probably used it many times while working with your device.
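For developers, Google exposes a comparable on-device capability through ML Kit’s Smart Reply library. The sketch below, with an invented two-message conversation, shows roughly how suggestions are requested; it is not necessarily the same model Android 10 uses for notification replies.

```kotlin
import com.google.mlkit.nl.smartreply.SmartReply
import com.google.mlkit.nl.smartreply.SmartReplySuggestionResult
import com.google.mlkit.nl.smartreply.TextMessage

fun suggestReplies() {
    // A hypothetical conversation history; timestamps are epoch milliseconds.
    val conversation = listOf(
        TextMessage.createForRemoteUser(
            "Are we still on for lunch?", System.currentTimeMillis() - 60_000, "friend-1"),
        TextMessage.createForLocalUser(
            "Let me check my calendar.", System.currentTimeMillis())
    )

    SmartReply.getClient().suggestReplies(conversation)
        .addOnSuccessListener { result ->
            if (result.status == SmartReplySuggestionResult.STATUS_SUCCESS) {
                result.suggestions.forEach { println(it.text) }
            }
        }
        .addOnFailureListener { e -> println("Suggestion failed: $e") }
}
```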

Sound and augmented reality: a harder path forward

Smartphone machine learning has reached maturity in text prediction and photography; voice processing and computer vision, however, are two areas that are still making significant progress every few months.

For example, Google’s camera-based instant translation feature translates foreign text in the viewfinder and displays the result to the user.

Even if the results are not as accurate as the online equivalent, this feature can be handy for travelers who do not know the language of their destination.
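Camera translation combines on-device text recognition with an on-device translation model. The translation half can be sketched with ML Kit’s offline translator; the language pair and sample sentence below are arbitrary, and the model must be downloaded once before it can work without a connection.

```kotlin
import com.google.mlkit.nl.translate.TranslateLanguage
import com.google.mlkit.nl.translate.Translation
import com.google.mlkit.nl.translate.TranslatorOptions

fun translateOffline() {
    val options = TranslatorOptions.Builder()
        .setSourceLanguage(TranslateLanguage.ENGLISH)
        .setTargetLanguage(TranslateLanguage.GERMAN)
        .build()
    val translator = Translation.getClient(options)

    // Fetch the offline model once; afterwards, translation runs locally.
    translator.downloadModelIfNeeded()
        .addOnSuccessListener {
            translator.translate("Where is the train station?")
                .addOnSuccessListener { translated -> println(translated) }
                .addOnFailureListener { e -> println("Translation failed: $e") }
        }
}
```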

High-quality body-movement tracking is another futuristic augmented reality feature that machine learning can enable.

Imagine the LG G8’s Air Motion capability, but far smarter and applied to larger tasks such as fitness tracking or even sign language interpretation.

In the field of speech, voice recognition and dictation have been in development for more than a decade, but only in 2019 did smartphones become able to do them entirely offline.

To see this in action, try the Google Recorder app, which uses on-device machine learning to transcribe speech instantly and automatically.

The conversation is saved as editable, searchable text, a feature that is extremely useful for journalists and students.

The same technology also powers the Live Caption capability.

Available in Android 10 and above, this feature automatically generates captions for any media playing on your phone, which helps when you need to make out the content of an audio clip in a noisy environment.
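On Android, apps can request this kind of offline recognition through the platform’s SpeechRecognizer API. The sketch below is a minimal example under several assumptions: a valid `Context`, the RECORD_AUDIO permission already granted, and a device whose recognizer honors the `EXTRA_PREFER_OFFLINE` hint (API 23+).

```kotlin
import android.content.Context
import android.content.Intent
import android.os.Bundle
import android.speech.RecognitionListener
import android.speech.RecognizerIntent
import android.speech.SpeechRecognizer

fun startOfflineDictation(context: Context) {
    val recognizer = SpeechRecognizer.createSpeechRecognizer(context)
    val intent = Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH).apply {
        putExtra(RecognizerIntent.EXTRA_LANGUAGE_MODEL,
                 RecognizerIntent.LANGUAGE_MODEL_FREE_FORM)
        // Ask the recognizer to stay on-device: no audio leaves the phone.
        putExtra(RecognizerIntent.EXTRA_PREFER_OFFLINE, true)
    }
    recognizer.setRecognitionListener(object : RecognitionListener {
        override fun onResults(results: Bundle) {
            val text = results
                .getStringArrayList(SpeechRecognizer.RESULTS_RECOGNITION)
                ?.firstOrNull()
            println("Transcribed: $text")
        }
        override fun onError(error: Int) { println("Recognition error: $error") }
        // No-op implementations for the remaining required callbacks.
        override fun onReadyForSpeech(params: Bundle?) {}
        override fun onBeginningOfSpeech() {}
        override fun onRmsChanged(rmsdB: Float) {}
        override fun onBufferReceived(buffer: ByteArray?) {}
        override fun onEndOfSpeech() {}
        override fun onPartialResults(partialResults: Bundle?) {}
        override fun onEvent(eventType: Int, params: Bundle?) {}
    })
    recognizer.startListening(intent)
}
```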

These features are attractive and practical in themselves, but there are several ways they could improve in the future.

For example, better voice recognition could enable faster interaction with virtual assistants, even for users with uncommon accents.

Google Assistant can already process voice commands on the device, but this functionality is unfortunately limited to the Pixel line. Even so, such an example offers a glimpse of where the technology is headed.

Personalization: the next frontier for on-device machine learning

The vast majority of today’s machine learning applications rely on models trained in advance on powerful hardware. Running inference on such a pre-trained model, such as generating a smart text reply in Android, takes only a few milliseconds.

Currently, a single model is trained by the developer and distributed to every phone that needs it. This one-size-fits-all approach does not consider each user’s preferences individually, nor can it be updated with new data collected on the device over time.

As a result, most models are relatively static and only occasionally updated.

Solving these problems requires moving model training from the cloud to personal smartphones; given the performance differences between the two platforms, this shift will open up some exciting possibilities.

Doing so, for example, would enable a keyboard application to adjust its predictions to suit your typing style.

One can expect even more: imagine a machine learning keyboard that suggests words while you chat, based on your relationship with the person you are talking to.
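As a toy illustration of what “learning your typing style on the device” could mean, here is a deliberately simple bigram counter in Kotlin. It is not how SwiftKey or Gboard actually work; it just shows that a personalization signal can be accumulated and queried without any data leaving the phone.

```kotlin
// A minimal on-device personalization sketch: count word pairs from the
// user's own typing history and suggest the most frequent continuations.
class PersonalPredictor {
    private val bigramCounts = mutableMapOf<String, MutableMap<String, Int>>()

    // Called locally as the user types; nothing is sent off the device.
    fun learn(text: String) {
        val words = text.lowercase().split(Regex("\\s+")).filter { it.isNotBlank() }
        for (i in 0 until words.size - 1) {
            val next = bigramCounts.getOrPut(words[i]) { mutableMapOf() }
            next[words[i + 1]] = (next[words[i + 1]] ?: 0) + 1
        }
    }

    // Suggest the k most frequent words seen after the last typed word.
    fun suggest(lastWord: String, k: Int = 3): List<String> =
        bigramCounts[lastWord.lowercase()]
            ?.entries
            ?.sortedByDescending { it.value }
            ?.take(k)
            ?.map { it.key }
            ?: emptyList()
}

fun main() {
    val predictor = PersonalPredictor()
    predictor.learn("see you at the gym")
    predictor.learn("see you at the office")
    predictor.learn("see you at the gym tomorrow")
    println(predictor.suggest("the")) // [gym, office]
}
```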

Google’s Gboard currently combines on-device and cloud-based training, an approach known as federated learning, to improve prediction quality for all users; but this hybrid method has its limitations.

For example, Gboard can predict your next likely word based on your personal habits and past conversations, but it cannot yet do so for an entire sentence.

This kind of personal training must be done entirely on the device, because the consequences of sending sensitive user data to the cloud would be catastrophic.
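The aggregation step at the heart of that hybrid approach can be sketched in a few lines. In federated averaging, the server only ever sees weight updates, never the raw text; the function below averages hypothetical updates from three phones.

```kotlin
// Average locally computed weight updates from several clients.
// Only these numbers would leave each phone, never the user's data.
fun federatedAverage(clientWeights: List<FloatArray>): FloatArray {
    require(clientWeights.isNotEmpty())
    val dim = clientWeights.first().size
    val averaged = FloatArray(dim)
    for (weights in clientWeights) {
        require(weights.size == dim)
        for (i in 0 until dim) averaged[i] += weights[i] / clientWeights.size
    }
    return averaged
}

fun main() {
    // Hypothetical updates from three phones for a 3-parameter model.
    val updates = listOf(
        floatArrayOf(0.1f, 0.4f, -0.2f),
        floatArrayOf(0.3f, 0.2f, 0.0f),
        floatArrayOf(0.2f, 0.0f, 0.2f)
    )
    println(federatedAverage(updates).joinToString()) // ≈ 0.2, 0.2, 0.0
}
```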

With the introduction of Core ML 3 in 2019, Apple allowed developers for the first time to retrain existing models with new data on the device. Even then, the models first had to be trained on powerful hardware and then handed to developers.

On Android, this kind of training is best seen in the Adaptive Brightness feature.

Since Android Pie, Google has used machine learning to “view user interactions with the screen brightness slider” and retrain a model on the device to fit each person’s preferences.

Google claims that with this feature enabled, it has seen a significant improvement in Android’s ability to predict screen brightness after just one week of normal interaction with the smartphone. The feature is handy for anyone who wants their screen to adapt to the environment.
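As a rough illustration of the underlying idea (not Android’s actual implementation), the sketch below models brightness preference as a one-feature linear function of ambient light and nudges it toward the user’s choice each time the slider moves.

```kotlin
// A toy on-device model: predict preferred brightness from ambient light,
// and take one gradient step toward the user's choice on every adjustment.
class BrightnessModel(private var slope: Float = 0.5f, private var bias: Float = 0.1f) {
    // Predict screen brightness (0..1) from normalized ambient light (0..1).
    fun predict(ambient: Float): Float = (slope * ambient + bias).coerceIn(0f, 1f)

    // One SGD step toward the brightness the user actually set.
    fun observeAdjustment(ambient: Float, userChoice: Float, lr: Float = 0.1f) {
        val error = predict(ambient) - userChoice
        slope -= lr * error * ambient
        bias -= lr * error
    }
}

fun main() {
    val model = BrightnessModel()
    // The user repeatedly dims the screen in dark rooms; the model adapts.
    repeat(50) { model.observeAdjustment(ambient = 0.05f, userChoice = 0.02f) }
    println(model.predict(0.05f)) // now close to the user's preferred 0.02
}
```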

You may be wondering why this kind of machine learning is still limited to just a few areas.

The answer is simple: not many training techniques or algorithms are designed to run on smartphones.

This unpleasant fact will not change overnight, but there are several reasons to be optimistic about on-device machine learning in the next decade.

As tech giants and developers alike focus on improving both the experience and privacy, this progress will continue in exciting new ways. Maybe then we can finally consider our phones truly “smart.”
