
Researchers’ Strategy for Equipping IoT with Optimal Neural Networks

Most devices on the Internet of Things (IoT) are equipped with microcontrollers. Unfortunately, microcontrollers have limited processing and storage resources, so implementing deep learning methods on them is very difficult and demands attention to many subtleties.

Researchers at MIT have developed a solution that allows IoT-connected devices to be equipped with optimal neural networks. This is an important achievement for the IoT field: besides increasing data security, it can also help reduce air pollution.

IoT devices are equipped with microcontrollers that have low processing power and minimal storage capacity compared to devices such as smartphones.
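To get a feel for this mismatch, the rough calculation below compares the activation memory of a single early layer of a typical mobile CNN with the SRAM of a Cortex-M7-class microcontroller. All figures are illustrative assumptions for the sketch, not measurements of any specific device.

```python
# Back-of-the-envelope check of why microcontrollers struggle with
# deep learning. Assumed figures for a Cortex-M7-class part:
SRAM_BYTES = 320 * 1024      # working memory for activations
FLASH_BYTES = 1024 * 1024    # persistent storage for weights

# One early layer of a typical mobile CNN: a 112x112 feature map
# with 64 channels and 8-bit activations.
activation_bytes = 112 * 112 * 64  # bytes for one layer's output

print(f"One layer's activations: {activation_bytes / 1024:.0f} KB")
print(f"Available SRAM:          {SRAM_BYTES / 1024:.0f} KB")
print(f"Fits in SRAM?            {activation_bytes <= SRAM_BYTES}")
```

A single layer's intermediate output already exceeds the whole chip's working memory, before counting weights or any other program state.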

Accordingly, such devices cannot run machine learning algorithms on their own. Instead, they send the data they collect to the cloud for complex processing and analysis. Sending data to the cloud exposes it and opens the way for attackers targeting such networks.

One solution to this challenge is to design systems that can run neural networks without relying on the cloud. However, this is a relatively new research area, and companies such as Google and ARM are working on it as well.

Of course, designing a deep neural network for microcontrollers is not an easy task.

Although conventional methods can select the best neural network from a large pool of candidates, those methods are generally designed for GPUs or smartphones, and adapting them to weak, tiny devices such as microcontrollers is far from simple.

Microcontrollers are widely used in IoT devices

The MIT researchers have developed a solution called MCUNet for this challenge. It has two parts. The first is an inference engine called TinyEngine, which manages resources much as an operating system would and is optimized to execute a specific neural network structure.

The other part of the solution is a neural architecture search algorithm called TinyNAS. Given the specifications of a particular microcontroller, this algorithm finds the most efficient neural structure and hands it to TinyEngine.
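As a rough illustration of the idea, and not the actual TinyNAS algorithm, the sketch below samples candidate architectures, rejects any whose estimated flash or SRAM footprint exceeds the microcontroller's budget, and keeps the candidate with the best score. The search space, cost models, and accuracy proxy are all invented for the example.

```python
import random

FLASH_BUDGET = 1_000_000   # assumed bytes available for weights
SRAM_BUDGET = 320_000      # assumed bytes available for activations

def sample_architecture():
    """Randomly sample a candidate network from a toy search space."""
    return {
        "width_multiplier": random.choice([0.25, 0.5, 0.75, 1.0]),
        "resolution": random.choice([96, 128, 160, 192]),
        "depth": random.choice([8, 12, 16]),
    }

def estimate_flash(arch):
    # Crude proxy: parameter count grows with width^2 and depth.
    return int(500_000 * arch["width_multiplier"] ** 2 * arch["depth"] / 12)

def estimate_sram(arch):
    # Crude proxy: peak activations grow with resolution^2 and width.
    return int(arch["resolution"] ** 2 * 32 * arch["width_multiplier"])

def estimate_accuracy(arch):
    # Stand-in for training and evaluating the candidate,
    # which is the expensive part of a real search.
    return arch["width_multiplier"] * arch["resolution"] * arch["depth"]

best = None
for _ in range(1000):
    arch = sample_architecture()
    if estimate_flash(arch) > FLASH_BUDGET or estimate_sram(arch) > SRAM_BUDGET:
        continue  # reject candidates the microcontroller cannot hold
    if best is None or estimate_accuracy(arch) > estimate_accuracy(best):
        best = arch

print("Selected architecture:", best)
```

The key design point is that hardware constraints prune the search space before any expensive accuracy evaluation, so the search only ever considers networks the target chip can actually hold.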

TinyNAS lets us select the best neural network for a particular microcontroller without the need for additional components. However, the microcontroller still needs a compact inference engine to run this small network. Inference engines usually carry code that is rarely or never used (this unused code is known as “dead weight”).

Carrying this extra code is not a problem for laptops or smartphones, but it can easily waste a microcontroller’s scarce hardware resources. TinyEngine instead generates only the code needed to execute the specific network chosen by TinyNAS.
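The sketch below illustrates this code-generation idea in miniature; it is not TinyEngine itself, and all names are hypothetical. Rather than shipping a general interpreter that carries every supported operator, it emits a straight-line function containing only the operations the chosen network actually uses.

```python
# Toy code generator: given one fixed network, emit specialized
# source code for it, so no unused operator code ("dead weight")
# is shipped to the device. The generated source is only printed
# here, never executed.

NETWORK = [("conv2d", {"filters": 8}), ("relu", {}), ("dense", {"units": 10})]

TEMPLATES = {
    "conv2d": "x = conv2d(x, filters={filters})",
    "relu":   "x = relu(x)",
    "dense":  "x = dense(x, units={units})",
}

def generate_inference_source(network):
    """Emit straight-line code containing only the ops this network uses."""
    lines = ["def run(x):"]
    for op, params in network:
        lines.append("    " + TEMPLATES[op].format(**params))
    lines.append("    return x")
    return "\n".join(lines)

print(generate_inference_source(NETWORK))
```

A generic interpreter would instead ship code for every supported operator and dispatch between them at runtime; the generated function carries none of that unused machinery, which is why specialization shrinks the binary.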

Tests have shown that the resulting compiled code is 1.9 to 5 times smaller than comparable engines offered by Google or ARM.

Implemented on a commercial microcontroller, MCUNet achieved about 70 percent accuracy in image classification, a significant improvement over comparable systems. In such tests, even a one percent improvement in performance is considered meaningful, so a gain of this size matters greatly for running neural networks on microcontrollers.

Running neural networks on the device itself not only eliminates the need for cloud computing and increases data security, but also makes AI-based IoT feasible in remote areas with limited Internet connectivity.

Another advantage the researchers point out is environmental: because their method trains and runs networks with far less power, it also produces less air pollution.

The researchers say their ultimate goal is compact, cost-effective artificial intelligence that requires less computation, less data, and less human effort.
