Putting Deep Learning, Low-Res Images to Work

The recent TinyML EMEA 2022 Innovation Forum highlighted, among other themes, AI processing of low-resolution video and images. While deep learning techniques for low-level image restoration are on everyone's minds, there are many practical tasks where low resolution is itself a benefit.

An important advantage of low image/video resolution is that it mitigates privacy concerns related to potential improper use of human face recognition.

Motion detection, people counting, eye gaze detection, human activity recognition, and navigation for various kinds of robots are just a few of the applications that work well at low image/video resolution.

Eye tracking and gaze information, for example, are used in several critical applications, such as detecting driver fatigue and diagnosing neurological and ophthalmological diseases and cognitive disorders.

For foveated rendering in AR/VR/MR headsets, another gaze-tracking application, the eye is typically illuminated in IR and imaged by an IR camera mounted on the glasses close to the eye, allowing the image resolution to be as low as 16×16 pixels.
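To give a feel for how little data a 16×16 eye crop carries, here is a minimal sketch of block-averaging a higher-resolution grayscale crop down to 16×16. The 64×64 source size and pixel values are made up for illustration; a real pipeline would first crop the eye region from the IR frame.

```python
# Block-average downsampling of a square grayscale image to 16x16.
# The synthetic 64x64 gradient input is purely illustrative.

def downsample(img, out=16):
    """Block-average a square grayscale image (list of rows) to out x out."""
    n = len(img)
    block = n // out  # side length of each averaging block
    return [
        [
            sum(img[r * block + i][c * block + j]
                for i in range(block) for j in range(block)) / block ** 2
            for c in range(out)
        ]
        for r in range(out)
    ]

src = [[(r + c) % 256 for c in range(64)] for r in range(64)]  # synthetic gradient
small = downsample(src)
print(len(small), len(small[0]))  # 16 16
```

Even after a 16-fold reduction in pixel count, the coarse intensity structure of the eye region survives, which is what such gaze trackers rely on.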

Some deep neural networks enable detection of eye gaze based on images of 50×50, 100×100 and 200×200 pixels. Placing such neural networks on affordable low-power chips with on-device processing capability would create new product opportunities in this field.
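Resolution dominates the compute budget of such networks, which is why low-res inputs matter for low-power chips. The sketch below counts multiply-accumulate operations (MACs) for a hypothetical small gaze-estimation CNN at the resolutions mentioned above; the architecture (three 3×3 conv layers with 2×2 pooling and a dense head) is illustrative only, not a published model.

```python
# Rough MAC count for a hypothetical small gaze-estimation CNN at
# different input resolutions. Architecture is illustrative only.

def conv_macs(h, w, c_in, c_out, k=3):
    """MACs for a k x k 'same' convolution on an h x w feature map."""
    return h * w * c_in * c_out * k * k

def gaze_cnn_macs(res):
    """Total MACs for the sketch CNN on a res x res grayscale image."""
    macs, h = 0, res
    channels = [1, 8, 16, 32]           # illustrative channel progression
    for c_in, c_out in zip(channels, channels[1:]):
        macs += conv_macs(h, h, c_in, c_out)
        h //= 2                          # 2x2 pooling after each conv layer
    macs += h * h * channels[-1] * 2     # dense head -> (gaze x, gaze y)
    return macs

for res in (50, 100, 200):
    print(res, gaze_cnn_macs(res))
```

The cost grows roughly quadratically with resolution, so a 50×50 input needs about a quarter of the compute of a 100×100 one; that gap is often the difference between fitting on a milliwatt-class chip and not.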

Low resolution is enough for object tracking in tiny devices with compute and power constraints, such as Unmanned Aerial Vehicles (UAVs). Tiny, lightweight UAVs could be useful in numerous areas, especially as their small size and weight make them safe to fly around humans. As is customary in AI, inspiration for tiny UAVs derives from nature. But to fly autonomously, such devices need to apply neuromorphic sensing and computing. In drone vision systems, such sensing is based on event cameras, and near-sensor neuromorphic processing can be provided by platforms like POLYN's Neuromorphic Analog Signal Processors (NASP), with only 100 µW power consumption.
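Event cameras report per-pixel brightness changes rather than full frames, so motion shows up as a spatially clustered burst of events. The following is a minimal sketch of that idea, assuming a simplified event stream where each event is an `(x, y, timestamp, polarity)` tuple from a hypothetical 128×128 sensor; real event cameras emit such tuples asynchronously, while here we just batch-process a list.

```python
# Minimal event-based motion detection sketch: bin recent events into a
# coarse grid and flag cells with enough activity. Sensor size (128x128),
# grid size, and thresholds are all illustrative assumptions.

from collections import Counter

def detect_motion(events, grid=16, window_ms=10.0, min_events=3):
    """Return grid cells (on a grid x grid map) with enough recent events."""
    if not events:
        return []
    t_latest = max(t for _, _, t, _ in events)
    counts = Counter()
    for x, y, t, _pol in events:
        if t_latest - t <= window_ms:                     # keep recent events only
            counts[(x * grid // 128, y * grid // 128)] += 1
    return sorted(cell for cell, n in counts.items() if n >= min_events)

# A synthetic burst of events in one corner of the sensor:
events = [(5 + i % 4, 6 + i % 3, float(i), 1) for i in range(8)]
print(detect_motion(events))  # -> [(0, 0)]
```

Because only active cells are touched, the work scales with scene motion rather than with frame resolution, which is exactly the property that makes event-driven pipelines attractive at micro-watt power budgets.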

Reliable accuracy in low-resolution image processing can be achieved with several deep learning methods, for example two-stream convolutional neural network models and the use of embeddings.
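The embedding approach can be sketched briefly: assume a backbone network has already mapped each low-resolution image or clip to a fixed-length vector, and recognition reduces to nearest-neighbor search under cosine similarity. The labels and vectors below are made up purely for illustration.

```python
# Nearest-neighbor matching of hypothetical embeddings under cosine similarity.

import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def best_match(query, gallery):
    """Return the gallery label whose embedding is closest to the query."""
    return max(gallery, key=lambda label: cosine(query, gallery[label]))

gallery = {
    "walking": [0.9, 0.1, 0.2],   # hypothetical activity embeddings
    "sitting": [0.1, 0.8, 0.3],
}
query = [0.85, 0.15, 0.25]        # embedding of a new low-res clip
print(best_match(query, gallery))  # -> walking
```

The heavy lifting happens once, in the backbone that produces the embeddings; the matching step itself is cheap enough to run on-device.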

Here POLYN’s NASP is an ideal fit, with its ability to convert neural networks into mass-market chips. NASP’s ultra-low power consumption and low latency suit real-time video/image processing. POLYN’s Tiny AI chips are a natural match for low-resolution cameras, aiding motion tracking and positional awareness in affordable devices.