
Technology

Neuromorphic Analog Signal Processing (NASP)

Ideal for New Generation of Sensor Level Devices in Real Time Edge AI Applications
Why Is It Neuromorphic?

Neuromorphic computing is a method of computer engineering in which the system elements mimic the human brain and nervous system, in both hardware and software. Neuromorphic systems provide fast computation and low power consumption while handling many operations simultaneously. Among other things, this makes them fault-tolerant: they can still produce results after some elements have failed.

A neuromorphic chip imitates the brain through elements that implement “neurons”, nodes that process information, and “axons”, weighted connections between the nodes that transfer electrical signals using analog circuitry. Modulation of the electric signal mimics the variation of signals in the brain.

Why Is It Analog?

According to the Decadal Plan for Semiconductors by the Semiconductor Research Corporation, the first of the major shifts that will define the future of semiconductors and ICT is the analog data deluge: fundamental breakthroughs in analog hardware are required to generate smarter world-machine interfaces that can sense, perceive, and reason.

In our neuromorphic analog solution, neurons are physically implemented as analog circuit elements according to the mathematical model of a single neuron.

Neuromorphic Analog implementation
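
As an illustration of that single-neuron model, here is a minimal sketch of the standard formulation: a weighted sum of the inputs plus a bias, passed through a nonlinear activation. The values and the choice of tanh are purely illustrative and do not represent POLYN's circuit-level design.

```python
# Minimal sketch of the standard single-neuron model: y = f(w·x + b).
# Weights, inputs, and the tanh activation are illustrative only.
import numpy as np

def neuron(x, w, b):
    """Weighted sum of inputs plus bias, passed through an activation."""
    return np.tanh(np.dot(w, x) + b)

x = np.array([0.2, -1.0, 0.5])    # input signals (e.g. sensor samples)
w = np.array([0.7, 0.1, -0.4])    # connection ("axon") weights
b = 0.05                          # bias
print(neuron(x, w, b))
```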
Why Is It Tiny AI?

Tiny AI is a novel approach to ML and one of the newest major technological breakthroughs in AI. It is also known as TinyML.

Tiny AI implies smarter data usage through various techniques, such as embeddings. As a result, AI computations can be performed directly on the device and do not require users to send data to the cloud or a remote server.

The NASP chips are true Tiny AI implementations that reduce latency and power consumption and enable inference computations directly on devices such as wearables and IoT sensors, increasing their functionality while also improving users’ privacy, since the data stays on the device.

NEUROMORPHIC FRONT-END: EXTRACTING REAL VALUE FROM SENSOR DATA

White Paper

NEUROMORPHIC ANALOG IMPLEMENTATION FOR TINY AI APPLICATIONS

White Paper

NO FEAR OF DATA TSUNAMI

Use of Embeddings

Embeddings are representations that a trained autoencoder neural network forms in its deep hidden layers. They densely pack the most significant information about the sensory input data.

Embeddings are used as input data for further processing, classification, and interpretation.

Using embeddings for sensor data preprocessing reduces the noisy raw data flow by a factor of 1000, which makes them highly valuable for industrial IoT applications.

Embeddings
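
As a rough illustration of how such embeddings can be produced, here is a minimal Keras sketch, assuming TensorFlow is installed and using purely illustrative layer sizes rather than POLYN's actual network: an autoencoder is trained to reconstruct raw sensor windows, and its narrow bottleneck layer then serves as the embedding.

```python
# Minimal autoencoder sketch: the bottleneck layer yields embeddings of raw
# sensor windows. Layer sizes and data are illustrative only.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, Model

window = 1024                                  # raw samples per sensor window
inp = layers.Input(shape=(window,))
h = layers.Dense(128, activation="relu")(inp)
emb = layers.Dense(8, activation="relu", name="embedding")(h)   # bottleneck
h = layers.Dense(128, activation="relu")(emb)
out = layers.Dense(window, activation="linear")(h)

autoencoder = Model(inp, out)
autoencoder.compile(optimizer="adam", loss="mse")

x = np.random.randn(256, window).astype("float32")   # stand-in sensor data
autoencoder.fit(x, x, epochs=2, batch_size=32, verbose=0)

# The encoder alone maps each 1024-sample window to an 8-value embedding
# (a 128x reduction here; larger windows or multi-channel inputs would give
# reductions on the order of the 1000x figure quoted above).
encoder = Model(inp, emb)
print(encoder.predict(x[:1], verbose=0).shape)        # (1, 8)
```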

SENSOR DATA FLOW REDUCED BY 1000 TIMES

NASP technology is ideal for real-time Edge sensor signal processing (ESSP) appliances, providing small size, ultra-low power consumption, and low latency.

The NASP Converter works with any standard neural network framework, such as Keras, TensorFlow, and others.

NASP receives any type of signal and processes raw sensor data using neuromorphic AI computations at the sensor level, without sending it to the cloud.

1. Analog/digital signal input

2. POLYN’s neuromorphic architecture processes input signals in a truly parallel, asynchronous mode, providing unprecedentedly low latency and low power consumption. The calculations require no CPU usage or memory access.

3. NASP can use a pre-trained artificial neural network from any major ML framework (such as TensorFlow, PyTorch, MXNet, etc.) for the neuromorphic representation, resulting in exceptional precision and accuracy.
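
As a rough illustration of the hand-off in step 3, the sketch below saves a small trained Keras model and reads back its per-layer weight matrices, which is the kind of information a converter would map onto analog neurons. The file name, layer sizes, and workflow are assumptions for illustration, not the actual NASP Converter interface.

```python
# Hypothetical hand-off artifact for step 3: a trained Keras model whose
# per-layer weights a converter could map onto analog neurons.
import tensorflow as tf
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(8,)),                # e.g. embeddings from the front-end
    layers.Dense(16, activation="relu"),
    layers.Dense(3, activation="softmax"),   # e.g. three output classes
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.save("pretrained_nn.keras")            # illustrative file name

reloaded = tf.keras.models.load_model("pretrained_nn.keras")
for layer in reloaded.layers:
    shapes = [w.shape for w in layer.get_weights()]
    print(layer.name, shapes)                # weight/bias shapes per layer
```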

NASP HYBRID SOLUTION

In the NASP hybrid concept, an analog neuromorphic chip with fixed neural networks is responsible for pattern detection based on embeddings. It is combined with a flexible algorithm of any type, including an additional flexible neural network, which is responsible for pattern interpretation and can even be retrained.

There is a well-known phenomenon in machine learning: after several hundred training cycles, also known as epochs, the weights and structure of the first 80-90% of a deep convolutional neural network’s layers become effectively fixed, and in the following cycles only the last few layers, responsible for classification, continue to change their weights. This fact is exploited in transfer learning, which is key to the hybrid NASP concept and is sketched below.
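
A minimal sketch of that transfer-learning idea, assuming Keras and illustrative layer sizes rather than POLYN's actual networks: the early feature-extraction layers, the part that would be fixed in the analog chip, are frozen, and only the final classification head keeps training.

```python
# Minimal transfer-learning sketch: freeze the early (feature-extraction)
# layers and keep training only the classification head. Shapes and data
# are illustrative only.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(64,)),
    layers.Dense(128, activation="relu"),    # "fixed" feature layers
    layers.Dense(64, activation="relu"),
    layers.Dense(5, activation="softmax"),   # flexible classification head
])

# After initial training, freeze everything except the final layer.
for layer in model.layers[:-1]:
    layer.trainable = False

model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
x = np.random.randn(128, 64).astype("float32")
y = np.random.randint(0, 5, size=128)
model.fit(x, y, epochs=2, verbose=0)         # only the head's weights change
```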

FROM A NEURAL NET TO THE CHIP

The NASP platform synthesizes a true neuromorphic Tiny AI chip layout from any trained neural network.

Neural Net Design

Select a trained Neural Net or train the customer’s Neural Net

Math Model Simulation

Generated with the NASP Compiler: D-MVP – neural network software simulation

NASP Chip Synthesis

The math model is converted into a chip layout ready for production


NASP Chip Production

Semiconductor fabs produce NASP chips with standard equipment and processes

Development Process

PROTOTYPE NN MODEL
NEURAL NET TRAINING AND CONVERSION
  • NN is provided by the customer or POLYN assists the customer in NN selection
  • Fully functional math model of the NN
  • Data collection and training process 
  • The customer accepts the final functionality of the trained NN
SYNTHESIS
NETLIST for the target CAD
  • Convert the trained NN to Netlist for CAD and generate the neurons library
  • Build CAD model of NASP block
  • Verification of conformity between the NASP CAD model and the NN model
IMPLEMENTATION
THE CHIP PRODUCTION
  • Generate the final layout in GDSII format for the target node and fab