echosonic technology 

The technology behind echosonic exploits transient effects in the sensing mechanisms of microphones and other sensors to extract features of time-series signals directly for machine learning. As a result, compute-hungry machine learning training and inference can run entirely on the smart audio device (e.g., Google Home, Apple HomePod, Amazon Alexa) using very few samples, with no need for cloud-based data centers or high-performance computers. The resulting computing system is more secure and more human-centric.

Reference: Shougat, M. R. E. U., Li, X., Mollik, T., & Perkins, E. (2021). A Hopf physical reservoir computer. Scientific Reports, 11(1), 1-13.
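The idea can be illustrated with a toy pipeline in the spirit of the referenced Hopf physical reservoir computer: a driven nonlinear oscillator transforms the input signal, and only a lightweight linear readout is trained. Everything below (oscillator parameters, Euler integration, the two-tone "classes", the ridge readout) is an illustrative assumption, not a model of the echosonic hardware.

```python
import numpy as np

rng = np.random.default_rng(0)

def hopf_reservoir(signal, mu=1.0, omega=2.0, dt=0.01):
    """Drive a single Hopf oscillator with the input signal and
    return its sampled (x, y) trajectory as a feature vector.
    Toy Euler integration; parameters are illustrative only."""
    x, y = 0.1, 0.0
    states = []
    for u in signal:
        r2 = x * x + y * y
        dx = (mu - r2) * x - omega * y + u
        dy = (mu - r2) * y + omega * x
        x, y = x + dt * dx, y + dt * dy
        states.append((x, y))
    return np.asarray(states).ravel()

def make_signal(freq_hz, n=200, dt=0.01):
    """Noisy tone standing in for an audio sample of one class."""
    t = np.arange(n) * dt
    return np.sin(2 * np.pi * freq_hz * t) + 0.05 * rng.standard_normal(n)

# Two toy "wake word" classes: 3 Hz vs 7 Hz tones.
X = np.array([hopf_reservoir(make_signal(f)) for f in [3] * 20 + [7] * 20])
y = np.array([0] * 20 + [1] * 20)

# Ridge-regression readout: the only trained component.
lam = 1e-3
W = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]),
                    X.T @ (2 * y - 1))
pred = (X @ W > 0).astype(int)
accuracy = (pred == y).mean()
print(f"readout accuracy: {accuracy:.2f}")
```

The key property mirrored here is that the nonlinear transform is free (it happens in the physical sensor), so training reduces to fitting one linear map, which is cheap enough for an edge device.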

As shown in the accompanying figure, audio samples from different classes produce unique 2D output patterns from echosonic devices, which can be recognized by a very simple digital readout without any preprocessing. The same digital readout can also be applied to computer vision and other machine learning tasks. These features open the possibility of multi-modal machine learning on edge devices.

[Figure: original audio signal, echosonic outputs, and recovered signal]
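As a sketch of how simple such a readout can be, the toy example below classifies noisy 2D patterns by nearest-centroid matching on the raw values, with no preprocessing or feature extraction. The 8x8 templates, class count, and noise level are invented stand-ins for the device's actual output patterns.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy stand-in for per-class 2D output patterns: each class is a
# fixed 8x8 template plus sensor noise. (Hypothetical data.)
templates = rng.standard_normal((3, 8, 8))

def sample(cls, noise=0.3):
    """Draw one noisy pattern from the given class."""
    return templates[cls] + noise * rng.standard_normal((8, 8))

# "Simple digital readout": average a few training patterns per
# class, then match new patterns to the nearest centroid.
centroids = {c: np.mean([sample(c) for _ in range(10)], axis=0)
             for c in range(3)}

def classify(pattern):
    return min(centroids, key=lambda c: np.sum((pattern - centroids[c]) ** 2))

correct = sum(classify(sample(c)) == c for c in range(3) for _ in range(50))
accuracy = correct / 150
print(f"accuracy: {accuracy:.2f}")
```

Because the readout is just a distance comparison, it is equally applicable to patterns from audio, vision, or other modalities, which is the basis of the multi-modal claim above.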

echosonic has developed a technology that reconstructs and enhances the audio signal from the raw outputs of echosonic microphone devices. This allows echosonic to serve as a ubiquitous solution for edge audio intelligence while preserving superior audio recording quality.
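A minimal sketch of this reconstruction idea: if the device output is a nonlinear transform of the acoustic signal, a linear readout can be trained to map it back to the waveform. The `fake_reservoir` transform, tap count, and noise level below are assumptions standing in for the real device dynamics.

```python
import numpy as np

rng = np.random.default_rng(1)

# Clean reference waveform (toy 3 Hz tone).
n = 500
t = np.arange(n) * 0.01
clean = np.sin(2 * np.pi * 3 * t)

def fake_reservoir(u, taps=8):
    """Delay-embedded nonlinear features of the input signal,
    standing in for the device's transient response."""
    feats = [np.tanh(np.roll(u, k)) for k in range(taps)]
    return np.stack(feats, axis=1)  # shape (n, taps)

# Device sees a noisy version of the acoustic signal.
R = fake_reservoir(clean + 0.2 * rng.standard_normal(n))

# Linear reconstruction readout: least squares from device
# features back to the clean waveform.
W, *_ = np.linalg.lstsq(R, clean, rcond=None)
recovered = R @ W
err = np.sqrt(np.mean((recovered - clean) ** 2))
print(f"reconstruction RMSE: {err:.3f}")
```

The readout simultaneously inverts the nonlinear transform and averages out noise across taps, which is why a reconstruction stage can also act as enhancement.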

Performance comparison

Compared with state-of-the-art solutions such as Synthiant, echosonic achieves >99% accuracy on three-class wake-word recognition (versus 97% for Synthiant) while drawing <15% of the power. More importantly, echosonic needs <10% of the data to reconfigure its devices, enabling reconfiguration directly on the edge.