The technology behind echosonic exploits transient effects in the sensing mechanism of a microphone or other sensor to extract features of time-series signals directly for machine learning. As a result, compute-hungry machine learning training and inference can run entirely on the smart audio device (e.g., Google Home, Apple HomePod, Amazon Alexa) using very limited samples, with no need for cloud-based data centers or high-performance computers. The resulting computing system is more secure and more human-centric.
References:
Shougat, M. R. E. U., Li, X., Mollik, T., & Perkins, E. (2021). A Hopf physical reservoir computer. Scientific Reports, 11(1), 1-13.
Shougat, M. R. E. U., Li, X., Shao, S., McGarvey, K., & Perkins, E. (2022). Hopf oscillation-based reservoir computer for reconfigurable sound recognition. arXiv:2212.10370.
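The oscillator dynamics behind this approach can be illustrated with a minimal simulation. The sketch below drives a Hopf oscillator (the normal-form equations studied in the cited papers) with an input signal and records its trajectory as the feature map; the parameter values, time step, and `gain` coupling here are illustrative assumptions, not values from any echosonic hardware.

```python
import numpy as np

def hopf_reservoir(u, mu=1.0, omega=2.0, dt=0.01, gain=0.5):
    """Drive a Hopf oscillator with input u and record its (x, y) trajectory.

    The transient response of the nonlinear oscillator acts as the feature
    map: a downstream readout sees the trajectory, not the raw signal.
    All parameter values are illustrative, not taken from the papers.
    """
    x, y = 0.1, 0.0                    # small initial perturbation
    states = np.empty((len(u), 2))
    for i, ui in enumerate(u):
        r2 = x * x + y * y             # squared oscillation amplitude
        dx = (mu - r2) * x - omega * y + gain * ui   # Hopf normal form + drive
        dy = (mu - r2) * y + omega * x
        x, y = x + dt * dx, y + dt * dy              # forward-Euler step
        states[i] = (x, y)
    return states

# Example: a short tone drives the oscillator into a class-specific transient.
t = np.linspace(0, 1, 1000)
signal = np.sin(2 * np.pi * 5 * t)
features = hopf_reservoir(signal)
print(features.shape)  # (1000, 2)
```

Because the nonlinear dynamics do the feature extraction, the only trainable component left is a lightweight linear readout over these states.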
As shown in the figure, the echosonic device's output for audio samples from different classes is a unique 2D pattern that a very simple digital readout can recognize without any preprocessing. The same digital readout could also be used for computer vision and other machine learning tasks. These exciting features open the possibility of multi-modal machine learning on edge devices.
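A "very simple digital readout" of this kind is commonly a single linear map trained by ridge regression. The sketch below uses synthetic stand-in patterns (the data, sizes, and regularization strength are all assumptions, not echosonic outputs) to show how little computation the readout needs: one matrix solve for training, one matrix multiply for inference.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in data: flattened 2D patterns for three audio classes.
# Real patterns would come from the device; these are synthetic clusters.
n_per_class, pattern_size, n_classes = 20, 64, 3
centers = 2.0 * rng.normal(size=(n_classes, pattern_size))
X = np.vstack([c + rng.normal(size=(n_per_class, pattern_size))
               for c in centers])
labels = np.repeat(np.arange(n_classes), n_per_class)
Y = np.eye(n_classes)[labels]                 # one-hot class targets

# The whole "digital readout" is one regularized least-squares solve:
# W = (X^T X + lam * I)^-1 X^T Y
lam = 1e-2
W = np.linalg.solve(X.T @ X + lam * np.eye(pattern_size), X.T @ Y)

pred = (X @ W).argmax(axis=1)                 # inference: one matmul + argmax
accuracy = (pred == labels).mean()
print(accuracy)
```

The same solve-then-multiply readout applies unchanged to flattened image patches, which is why the identical readout stage can serve vision tasks as well.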
Figure: original audio signal.
echosonic has developed a technology that reconstructs and enhances the audio signal from the raw outputs of the echosonic microphone device. This allows echosonic to serve as a ubiquitous solution for edge audio-intelligence devices while maintaining superior audio recording quality.
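One way such reconstruction can work, shown here only as a sketch: if the device outputs are approximately linear views of the driving signal, a least-squares map from outputs back to the waveform recovers it. The delayed-copy "device outputs" below are a synthetic stand-in, not the actual echosonic pipeline.

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0, 1, 500)
signal = np.sin(2 * np.pi * 3 * t)            # the waveform to recover

# Stand-in "device outputs": delayed, slightly noisy views of the signal.
# (Illustrative assumption -- real outputs come from the sensor dynamics.)
states = np.stack([np.roll(signal, k) + 0.01 * rng.normal(size=t.size)
                   for k in range(8)], axis=1)

# Fit a linear map from the output channels back to the original waveform.
w, *_ = np.linalg.lstsq(states, signal, rcond=None)
reconstructed = states @ w
error = np.sqrt(np.mean((reconstructed - signal) ** 2))
print(error)
```

Because the reconstruction map is also just a linear solve, it fits the same lightweight edge-compute budget as the classification readout.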
Compared to state-of-the-art solutions, echosonic achieves >99% accuracy on three-class wake-word recognition (versus 97% for other solutions) while drawing <15% of the power. More importantly, only <10% of the data is needed to reconfigure echosonic devices, enabling reconfiguration directly on the edge.