The Wearable Radar: Sensing Gestures Through Fabrics
Recently, millimeter-wave radar-on-chip sensors such as Google Soli have become readily available in the mobile ecosystem. We envision radar technology being integrated into wearables to enable gesture-based interaction for users “on the go”, e.g. to control devices such as a phone or a car infotainment system, even when the sensor is occluded by some material. Towards this vision, we conducted a systematic study of mid-air gesture recognition through three different fabrics. We developed a hybrid CNN+LSTM deep learning model and investigated gesture recognition performance when the radar sensor is covered by each fabric material. We show that a model trained without any occluding material performed worse than one trained on the same material used at test time; however, this holds only in the small-data regime (N=20). When trained on larger samples (N=200) without any occluding material, the model achieved remarkable performance on all fabrics (95% avg. accuracy, 99% AUC). Our results show that sensing mid-air gestures through fabrics is both feasible and ready for practical applications, since it is not necessary to train a dedicated model for each type of fabric on the market. We also contribute a repeatable procedure to systematically test mid-air gestures with radar technology, enabled by an experimental platform that we release with this paper.
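A hybrid CNN+LSTM model of the kind described above can be sketched as follows: a small CNN extracts features from each radar frame, and an LSTM models the temporal dynamics of the gesture. This is a minimal illustrative sketch in PyTorch; the layer sizes, input shape, and number of gesture classes are assumptions, not the paper's actual architecture.

```python
import torch
import torch.nn as nn

class RadarCNNLSTM(nn.Module):
    """Illustrative CNN+LSTM hybrid: per-frame CNN features fed to an
    LSTM over time. All dimensions below are assumed for the example."""
    def __init__(self, n_classes=4, hidden=64):
        super().__init__()
        # CNN applied independently to each radar frame
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(8, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d((4, 4)),
        )
        # LSTM consumes the flattened per-frame feature vectors
        self.lstm = nn.LSTM(16 * 4 * 4, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):                     # x: (batch, time, 1, H, W)
        b, t = x.shape[:2]
        f = self.cnn(x.flatten(0, 1))         # fold time into the batch
        f = f.flatten(1).view(b, t, -1)       # (batch, time, 256)
        out, _ = self.lstm(f)
        return self.head(out[:, -1])          # classify from last step

model = RadarCNNLSTM()
logits = model(torch.randn(2, 10, 1, 32, 32))  # 2 sequences of 10 frames
print(logits.shape)  # one score per gesture class
```

Folding the time axis into the batch before the CNN is a common way to share convolutional weights across frames before the recurrent stage.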
The Missing Interface: Micro-gestures on Augmented Objects
There is a missing interface when we augment physical objects with digital content, since those objects were never designed for such augmentation. A review of related work reveals that current interaction approaches have limited detection fidelity and spatial resolution, and do not provide inconspicuous, precise, and flexible object-oriented interactivity. Our proposal, based on Google Soli’s radar sensing technology, is designed to detect micro-gestures on objects with sub-millimeter precision. Our preliminary results show that Soli’s core features combined with traditional machine learning methods (Random Forest and Support Vector Machine) do not lead to robust recognition, so more advanced methods incorporating additional sensor features should be used instead.
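The baseline pipeline described above, classifying gestures from summary features with Random Forest and SVM, can be sketched with scikit-learn. The feature matrix below is synthetic and purely illustrative; the actual Soli core features (e.g. range, velocity, signal energy) and dataset are not reproduced here.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

# Hypothetical data: each row summarizes one gesture sample with
# per-frame statistics of radar features (shapes are assumptions).
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 12))    # 100 samples, 12 summary features
y = rng.integers(0, 3, size=100)  # 3 gesture classes (illustrative)

# Cross-validated accuracy for both traditional classifiers
for name, clf in [
    ("Random Forest", RandomForestClassifier(n_estimators=100, random_state=0)),
    ("SVM", SVC(kernel="rbf")),
]:
    scores = cross_val_score(clf, X, y, cv=5)
    print(f"{name}: mean accuracy {scores.mean():.2f}")
```

On random features like these, both classifiers hover near chance level, which mirrors the point that hand-crafted core features alone may not yield robust recognition.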