Glove for remote gesture control

HCI (Human-Computer Interaction) is a fast-evolving field that is finding its way into many people's everyday lives, with large application potential in the consumer, health, and assistive technology industries.

Free-form interfaces (for example, voice and gestures) have become ubiquitous, and the cost of adding them to a product keeps dropping. One type of interface that may come in handy in the bleeding-edge field of AR/VR, or in the health industry for people with limited mobility, is one that uses finger and hand movement to interact with a device. Essentially: a smart glove. A smart glove is one that can react to the movements of the fingers by recognizing either static postures or motion patterns.

Key issues

To detect the flexing of the fingers, we need a flex sensor: a sensor that produces an output proportional to how much it is bent. Commercially available flex sensors cost about 15 USD each, so covering all ten fingers would amount to roughly 150 USD. That is not prohibitive, but it would be nice to cut the cost down substantially.

DIY Solution

There are a few materials you can use to build a flex sensor yourself. For this project I chose Velostat because it is widely available and cheap: an 11" x 11" sheet costs about 5 USD on Adafruit. It is a pressure-sensitive conductive film that also reacts well to bending.
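Electrically, both a commercial flex sensor and a DIY Velostat strip behave as variable resistors, so the usual way to read one is through a voltage divider: the sensor in series with a fixed resistor, with the midpoint connected to an analog pin. Below is a minimal sketch of the conversion from a raw ADC count back to the sensor's resistance; the supply voltage, ADC resolution, and 47 kΩ fixed resistor are illustrative assumptions, not values measured in this build.

```python
# Convert a raw ADC reading into the sensor's resistance, assuming the
# sensor sits on the high side of a voltage divider with a fixed resistor.
VCC = 3.3          # supply voltage (volts) - assumption
ADC_MAX = 1023     # 10-bit ADC full scale - assumption
R_FIXED = 47_000   # fixed divider resistor (ohms) - assumption

def flex_resistance(adc_count: int) -> float:
    """Return the sensor resistance in ohms for a raw ADC count."""
    v_out = VCC * adc_count / ADC_MAX   # voltage at the divider midpoint
    v_out = max(v_out, 1e-6)            # avoid division by zero
    # Vout = VCC * R_FIXED / (R_FIXED + R_flex)  ->  solve for R_flex
    return R_FIXED * (VCC - v_out) / v_out

# The more the sensor bends, the more its resistance (and this value) changes.
print(flex_resistance(512))  # roughly 47 kOhm when the divider is balanced
```

In practice you don't need the absolute resistance: the raw analog readings from the five fingers are perfectly fine as features for the classifier, as long as you collect them consistently.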

Follow this YouTube video for a step-by-step tutorial on how to build a single-finger sensor: https://www.youtube.com/watch?v=FEPgLbPv6NM

You will then sew the sensors to the glove to keep them in place. Feel free to do this part however you prefer: if sewing is not your thing, hot glue or adhesive tape will also work.

Machine Learning

Once you have your glove wired to a microcontroller board (I used an Arduino Nano 33 BLE Sense), you have to collect data for the gestures you want to recognize and train a machine learning model to detect them. I trained a Random Forest classifier in scikit-learn and ported it to C++ thanks to my micromlgen library, but you can use an online visual platform if you don't want to write the code yourself.
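If you go the scikit-learn route, the training and porting step can be as short as the sketch below. It assumes you have already logged the glove readings to a CSV file (hypothetically named gestures.csv, with the five finger values in the first columns and a numeric gesture label in the last one); the hyperparameters are placeholders to tune on your own data.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from micromlgen import port

# Recorded samples: columns 0..4 are the finger readings,
# the last column is the gesture label (e.g. 0=close, 1=one, 2=tap)
data = np.loadtxt('gestures.csv', delimiter=',')
X, y = data[:, :-1], data[:, -1]

# Keep a held-out split to sanity-check accuracy before deploying
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

clf = RandomForestClassifier(n_estimators=30, max_depth=10, random_state=0)
clf.fit(X_train, y_train)
print('test accuracy:', clf.score(X_test, y_test))

# micromlgen turns the trained model into plain C++ that can be
# dropped into the Arduino sketch as a header file
with open('GestureClassifier.h', 'w') as f:
    f.write(port(clf))
```

On the Arduino side, the generated header exposes a classifier class with a predict() method that takes the array of current sensor readings; the exact class name and namespace depend on the micromlgen version, so check the generated file.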

The end result is the video on the cover of this page: me performing three gestures (close, one, and tap) that are picked up by the model and printed on the screen.