FingerTrak: Continuous 3D Hand Pose Tracking by Deep Learning Hand Silhouettes Captured by Miniature Thermal Cameras on Wrist

FingerTrak is a minimally obtrusive wristband that enables continuous 3D finger tracking and hand pose estimation with four miniature thermal cameras mounted closely on a form-fitting wristband. FingerTrak explores the feasibility of continuously reconstructing the entire hand posture (the 3D positions of 20 finger joints) without needing to see all of the fingers. We demonstrate that our system can estimate the entire hand posture by observing only the outline of the hand, i.e., hand silhouettes, from the wrist using low-resolution (32 × 24) thermal cameras. A customized deep neural network learns to "stitch" these multi-view images and estimate the positions of the 20 joints in 3D space. Our user study with 11 participants shows that the system achieves an average angular error of 6.46° when tested on the same background and 8.06° when tested on a different background. FingerTrak also shows encouraging results after remounting the device and has the potential to reconstruct some complicated poses.

Published in IMWUT / UbiComp ’20. [Paper] [Video]
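For readers curious how such a network might be wired up, here is a minimal PyTorch sketch: four low-resolution thermal views are encoded by a shared CNN and fused to regress 20 × 3 joint coordinates. The layer sizes and fusion scheme are illustrative assumptions, not the paper’s exact architecture.

```python
import torch
import torch.nn as nn

class MultiViewPoseNet(nn.Module):
    def __init__(self, n_views=4, n_joints=20):
        super().__init__()
        self.n_joints = n_joints
        # Shared encoder applied to each wrist-mounted thermal view.
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                     # 24x32 -> 12x16
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                     # 12x16 -> 6x8
            nn.Flatten(),
        )
        # Fuse ("stitch") the per-view embeddings, then regress 3D joints.
        self.head = nn.Sequential(
            nn.Linear(n_views * 32 * 6 * 8, 256), nn.ReLU(),
            nn.Linear(256, n_joints * 3),
        )

    def forward(self, views):                    # views: (B, 4, 1, 24, 32)
        feats = [self.encoder(views[:, i]) for i in range(views.shape[1])]
        return self.head(torch.cat(feats, dim=1)).view(-1, self.n_joints, 3)

# Example: a batch of 2 frames, each with four 24x32 thermal images.
joints = MultiViewPoseNet()(torch.randn(2, 4, 1, 24, 32))
print(joints.shape)                              # torch.Size([2, 20, 3])
```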


WristWash: Towards Automatic Handwashing Assessment Using a Wrist-worn Device

Washing hands is one of the easiest yet most effective ways to prevent the spread of illnesses and diseases. However, failure to adhere to thorough handwashing routines is a substantial problem worldwide; in hospitals, for example, lapses in hygiene lead to healthcare-associated infections. We present WristWash, a wrist-worn sensing platform that integrates an inertial measurement unit and a Hidden Markov Model-based analysis method to enable automated assessment of handwashing routines according to the recommendations of the World Health Organization (WHO). We evaluated WristWash in a case study with 12 participants. WristWash can successfully recognize the 13 steps of the WHO handwashing procedure with an average accuracy of 92% with user-dependent models and 85% with user-independent models. We further explored the system’s robustness with a second case study with six participants, this time in an unconstrained environment, to test variations in the handwashing routine and to show the potential for real-world deployments.


Published at ISWC ’18. [Paper]
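As a rough illustration of the HMM-based analysis, the sketch below fits a Gaussian HMM with one hidden state per WHO step to windowed wrist IMU features, using the hmmlearn library. The feature set, window length, and unsupervised fit are assumptions for illustration; the actual system is trained on labeled recordings.

```python
import numpy as np
from hmmlearn import hmm

def imu_features(acc, gyr, win=50):
    """Per-window mean/std of accel and gyro -> (n_windows, 12) features."""
    n = len(acc) // win
    rows = []
    for i in range(n):
        a, g = acc[i*win:(i+1)*win], gyr[i*win:(i+1)*win]
        rows.append(np.hstack([a.mean(0), a.std(0), g.mean(0), g.std(0)]))
    return np.array(rows)

# Toy data standing in for wrist IMU recordings of a handwashing session.
rng = np.random.default_rng(0)
acc, gyr = rng.normal(size=(13000, 3)), rng.normal(size=(13000, 3))
X = imu_features(acc, gyr)

# One hidden state per WHO handwashing step.
model = hmm.GaussianHMM(n_components=13, covariance_type="diag", n_iter=20)
model.fit(X)
steps = model.predict(X)      # most likely step label per window
print(steps[:10])
```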

FingerPing: Recognizing fine-grained hand poses using active acoustic on-body sensing

FingerPing is a novel sensing technique that can recognize various fine-grained hand poses by analyzing acoustic resonance features. A surface transducer mounted on a thumb ring injects acoustic chirps (20 Hz to 6,000 Hz) into the body. Four receivers distributed on the wrist and thumb collect the chirps. Different hand poses create distinct paths for the acoustic chirps to travel, creating unique frequency responses at the four receivers. We demonstrate how FingerPing can differentiate up to 22 hand poses, including the thumb touching each of the 12 phalanges on the hand as well as 10 American Sign Language poses. A user study with 16 participants showed that our system can recognize these two sets of poses with accuracies of 93.77% and 95.64%, respectively. We discuss the opportunities and remaining challenges for the widespread use of this input technique.

Published at CHI ’18. [Paper]
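A minimal sketch of the frequency-response idea: the magnitude spectrum of each receiver’s recording of the known chirp acts as a fingerprint of the acoustic path, and a standard classifier maps the concatenated fingerprints to a pose. All parameter values below are illustrative assumptions.

```python
import numpy as np
from sklearn.svm import SVC

FS = 44100          # sample rate in Hz (assumed)
BAND = (20, 6000)   # chirp band, in Hz

def frequency_response(received, fs=FS, band=BAND, n_bins=64):
    """Log-magnitude spectrum inside the chirp band, pooled to n_bins."""
    spec = np.abs(np.fft.rfft(received))
    freqs = np.fft.rfftfreq(len(received), 1 / fs)
    in_band = spec[(freqs >= band[0]) & (freqs <= band[1])]
    pooled = in_band[: len(in_band) // n_bins * n_bins]
    return np.log1p(pooled.reshape(n_bins, -1).mean(axis=1))

def pose_features(recordings):
    """Concatenate the response curves from all four receivers."""
    return np.hstack([frequency_response(r) for r in recordings])

# Toy data: 22 poses x 10 trials, four receivers per trial.
rng = np.random.default_rng(1)
X = np.array([pose_features(rng.normal(size=(4, 4096))) for _ in range(220)])
y = np.repeat(np.arange(22), 10)
clf = SVC().fit(X, y)         # in practice: evaluate on held-out trials
```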

FingerSound: Recognizing unistroke thumb gestures using a ring

FingerSound Gesture Samples

We introduce FingerSound, an input technology for recognizing unistroke thumb gestures, which are easy to learn and can be performed eyes-free. The gestures are performed using a thumb-mounted ring comprising a contact microphone and a gyroscope. A K-Nearest-Neighbors (KNN) model with Dynamic Time Warping (DTW) as its distance function recognizes up to 42 common unistroke gestures. A user study in which participants received real-time classification results shows an accuracy of 92%-98% for a model built with only 3 training samples per gesture. Based on the study results, we further discuss the opportunities, challenges, and practical limitations of deploying FingerSound in real-world applications.

Published in Proc. ACM Interact. Mob. Wearable Ubiquitous Technol. (IMWUT), Issue 3 / UbiComp ’17. [Paper] [Video]
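The recognition core is easy to sketch: DTW provides an alignment-tolerant distance between two gesture recordings, and the nearest training samples vote on the label. The plain O(nm) implementation below favors clarity over speed and is not the paper’s exact code.

```python
import numpy as np

def dtw(a, b):
    """DTW distance between sequences a (n, d) and b (m, d)."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def knn_classify(query, train_seqs, train_labels, k=3):
    """Label a gesture by majority vote of its k DTW-nearest neighbors."""
    dists = [dtw(query, s) for s in train_seqs]
    nearest = np.argsort(dists)[:k]
    votes = [train_labels[i] for i in nearest]
    return max(set(votes), key=votes.count)
```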

FingOrbits: Interaction with Wearables Using Synchronized Thumb Movements

FingOrbits System Overview

We present FingOrbits, a wearable interaction technique using synchronized thumb movements. A thumb-mounted ring with an inertial measurement unit and a contact microphone is used to capture thumb movements when rubbing against the other fingers. Spectral information of the movements is extracted and fed into a classification backend that facilitates gesture discrimination. FingOrbits enables up to 12 different gestures by detecting three rates of movement against each of the four fingers. Through a user study with 10 participants (7 novices, 3 experts), we demonstrate that FingOrbits can distinguish the 12 thumb gestures with an accuracy of 89% to 99%, rendering the approach applicable for practical applications.

Published at the 2017 International Symposium on Wearable Computers (ISWC ’17). [Paper] [Video]
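As a sketch of the spectral pipeline described above: the rubbing rate appears as a dominant frequency in the IMU and microphone signals, so simple per-channel spectral statistics can feed a standard classifier. The window length, channel count, and classifier choice are assumptions for illustration.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def spectral_features(window, fs):
    """Per channel: dominant frequency, its power, and total energy."""
    feats = []
    for ch in window.T:
        spec = np.abs(np.fft.rfft(ch - ch.mean())) ** 2
        freqs = np.fft.rfftfreq(len(ch), 1 / fs)
        peak = spec[1:].argmax() + 1        # skip the DC bin
        feats += [freqs[peak], spec[peak], spec.sum()]
    return np.array(feats)

# Toy example: 120 windows of 7 channels (6 IMU axes + contact mic),
# one window per gesture instance, 12 gesture classes.
rng = np.random.default_rng(2)
X = np.array([spectral_features(rng.normal(size=(256, 7)), fs=100)
              for _ in range(120)])
y = np.tile(np.arange(12), 10)
clf = RandomForestClassifier().fit(X, y)
```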

SoundTrak: Continuous 3D tracking of a finger using active acoustics

SoundTrak System Overview

The small size of wearable devices limits the efficiency and scope of possible user interactions, as inputs are typically constrained to two dimensions: the touchscreen surface. We present SoundTrak, an active acoustic sensing technique that enables a user to interact with wearable devices in the surrounding 3D space by continuously tracking the finger position with high resolution. The user wears a ring with an embedded miniature speaker that emits an acoustic signal at a specific frequency (e.g., 11 kHz), which is captured by an array of miniature, inexpensive microphones on the target wearable device. A novel algorithm localizes the finger’s position in 3D space by extracting phase information from the received acoustic signals. We evaluated SoundTrak in a volume of space (20 cm × 16 cm × 11 cm) around a smartwatch and measured an average tracking error of 1.3 cm. We report results from a Fitts’ Law experiment with 10 participants as the evaluation of the real-time prototype. We also present a set of applications supported by this 3D input technique, and discuss the practical challenges that need to be addressed before widespread use.

Published in Proc. ACM Interact. Mob. Wearable Ubiquitous Technol. 1, 2, Article 30 (June 2017) / UbiComp ’17. [Paper] [Video]
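The two stages of the approach can be sketched as follows: the unwrapped phase of the received tone converts to a per-microphone distance estimate (one wavelength of travel per 2π of phase), and a least-squares fit recovers the position whose distances to the known microphone locations best match those estimates. The geometry and values below are illustrative, not the paper’s algorithm verbatim.

```python
import numpy as np
from scipy.optimize import least_squares

C = 343.0              # speed of sound, m/s
F = 11000.0            # tone frequency, Hz
WAVELENGTH = C / F

def distance_change(phase):
    """Unwrapped phase track (radians) -> distance change (meters)."""
    return np.unwrap(phase) / (2 * np.pi) * WAVELENGTH

def localize(mics, distances, x0):
    """Position whose distances to the mics best match the estimates."""
    residual = lambda p: np.linalg.norm(mics - p, axis=1) - distances
    return least_squares(residual, x0).x

# Example: four mics on a watch face, finger 10 cm above and to the side.
mics = np.array([[0.02, 0, 0], [-0.02, 0, 0], [0, 0.02, 0], [0, -0.02, 0]])
true_pos = np.array([0.05, 0.03, 0.10])
dists = np.linalg.norm(mics - true_pos, axis=1)
print(localize(mics, dists, x0=np.array([0.0, 0.0, 0.05])))
# -> approximately [0.05, 0.03, 0.10]
```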

Understanding the Cost of Driving Trips

Driving is the second-highest expense for the average American household. Yet few people know the total cost of owning and operating their vehicles, and most cannot accurately estimate how much a common driving trip (like a daily commute) costs. There is an increasing number of viable alternatives for personal transportation, such as car services (e.g., Uber, Lyft), in addition to ridesharing, transit, biking, and walking. Cost is one factor in transportation mode choice, and awareness of the cost of driving helps people make better-informed decisions. To bridge this awareness gap, we built and deployed a system that makes the total cost of each driving trip (including depreciation, maintenance, insurance, and fuel) visible to the user. After this intervention, participants were able to estimate the costs of their driving commutes more accurately and confidently, and to transfer this knowledge to other trips for which they had not seen a cost.

Published in Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems (CHI ’17), ACM, New York, NY, USA, 430-434. [Paper]
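The cost model itself is straightforward to sketch: fixed ownership costs are prorated per mile and added to per-mile maintenance and fuel. The rates below are made-up placeholders, not figures from the study.

```python
def trip_cost(miles, mpg=28.0, fuel_price=3.50,
              depreciation_per_mile=0.15, maintenance_per_mile=0.09,
              insurance_per_mile=0.07):
    """Total trip cost: fuel plus prorated ownership and upkeep costs."""
    fuel = miles / mpg * fuel_price
    ownership = miles * (depreciation_per_mile + insurance_per_mile)
    upkeep = miles * maintenance_per_mile
    return fuel + ownership + upkeep

print(f"10-mile commute: ${trip_cost(10):.2f}")   # -> $4.35
```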

Bioacoustics-based human body mediated communication


Transmitting text between two people via Morse code

We present an acoustics-based method that utilizes the human body as a communication channel to propagate information across different devices. Through a set of experiments with eight participants, we demonstrate that acoustic signals under 20 kHz can be propagated within or between human bodies, or even between the human body and the environment. We can detect the existence of touch contact by simply matching the frequency response curves of the received signals; with this approach we achieved an accuracy of 100% in detecting the presence of contact. These capabilities enable new opportunities for more natural human-computer interaction experiences and secure personal area networks. Using our technology, we built a system that transmits text through the body with frequency-shift keying (FSK) modulation, and we discuss the opportunities and challenges for various potential applications.

Published in IEEE Computer 50, no. 2 (Feb. 2017): 36-46. [Paper] [Video]
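A minimal sketch of FSK over such a channel: each bit selects one of two sub-20 kHz tones, and the receiver recovers bits by correlating each symbol against both tones. The frequencies and symbol length are illustrative assumptions, not the parameters used in the paper.

```python
import numpy as np

FS = 44100                # sample rate, Hz
F0, F1 = 8000, 12000      # tones for bit 0 / bit 1, both under 20 kHz
SYMBOL = 441              # samples per bit (10 ms)

def fsk_modulate(bits):
    t = np.arange(SYMBOL) / FS
    return np.concatenate(
        [np.sin(2 * np.pi * (F1 if b else F0) * t) for b in bits])

def fsk_demodulate(signal):
    t = np.arange(SYMBOL) / FS
    ref0, ref1 = np.sin(2 * np.pi * F0 * t), np.sin(2 * np.pi * F1 * t)
    bits = []
    for i in range(0, len(signal) - SYMBOL + 1, SYMBOL):
        chunk = signal[i:i + SYMBOL]
        # The stronger correlation decides the bit.
        bits.append(int(abs(chunk @ ref1) > abs(chunk @ ref0)))
    return bits

bits = [1, 0, 1, 1, 0, 0, 1, 0]            # e.g. one ASCII character
print(fsk_demodulate(fsk_modulate(bits)))  # -> [1, 0, 1, 1, 0, 0, 1, 0]
```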

TapSkin: Recognizing On-Skin Input for Smartwatches

The touchscreen has been the dominant input surface for smartphones and smartwatches. However, a smartwatch’s small size compared to a phone limits the richness of the input gestures that can be supported. We present TapSkin, an interaction technique that recognizes up to 11 distinct tap gestures on the skin around the watch using only the inertial sensors and microphone on a commodity smartwatch. An evaluation with 12 participants shows our system can provide classification accuracies from 90.69% to 97.32% in three gesture families: number pad, d-pad, and corner taps. We discuss the opportunities and remaining challenges for widespread use of this technique to increase input richness on a smartwatch without requiring further on-body instrumentation.
Published at the 2016 ACM International Conference on Interactive Surfaces and Spaces (ISS ’16). Acceptance rate: 33/119 (27.7%). [Paper] [Video]
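A sketch of the pipeline such a technique implies: a tap registers as a spike in accelerometer magnitude, and a short window of inertial and audio data around the spike becomes one feature vector for classifying the tap location. The thresholds and window sizes below are assumptions.

```python
import numpy as np

def detect_taps(acc_mag, threshold=2.5, refractory=20):
    """Sample indices where accel magnitude (in g) spikes above threshold."""
    taps, last = [], -refractory
    for i, a in enumerate(acc_mag):
        if a > threshold and i - last >= refractory:
            taps.append(i)
            last = i
    return taps

def tap_features(acc, audio, center, half=10):
    """One feature vector from the IMU and audio around a detected tap."""
    a = acc[max(0, center - half):center + half]
    s = audio[max(0, center - half):center + half]
    return np.hstack([a.mean(0), a.std(0), a.max(0),
                      [np.abs(s).max(), (s ** 2).sum()]])
```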

WatchOut: Extending Interactions on a Smartwatch with Inertial Sensing

Current interactions on a smartwatch are generally limited to a tiny touchscreen, physical buttons or knobs, and speech. We present WatchOut, a suite of interaction techniques that includes three families of tap and swipe gestures which extend input modalities to the watch’s case, bezel, and band. We describe the implementation of a user-independent gesture recognition pipeline based on data from the watch’s embedded inertial sensors. In a study with 12 participants using both a round- and a square-screen watch, the average gesture classification accuracies ranged from 88.7% to 99.4%. We demonstrate applications of this richer interaction capability, and discuss the strengths, limitations, and future potential of this work.
Published at the 20th International Symposium on Wearable Computers (ISWC ’16); 18 of 132 submissions were accepted as full papers. [Paper] [Video]
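User-independent performance of this kind is typically validated leave-one-user-out: train on eleven participants, test on the held-out one, and rotate. Here is a sketch with scikit-learn on stand-in data; the feature dimensions, labels, and classifier are placeholders.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

rng = np.random.default_rng(3)
X = rng.normal(size=(1200, 24))          # stand-in gesture feature vectors
y = rng.integers(0, 9, size=1200)        # stand-in gesture labels
groups = np.repeat(np.arange(12), 100)   # participant id per sample

# Each fold holds out all samples from one participant.
scores = cross_val_score(RandomForestClassifier(), X, y,
                         groups=groups, cv=LeaveOneGroupOut())
print(scores.mean())
```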