FingerTrak: Continuous 3D Hand Pose Tracking by Deep Learning Hand Silhouettes Captured by Miniature Thermal Cameras on Wrist

FingerTrak is a minimally obtrusive wristband that enables continuous 3D finger tracking and hand pose estimation with four miniature thermal cameras mounted closely on a form-fitting wristband. FingerTrak explores the feasibility of continuously reconstructing the entire hand posture (20 finger joint positions) without the need to see all of the fingers. We demonstrate that our system is able to estimate the entire hand posture by observing only the outline of the hand, i.e., hand silhouettes seen from the wrist by low-resolution (32 × 24) thermal cameras. A customized deep neural network is developed to learn to “stitch” these multi-view images and estimate the 20 joint positions in 3D space. Our user study with 11 participants shows that the system achieves an average angular error of 6.46° when tested under the same background, and 8.06° when tested under a different background. FingerTrak also shows encouraging results after re-mounting the device and has the potential to reconstruct some complicated poses.
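A minimal sketch of what such a multi-view regression network could look like, assuming four 32 × 24 thermal frames per sample, a small CNN encoder shared across views, and a fusion head that regresses the 20 × 3 joint coordinates (the layer sizes and module names are illustrative assumptions, not the paper's architecture):

```python
# Hedged sketch: per-view CNN encoders fused into a 20x3 joint regressor.
# Input and layer sizes are illustrative assumptions, not the published architecture.
import torch
import torch.nn as nn

class MultiViewJointRegressor(nn.Module):
    def __init__(self, n_views=4, n_joints=20):
        super().__init__()
        # One small CNN encoder shared across the four wrist-mounted views.
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 32x24 -> 16x12
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 16x12 -> 8x6
            nn.Flatten(),
            nn.Linear(32 * 8 * 6, 128), nn.ReLU(),
        )
        # Fusion head "stitches" the per-view features and regresses 3D joints.
        self.head = nn.Sequential(
            nn.Linear(128 * n_views, 256), nn.ReLU(),
            nn.Linear(256, n_joints * 3),
        )
        self.n_joints = n_joints

    def forward(self, views):                       # views: (batch, n_views, 1, 32, 24)
        feats = [self.encoder(views[:, i]) for i in range(views.shape[1])]
        joints = self.head(torch.cat(feats, dim=1))
        return joints.view(-1, self.n_joints, 3)

# Example: a batch of 8 samples, each with four 32x24 thermal frames.
x = torch.randn(8, 4, 1, 32, 24)
print(MultiViewJointRegressor()(x).shape)           # torch.Size([8, 20, 3])
```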

Published in IMWUT / UbiComp ’20. [Paper] [Video]

 

 

WristWash: Towards Automatic Handwashing Assessment Using a Wrist-worn Device

Washing hands is one of the easiest yet most effective ways to prevent the spread of illness and disease. However, failure to adhere to thorough handwashing routines is a substantial problem worldwide; in hospital operations, for example, a lack of hygiene leads to healthcare-associated infections. We present WristWash, a wrist-worn sensing platform that integrates an inertial measurement unit with a Hidden Markov Model-based analysis method to enable automated assessment of handwashing routines according to the recommendations of the World Health Organization (WHO). We evaluated WristWash in a case study with 12 participants. WristWash can successfully recognize the 13 steps of the WHO handwashing procedure with an average accuracy of 92% with user-dependent models and 85% with user-independent models. We further explored the system’s robustness in a second case study with six participants, this time in an unconstrained environment, to test variations in the handwashing routine and to show the potential for real-world deployments.
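A minimal sketch of the HMM idea, assuming each WHO step is modeled as one hidden state and each observation is a feature vector computed over a short IMU window (hmmlearn, the feature layout, and the hyperparameters are assumptions, not the paper's exact pipeline):

```python
# Hedged sketch: Viterbi-decoding handwashing steps from windowed IMU features.
# Feature layout and hyperparameters are illustrative assumptions.
import numpy as np
from hmmlearn import hmm

N_STEPS = 13       # WHO handwashing steps, one hidden state per step
N_FEATURES = 12    # e.g., mean/variance of accelerometer + gyroscope axes per window

# Training: a sequence of per-window feature vectors (random placeholders here).
rng = np.random.default_rng(0)
train = rng.normal(size=(600, N_FEATURES))           # one long session, 600 windows
model = hmm.GaussianHMM(n_components=N_STEPS, covariance_type="diag", n_iter=50)
model.fit(train)          # unsupervised fit; labeled data would map states to WHO steps

# Inference: Viterbi-decode a new session into a per-window step sequence.
test = rng.normal(size=(200, N_FEATURES))
states = model.predict(test)                          # one state index per window
print(states[:20])
```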

 

Published at ISWC ’18. [Paper]

FingerPing: Recognizing fine-grained hand poses using active acoustic on-body sensing

FingerPing is a novel sensing technique that recognizes various fine-grained hand poses by analyzing acoustic resonance features. A surface transducer mounted on a thumb ring injects acoustic chirps (20 Hz to 6,000 Hz) into the body. Four receivers distributed on the wrist and thumb collect the chirps. Different hand poses create distinct paths for the acoustic chirps to travel, producing unique frequency responses at the four receivers. We demonstrate that FingerPing can differentiate up to 22 hand poses, including the thumb touching each of the 12 phalanges on the hand as well as 10 American Sign Language poses. A user study with 16 participants showed that our system can recognize these two sets of poses with accuracies of 93.77% and 95.64%, respectively. We discuss the opportunities and remaining challenges for the widespread use of this input technique.
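A minimal sketch of this kind of pipeline, assuming the per-receiver frequency response is summarized by pooled FFT magnitudes over the chirp band and fed to an off-the-shelf classifier (the sample rate, binning, and classifier are illustrative assumptions, not the paper's implementation):

```python
# Hedged sketch: frequency-response features from four receivers -> pose classifier.
# Sample rate, band pooling, and classifier choice are illustrative assumptions.
import numpy as np
from scipy.signal import chirp
from sklearn.svm import SVC

FS = 44100
t = np.linspace(0, 0.5, int(FS * 0.5), endpoint=False)
excitation = chirp(t, f0=20, f1=6000, t1=t[-1])       # the 20 Hz - 6 kHz sweep the ring would inject

def response_features(received, n_bins=64):
    """Log-magnitude spectrum of one receiver, pooled into coarse bins over 20 Hz - 6 kHz."""
    spec = np.abs(np.fft.rfft(received))
    freqs = np.fft.rfftfreq(len(received), 1 / FS)
    band = spec[(freqs >= 20) & (freqs <= 6000)]
    pooled = band[: len(band) // n_bins * n_bins].reshape(n_bins, -1).mean(axis=1)
    return np.log1p(pooled)

def pose_features(four_channel_recording):
    """Concatenate features from the four wrist/thumb receivers."""
    return np.concatenate([response_features(ch) for ch in four_channel_recording])

# Placeholder training data: (n_samples, 4 receivers, len(t)) recordings with pose labels.
rng = np.random.default_rng(1)
X = np.stack([pose_features(rng.normal(size=(4, len(t)))) for _ in range(40)])
y = rng.integers(0, 22, size=40)                      # 22 pose classes
clf = SVC(kernel="rbf").fit(X, y)
print(clf.predict(X[:5]))
```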

Published at CHI ’18. [Paper]

FingerSound: Recognizing unistroke thumb gestures using a ring

FingerSound Gesture Samples

We introduce FingerSound, an input technology that recognizes unistroke thumb gestures, which are easy to learn and can be performed eyes-free. The gestures are performed using a thumb-mounted ring comprising a contact microphone and a gyroscope. A K-Nearest-Neighbor (KNN) model with a Dynamic Time Warping (DTW) distance function is built to recognize up to 42 common unistroke gestures. A user study in which real-time classification results were given shows an accuracy of 92%-98% with a model built from only 3 training samples per gesture. Based on the study results, we further discuss the opportunities, challenges, and practical limitations of deploying FingerSound in real-world applications.
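A minimal sketch of DTW-based nearest-neighbor matching, assuming each gesture is recorded as a short multichannel sequence and matched against a few stored templates per gesture (the feature layout and sequence lengths are placeholders):

```python
# Hedged sketch: 1-nearest-neighbor gesture matching with a DTW distance.
# Sequence contents and lengths are illustrative placeholders.
import numpy as np

def dtw(a, b):
    """Dynamic Time Warping distance between two (time, features) sequences."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def classify(query, templates):
    """Return the label of the closest stored template (1-NN under DTW)."""
    return min(templates, key=lambda item: dtw(query, item[1]))[0]

# Three training samples per gesture, as in the study; here random placeholders.
rng = np.random.default_rng(2)
templates = [(label, rng.normal(size=(50, 4)))        # 50 samples x (gyro x/y/z + mic energy)
             for label in ["a", "b", "c"] for _ in range(3)]
query = rng.normal(size=(45, 4))
print(classify(query, templates))
```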

Published in Proc. ACM Interact. Mob. Wearable Ubiquitous Technol. (IMWUT), Issue 3 / UbiComp ’17. [Paper] [Video]

FingOrbits: Interaction with Wearables Using Synchronized Thumb Movements

FingOrbits System Overview

We present FingOrbits, a wearable interaction technique based on synchronized thumb movements. A thumb-mounted ring with an inertial measurement unit and a contact microphone captures thumb movements when rubbing against the other fingers. Spectral information about the movements is extracted and fed into a classification backend that facilitates gesture discrimination. FingOrbits enables up to 12 different gestures by detecting three rates of movement against each of the four fingers. Through a user study with 10 participants (7 novices, 3 experts), we demonstrate that FingOrbits can distinguish up to 12 thumb gestures with an accuracy of 89% to 99%, rendering the approach applicable to practical applications.
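A minimal sketch of one way such spectral features could be computed and classified, assuming Welch power spectra of the IMU axes and the contact microphone are concatenated into one feature vector (sample rates, window sizes, and the classifier are assumptions, not the paper's backend):

```python
# Hedged sketch: spectral features from IMU + contact-mic windows -> gesture classifier.
# Sample rates, window sizes, and the classifier are illustrative assumptions.
import numpy as np
from scipy.signal import welch
from sklearn.ensemble import RandomForestClassifier

def window_features(imu_window, mic_window, fs_imu=100, fs_mic=8000):
    """Concatenate power-spectral-density features of the IMU axes and the contact mic."""
    feats = []
    for axis in imu_window.T:                         # e.g., 6 IMU axes
        _, psd = welch(axis, fs=fs_imu, nperseg=64)
        feats.append(psd)
    _, mic_psd = welch(mic_window, fs=fs_mic, nperseg=256)
    feats.append(mic_psd)
    return np.concatenate(feats)

# Placeholder training set: 12 classes = 4 fingers x 3 rubbing rates.
rng = np.random.default_rng(3)
X = np.stack([window_features(rng.normal(size=(200, 6)), rng.normal(size=(4000,)))
              for _ in range(60)])
y = rng.integers(0, 12, size=60)
clf = RandomForestClassifier(n_estimators=100).fit(X, y)
print(clf.predict(X[:3]))
```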

Published at the 2017 International Symposium on Wearable Computers (ISWC ’17). [Paper] [Video]

SoundTrak: Continuous 3D tracking of a finger using active acoustics

SoundTrak System Overview

The small size of wearable devices limits the efficiency and scope of possible user interactions, as input is typically constrained to two dimensions: the touchscreen surface. We present SoundTrak, an active acoustic sensing technique that enables a user to interact with wearable devices in the surrounding 3D space by continuously tracking the finger position with high resolution. The user wears a ring with an embedded miniature speaker emitting an acoustic signal at a specific frequency (e.g., 11 kHz), which is captured by an array of miniature, inexpensive microphones on the target wearable device. A novel algorithm localizes the finger’s position in 3D space by extracting phase information from the received acoustic signals. We evaluated SoundTrak in a volume of space (20 cm × 16 cm × 11 cm) around a smartwatch and show an average accuracy of 1.3 cm. We report results from a Fitts’ Law experiment with 10 participants as the evaluation of the real-time prototype. We also present a set of applications supported by this 3D input technique, and discuss the practical challenges that need to be addressed before widespread use.
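A minimal sketch of the phase idea for a single microphone, assuming an 11 kHz tone and I/Q demodulation to turn the unwrapped carrier phase into a change in path length; the full 3D position solve over the microphone array is omitted, and all constants are illustrative:

```python
# Hedged sketch: tracking speaker-to-mic path-length change from the phase of a received tone.
# Sample rate, carrier, and frame size are illustrative placeholders.
import numpy as np

FS = 48000          # microphone sample rate
F0 = 11000          # carrier frequency emitted by the ring speaker
C = 343.0           # speed of sound, m/s

def path_length_change(received, frame=480):
    """Estimate change in speaker-to-mic distance (metres) per 10 ms frame from carrier phase."""
    t = np.arange(len(received)) / FS
    i = received * np.cos(2 * np.pi * F0 * t)          # in-phase mixing
    q = received * np.sin(2 * np.pi * F0 * t)          # quadrature mixing
    n = len(received) // frame
    iq = (i[: n * frame] + 1j * q[: n * frame]).reshape(n, frame).mean(axis=1)
    phases = np.unwrap(np.angle(iq))                    # carrier phase per frame
    wavelength = C / F0                                  # ~3.1 cm at 11 kHz
    return (phases - phases[0]) / (2 * np.pi) * wavelength

# Simulate a finger moving 2 cm away from one microphone over one second.
t = np.arange(FS) / FS
distance = 0.10 + 0.02 * t                               # 10 cm -> 12 cm
signal = np.cos(2 * np.pi * F0 * (t - distance / C))
print(path_length_change(signal)[-1])                    # ~0.02 (metres)
```

With several microphones at known positions, the same per-channel ranges would feed a least-squares multilateration step to recover the 3D finger position.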

Proc. ACM Interact. Mob. Wearable Ubiquitous Technol. 1, 2, Article 30 (June 2017) / UbiComp 2017. [Paper] [Video]

Understanding the Cost of Driving Trips

Driving is the second-highest expense for the average American household. Yet few people know the total cost of owning and operating their vehicles, and most cannot accurately estimate how much a common driving trip (like a daily commute) costs. There are an increasing number of viable alternatives for personal transportation, such as car services (e.g., Uber, Lyft), in addition to ridesharing, transit, biking, and walking. Cost is one factor in transportation mode choice, and awareness of the cost of driving is useful in making better-informed decisions. To bridge this awareness gap, we built and deployed a system that makes the total cost of each driving trip (including depreciation, maintenance, insurance, and fuel) visible to the user. After this intervention, participants were able to estimate the costs of their driving commutes more accurately and confidently, and to transfer this knowledge to other trips for which they had not seen a cost.
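A minimal sketch of the kind of per-trip cost breakdown such a system can surface, assuming simple per-mile and per-year rates (all rates are placeholders, not the figures used in the deployed system):

```python
# Hedged sketch: itemized cost of a single driving trip.
# All rates are illustrative placeholders, not the deployed system's values.
def trip_cost(miles, minutes,
              fuel_price_per_gal=3.50, mpg=28.0,
              depreciation_per_mile=0.17, maintenance_per_mile=0.09,
              insurance_per_year=1200.0, minutes_driven_per_year=18000.0):
    fuel = miles / mpg * fuel_price_per_gal
    depreciation = miles * depreciation_per_mile
    maintenance = miles * maintenance_per_mile
    # Apportion the fixed insurance cost by this trip's share of yearly driving time.
    insurance = insurance_per_year * (minutes / minutes_driven_per_year)
    total = fuel + depreciation + maintenance + insurance
    return {"fuel": fuel, "depreciation": depreciation,
            "maintenance": maintenance, "insurance": insurance, "total": total}

# Example: a 12-mile, 25-minute commute.
print(trip_cost(12, 25))
```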

Published in Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems (CHI ’17). ACM, New York, NY, USA, 430-434. [Paper]

Bioacoustics-based human-body-mediated communication

 

Transmitting text between two people via Morse code

We present an acoustics-based method that utilizes the human body as a communication channel to propagate information across different devices. Through a set of experiments with eight participants, we demonstrate that acoustic signals under 20 kHz can be propagated within or between human bodies, or even between the human body and the environment. We can detect the existence of touch contact by simply matching the frequency response curves of the received signals; with this approach we achieved an accuracy of 100% in detecting the presence of contact. These capabilities enable new opportunities for more natural human-computer interaction experiences and secure personal area networks. Using our technology, we built a system that transmits text through the body with frequency-shift-keying (FSK) modulation, and we discuss the opportunities and challenges for various potential applications.
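A minimal sketch of binary FSK over an audio channel, assuming two sub-20 kHz tones and a fixed bit duration (the carrier frequencies, bit rate, and framing are illustrative assumptions, not the parameters of the deployed system):

```python
# Hedged sketch: binary FSK modulation of a text string into an audio waveform, and back.
# Carrier frequencies, bit rate, and sample rate are illustrative placeholders.
import numpy as np

FS = 44100                        # sample rate
F_MARK, F_SPACE = 18000, 16000    # two sub-20 kHz tones for bits 1 and 0
BIT_DURATION = 0.05               # 50 ms per bit -> 20 bit/s

def fsk_modulate(text):
    bits = []
    for byte in text.encode("ascii"):
        bits.extend((byte >> i) & 1 for i in range(7, -1, -1))   # MSB-first
    t = np.arange(int(FS * BIT_DURATION)) / FS
    tones = {1: np.sin(2 * np.pi * F_MARK * t), 0: np.sin(2 * np.pi * F_SPACE * t)}
    return np.concatenate([tones[b] for b in bits])

def fsk_demodulate(signal):
    n = int(FS * BIT_DURATION)
    t = np.arange(n) / FS
    bits = []
    for i in range(0, len(signal) - n + 1, n):
        frame = signal[i:i + n]
        # Compare correlation energy against the two candidate tones.
        mark = abs(np.dot(frame, np.exp(-2j * np.pi * F_MARK * t)))
        space = abs(np.dot(frame, np.exp(-2j * np.pi * F_SPACE * t)))
        bits.append(1 if mark > space else 0)
    chars = [chr(int("".join(map(str, bits[i:i + 8])), 2)) for i in range(0, len(bits), 8)]
    return "".join(chars)

print(fsk_demodulate(fsk_modulate("hi")))   # -> "hi"
```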

Published in IEEE Computer 50, no. 2 (Feb. 2017): 36-46. [Video] [Paper]

TapSkin: Recognizing On-Skin Input for Smartwatches

The touchscreen has been the dominant input surface for smartphones and smartwatches. However, a smartwatch screen’s small size compared to a phone’s limits the richness of the input gestures it can support. We present TapSkin, an interaction technique that recognizes up to 11 distinct tap gestures on the skin around the watch using only the inertial sensors and microphone on a commodity smartwatch. An evaluation with 12 participants shows our system can provide classification accuracies from 90.69% to 97.32% across three gesture families: number pad, d-pad, and corner taps. We discuss the opportunities and remaining challenges for widespread use of this technique to increase input richness on a smartwatch without requiring further on-body instrumentation.
This paper was published at the 2016 ACM International Conference on Interactive Surfaces and Spaces (acceptance rate 33/119, 27.7%). [Paper] [Video]

WatchOut: Extending Interactions on a Smartwatch with Inertial Sensing

Current interactions on a smartwatch are generally limited to a tiny touchscreen, physical buttons or knobs, and speech. We present WatchOut, a suite of interaction techniques that includes three families of tap and swipe gestures which extend input modalities to the watch’s case, bezel, and band. We describe the implementation of a user-independent gesture recognition pipeline based on data from the watch’s embedded inertial sensors. In a study with 12 participants using both a round- and a square-screen watch, the average gesture classification accuracies ranged from 88.7% to 99.4%. We demonstrate applications of this richer interaction capability, and discuss the strengths, limitations, and future potential of this work.
This paper was published at the 20th International Symposium on Wearable Computers (ISWC ’16), where 18 of 132 submissions were accepted as full papers. [Paper] [Video]

Driver Classification Based on Driving Behaviors

In this project, we develop a model capable of classifying drivers based on driving behaviors sensed by low-level sensors. The sensing platform combines data available from the car’s diagnostic port with smartphone sensors. We are interested in arbitrary real-world driving behaviors such as turning, stop-to-start and start-to-stop maneuvers, and braking. We develop a window-based support vector machine model to classify drivers and test it with two datasets collected under different conditions. Furthermore, we evaluate the model using each sensor source (car and phone) independently as well as both sources combined. The average classification accuracies attained with data collected from three different cars, each shared between a couple in a naturalistic environment, were 75.83%, 85.83%, and 86.67% using only phone sensors, only car sensors, and combined car and phone sensors, respectively.
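A minimal sketch of a window-based classification pipeline with a per-trip majority vote, assuming simple statistical features per window (the window length, features, and placeholder data are assumptions, not the study's configuration):

```python
# Hedged sketch: window-based driver classification with a per-trip majority vote.
# Window length, features, and data are illustrative placeholders.
import numpy as np
from sklearn.svm import SVC

def windows(trip, size=100, step=50):
    """Slide a fixed-size window over a (time, channels) trip recording."""
    return [trip[i:i + size] for i in range(0, len(trip) - size + 1, step)]

def features(window):
    """Simple per-channel statistics of one window (mean, std, min, max)."""
    return np.concatenate([window.mean(0), window.std(0), window.min(0), window.max(0)])

rng = np.random.default_rng(4)
# Placeholder trips: 8 channels (e.g., car diagnostics + phone accel/gyro), label = driver id.
trips = [(rng.normal(size=(1000, 8)), driver) for driver in (0, 1) for _ in range(5)]

X = np.stack([features(w) for trip, d in trips for w in windows(trip)])
y = np.concatenate([[d] * len(windows(trip)) for trip, d in trips])
clf = SVC().fit(X, y)

# Classify an unseen trip by majority vote over its window predictions.
test_trip = rng.normal(size=(1000, 8))
votes = clf.predict(np.stack([features(w) for w in windows(test_trip)]))
print(np.bincount(votes).argmax())
```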

This paper was published at IUI ’16, Proceedings of the 21st International Conference on Intelligent User Interfaces (acceptance rate: 24%). [Paper]

Beyond the Touchscreen: An Exploration of Extending Interactions on Commodity Smartphones

This paper is an extended version of BeyondTouch (IUI ’15) [Paper].

Most smartphones today have a rich set of sensors that could be used to infer input (e.g., accelerometer, gyroscope, microphone); however, the primary mode of interaction is still limited to the front-facing touchscreen and several physical buttons on the case. To investigate the potential opportunities for interactions supported by built-in sensors, we present the implementation and evaluation of BeyondTouch, a family of interactions to extend and enrich the input experience of a smartphone. Using only existing sensing capabilities on a commodity smartphone, we offer the user a wide variety of additional inputs on the case of the smartphone and on the surface adjacent to it. Although most of these interactions are implemented with machine learning methods, compact and robust rule-based detection methods can also be applied to recognize some interactions by analyzing the physical characteristics of tapping events on the phone. This article is an extended version of Zhang et al. [2015], which solely covered gestures implemented with machine learning methods. We extend our previous work by adding gestures implemented with rule-based methods, which work well with different users across devices without collecting any training data. We outline the implementation of both the machine learning and the rule-based methods for these interaction techniques and demonstrate empirical evidence of their effectiveness and usability. We also discuss the practicality of BeyondTouch for a variety of application scenarios and compare the two implementation methods.

This paper was accepted to TiiS, ACM Transactions on Interactive Intelligent Systems. [Paper]

BeyondTouch: Extending the Input Language with Built-in Sensors on Commodity Smartphones

While most smartphones today have a rich set of sensors that could be used to infer input (e.g., accelerometer, gyroscope, microphone), the primary mode of interaction is still limited to the front-facing touchscreen and several physical buttons on the case. To investigate the potential opportunities for interactions supported by built-in sensors, we present the implementation and evaluation of BeyondTouch, a family of interactions to extend and enrich the input experience of a smartphone. Using only existing sensing capabilities on a commodity smartphone, we offer the user a wide variety of additional tapping and sliding inputs on the case of the smartphone and on the surface adjacent to it. We outline the implementation of these interaction techniques and demonstrate empirical evidence of their effectiveness and usability. We also discuss the practicality of BeyondTouch for a variety of application scenarios.

This work was published at the Doctoral School of UbiComp ’13 [Paper] and at IUI ’15, Proceedings of the 20th International Conference on Intelligent User Interfaces (acceptance rate 23%). [Paper] [Video]

Inferring Meal Eating Activities in Real World Settings from Ambient Sounds: A Feasibility Study

Dietary self-monitoring has been shown to be an effective method for weight loss, but it remains an onerous task despite recent advances in food journaling systems. Semi-automated food journaling can reduce the effort of logging, but often requires that eating activities be detected automatically. In this work we describe results from a feasibility study conducted in the wild, where eating activities were inferred from ambient sounds captured with a wrist-mounted device; twenty participants wore the device for one day, an average of 5 hours each, while performing normal everyday activities. Our system was able to identify meal eating with an F-score of 79.8% in a person-dependent evaluation, and with 86.6% accuracy in a person-independent evaluation. Our approach is intended to be practical, leveraging off-the-shelf devices with audio sensing capabilities, in contrast to systems for automated dietary assessment based on specialized sensors.

This paper was published at IUI ’15, Proceedings of the 20th International Conference on Intelligent User Interfaces, and received the BEST SHORT PAPER AWARD (1%). [Paper]

Instant Inkjet Circuits: Lab-based Inkjet Printing to Support Rapid Prototyping of UbiComp Devices

This project introduces a low-cost, fast, and accessible technology to support the rapid prototyping of functional electronic devices. Central to this approach of ‘instant inkjet circuits’ is the ability to print highly conductive traces and patterns onto flexible substrates such as paper and plastic films cheaply and quickly. In addition to providing an alternative to breadboarding and conventional printed circuits, we demonstrate how this technique readily supports large-area sensors and high-frequency applications such as antennas. Unlike existing methods for printing conductive patterns, conductivity emerges within a few seconds without the need for special equipment. We demonstrate that this technique is feasible using commodity inkjet printers and commercially available ink, for an initial investment of around US$300. Our main research contribution is to characterize the performance of instant inkjet circuits and illustrate a range of possibilities enabled by several example applications we have built.

This work was published at UbiComp ’13, Proceedings of the 2013 ACM International Joint Conference on Pervasive and Ubiquitous Computing, and received the BEST PAPER AWARD (1%). [Paper]

BackTap: Robust Four-Point Tapping on the Back of an Off-the-shelf Smartphone

BackTap is an interaction technique that extends the input modality of a smartphone with four distinct tap locations on the back of the case. BackTap can be used eyes-free with the phone in a user’s pocket, purse, or armband while walking, or while holding the phone with two hands so as not to occlude the screen with the fingers. We employ three common built-in sensors on the smartphone (microphone, gyroscope, and accelerometer) and feature a lightweight heuristic implementation. In an evaluation with eleven participants and three usage conditions, users were able to tap the four distinct points with 92% to 96% accuracy.
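A minimal sketch of a rule-based pipeline in that spirit: detect a tap from an accelerometer spike gated by microphone energy, then choose a back-case corner from the sign of the gyroscope response (the thresholds and axis conventions are assumptions, not the paper's tuned heuristics):

```python
# Hedged sketch: heuristic tap detection and four-corner localization on the back case.
# Thresholds and axis conventions are illustrative assumptions.
import numpy as np

ACCEL_THRESH = 2.0     # acceleration-magnitude spike that suggests a tap
MIC_THRESH = 0.1       # RMS microphone energy required around the spike

def detect_tap(accel_mag, mic, window=20):
    """Return the sample index of a tap, or None."""
    peak = int(np.argmax(accel_mag))
    if accel_mag[peak] < ACCEL_THRESH:
        return None
    lo, hi = max(0, peak - window), peak + window
    if np.sqrt(np.mean(mic[lo:hi] ** 2)) < MIC_THRESH:
        return None                      # strong accel event but no acoustic impact: ignore
    return peak

def tap_location(gyro, peak, window=20):
    """Map the signs of the rotational response to one of four back-case corners."""
    burst = gyro[max(0, peak - window): peak + window]
    pitch, roll = burst[:, 0].sum(), burst[:, 1].sum()
    vertical = "top" if pitch > 0 else "bottom"
    side = "left" if roll > 0 else "right"
    return f"{vertical}-{side}"

# Placeholder streams: accelerometer magnitude, microphone, 3-axis gyroscope.
rng = np.random.default_rng(5)
accel_mag = rng.normal(1.0, 0.05, 500); accel_mag[250] = 3.0
mic = rng.normal(0, 0.2, 500)
gyro = rng.normal(0, 0.1, (500, 3)); gyro[240:260, 0] += 1.0; gyro[240:260, 1] -= 1.0
peak = detect_tap(accel_mag, mic)
print(tap_location(gyro, peak) if peak is not None else "no tap")
```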

This work was published in the UIST ’13 Adjunct Proceedings of the 26th Annual ACM Symposium on User Interface Software and Technology. [Paper] [Video]

PinkyMenu: Pinky-Finger Gesture-Based Sketching Application for Tablet Computers

PinkyMenu explores using the pinky finger to assist pen-based interaction on a tablet. A two-level menu was built into a painting application to demonstrate the use of the novel gestures: a user slides the pinky finger horizontally or vertically to control the two-level menu. The major contribution of this tool is exploring how the dominant hand can assist pen interaction, creating a new way to build menus and navigation systems without a traditional menu or the non-dominant hand.

This is a class project for CS6456 (Principles of User Interface Software), taught by Keith Edwards at Georgia Tech. [Report] [Video]

CoolMag: A Tangible Interaction Tool to Customize Instruments for Children in Music Education

CoolMag is a tangible interaction tool that enables children to collaboratively create different instruments in music education. With CoolMag, children can learn the basic playing methods of different instruments. It also has the potential to inspire children’s creativity, because children can adopt everyday objects (a broom, a cup, a pen, etc.) as the carriers of novel instruments whose appearance may differ from the traditional ones.

Published at UbiComp ’11, Proceedings of the 13th International Conference on Ubiquitous Computing. [Paper]

T-Maze: A Tangible Programming Tool for Children

T-Maze is a tangible programming tool for children aged 5 to 9. Children can use T-Maze to create their own maze maps and complete maze-escaping tasks with tangible programming blocks and sensors. T-Maze uses a camera to capture, in real time, the programming sequence formed by the arrangement of the wooden blocks, analyzes its semantic correctness, and gives the children immediate feedback. Children can also join the game by controlling the sensors while the program runs. A user study shows that T-Maze is an interesting programming approach for children and is easy to learn and use.

Published at IDC ’11 (acceptance rate 30%). This was the first full paper from China published at the ACM SIGCHI Interaction Design and Children (IDC) conference. Cheng is the only student author. [Paper]