Driver Classification Based on Driving Behaviors

In this project, we develop a model that classifies drivers from their driving behaviors sensed by low-level sensors. The sensing platform combines data available from the car's diagnostic port with smartphone sensors. We are interested in arbitrary real-world driving behaviors such as turning, stop-to-start and start-to-stop transitions, and braking. We develop a window-based support vector machine (SVM) model to classify drivers and test it on two datasets collected under different conditions. Furthermore, we evaluate the model using each sensor source (car and phone) independently and with both sources combined. On data collected from three cars, each shared by a couple in a naturalistic setting, the average classification accuracies were 75.83% using only phone sensors, 85.83% using only car sensors, and 86.67% using car and phone sensors combined.
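
Below is a minimal sketch of the window-based classification idea: slice multichannel sensor streams into overlapping windows, summarize each window with simple statistics, and train an SVM on the window-level features. The channels, window length, statistics, and SVM parameters here are illustrative assumptions, not the configuration used in the paper.

```python
# Illustrative sketch of window-based driver classification with an SVM.
# Sensor channels, window size, and features are assumptions, not the
# exact configuration from the paper.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

def window_features(stream, window=100, step=50):
    """Slice a (samples x channels) sensor stream into overlapping windows
    and summarize each window with simple statistics."""
    feats = []
    for start in range(0, len(stream) - window + 1, step):
        w = stream[start:start + window]
        feats.append(np.concatenate([w.mean(axis=0), w.std(axis=0),
                                     w.min(axis=0), w.max(axis=0)]))
    return np.array(feats)

# Synthetic stand-in data: two drivers, 6 sensor channels
# (e.g., speed, RPM, accelerometer x/y/z, gyroscope z).
rng = np.random.default_rng(0)
driver_a = rng.normal(0.0, 1.0, size=(2000, 6))
driver_b = rng.normal(0.3, 1.2, size=(2000, 6))

fa, fb = window_features(driver_a), window_features(driver_b)
X = np.vstack([fa, fb])
y = np.array([0] * len(fa) + [1] * len(fb))

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
clf.fit(X_train, y_train)
print("window-level accuracy:", clf.score(X_test, y_test))
```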

This paper was published at IUI '16, Proceedings of the 21st International Conference on Intelligent User Interfaces (Acceptance Rate: 24%). [Paper]

Beyond the Touchscreen: An Exploration of Extending Interactions on Commodity Smartphones

This paper is an extended version of BeyondTouch (IUI '15) [Paper].

Most smartphones today have a rich set of sensors that could be used to infer input (e.g., accelerometer, gyroscope, microphone); however, the primary mode of interaction is still limited to the front-facing touchscreen and several physical buttons on the case. To investigate the potential for interactions supported by built-in sensors, we present the implementation and evaluation of BeyondTouch, a family of interactions that extend and enrich the input experience of a smartphone. Using only the existing sensing capabilities of a commodity smartphone, we offer the user a wide variety of additional inputs on the case and on the surface adjacent to the smartphone. Although most of these interactions are implemented with machine learning methods, compact and robust rule-based detection methods can also recognize some interactions by analyzing the physical characteristics of tapping events on the phone. This article is an extended version of Zhang et al. [2015], which covered only the gestures implemented with machine learning methods. We extend that work by adding gestures implemented with rule-based methods, which work well across different users and devices without collecting any training data. We outline the implementation of both the machine learning and rule-based methods for these interaction techniques and demonstrate empirical evidence of their effectiveness and usability. We also discuss the practicality of BeyondTouch for a variety of application scenarios and compare the two implementation methods.
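
As a concrete illustration of the rule-based side, the sketch below flags a tap when a short accelerometer spike coincides with a gyroscope rotation, and uses the rotation's sign to guess which side of the case was tapped. The thresholds, axis conventions, and refractory period are assumptions for illustration, not the detection rules from the paper.

```python
# Hypothetical rule-based tap detector: flag a tap when a short accelerometer
# spike coincides with a gyroscope rotation, then use the rotation's sign to
# guess the tap side. Thresholds and axis conventions are illustrative only.
import numpy as np

ACCEL_SPIKE = 1.5   # m/s^2 above baseline (assumed threshold)
GYRO_MIN = 0.2      # rad/s (assumed threshold)

def detect_taps(accel_z, gyro_y, fs=100):
    """accel_z, gyro_y: 1-D arrays sampled at fs Hz. Returns (time, side) pairs."""
    baseline = np.median(accel_z)
    taps = []
    i = 0
    while i < len(accel_z):
        if abs(accel_z[i] - baseline) > ACCEL_SPIKE and abs(gyro_y[i]) > GYRO_MIN:
            side = "left" if gyro_y[i] > 0 else "right"
            taps.append((i / fs, side))
            i += int(0.15 * fs)   # refractory period so one tap isn't counted twice
        else:
            i += 1
    return taps
```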

This paper was accepted to ACM Transactions on Interactive Intelligent Systems (TiiS). [Paper]

BeyondTouch: Extending the Input Language with Built-in Sensors on Commodity Smartphones

While most smartphones today have a rich set of sensors that could be used to infer input (e.g., accelerometer, gyroscope, microphone), the primary mode of interaction is still limited to the front-facing touchscreen and several physical buttons on the case. To investigate the potential opportunities for interactions supported by built-in sensors, we present the implementation and evaluation of BeyondTouch, a family of interactions to extend and enrich the input experience of a smartphone. Using only existing sensing capabilities on a commodity smartphone, we offer the user a wide variety of additional tapping and sliding inputs on the case of the smartphone and on the surface adjacent to it. We outline the implementation of these interaction techniques and demonstrate empirical evidence of their effectiveness and usability. We also discuss the practicality of BeyondTouch for a variety of application scenarios.

This work was published at the Doctoral School of UbiComp '13 [Paper] and at IUI '15, Proceedings of the 20th International Conference on Intelligent User Interfaces (Acceptance Rate: 23%). [Paper] [Video]

Inferring Meal Eating Activities in Real World Settings from Ambient Sounds: A Feasibility Study

Dietary self-monitoring has been shown to be an effective method for weight loss, but it remains an onerous task despite recent advances in food journaling systems. Semi-automated food journaling can reduce the effort of logging, but it often requires that eating activities be detected automatically. In this work we describe results from a feasibility study conducted in the wild, where eating activities were inferred from ambient sounds captured with a wrist-mounted device; twenty participants wore the device for one day, an average of 5 hours each, while performing normal everyday activities. Our system identified meal eating with an F-score of 79.8% in a person-dependent evaluation and with 86.6% accuracy in a person-independent evaluation. Our approach is intended to be practical, leveraging off-the-shelf devices with audio sensing capabilities, in contrast to systems for automated dietary assessment based on specialized sensors.
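
A minimal sketch of the underlying idea: classify fixed-length audio frames as eating versus non-eating from coarse spectral features. The frame length, feature set, and random-forest classifier are assumptions for illustration, not the pipeline used in the study; meal detection would further aggregate frame-level predictions over longer windows.

```python
# Illustrative sketch: classify fixed-length audio frames as eating vs. other
# using simple spectral features and a random forest. Frame length, features,
# and classifier are assumptions, not the paper's pipeline.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def frame_features(audio, sr=16000, frame_s=1.0):
    """Split mono audio into non-overlapping frames and compute
    log-energy plus coarse spectral-band energies per frame."""
    n = int(sr * frame_s)
    feats = []
    for start in range(0, len(audio) - n + 1, n):
        frame = audio[start:start + n]
        spec = np.abs(np.fft.rfft(frame * np.hanning(n)))
        bands = np.array_split(spec, 8)                 # 8 coarse frequency bands
        band_energy = [np.log(b.sum() + 1e-9) for b in bands]
        feats.append([np.log(np.sum(frame ** 2) + 1e-9)] + band_energy)
    return np.array(feats)

# Synthetic stand-in for labeled "eating" / "non-eating" audio (60 s each).
rng = np.random.default_rng(1)
eating = rng.normal(0.02, 0.01, size=16000 * 60)
other = rng.normal(0.0, 0.02, size=16000 * 60)

fe, fo = frame_features(eating), frame_features(other)
X = np.vstack([fe, fo])
y = np.array([1] * len(fe) + [0] * len(fo))

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
```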

This paper was published at IUI '15, Proceedings of the 20th International Conference on Intelligent User Interfaces, and received the BEST SHORT PAPER AWARD (1%). [Paper]

Instant Inkjet Circuits: Lab-based Inkjet Printing to Support Rapid Prototyping of UbiComp Devices

This project introduces a low-cost, fast, and accessible technology to support the rapid prototyping of functional electronic devices. Central to this approach of ‘instant inkjet circuits’ is the ability to print highly conductive traces and patterns onto flexible substrates such as paper and plastic films cheaply and quickly. In addition to providing an alternative to breadboarding and conventional printed circuits, we demonstrate how this technique readily supports large-area sensors and high-frequency applications such as antennas. Unlike existing methods for printing conductive patterns, conductivity emerges within a few seconds without the need for special equipment. We demonstrate that this technique is feasible using commodity inkjet printers and commercially available ink, for an initial investment of around US$300. Our main research contribution is to characterize the performance of instant inkjet circuits and to illustrate the range of possibilities they enable through several example applications we have built.

This work was published at UbiComp '13, Proceedings of the 2013 ACM International Joint Conference on Pervasive and Ubiquitous Computing, and received the BEST PAPER AWARD (1%). [Paper]

BackTap: Robust Four-Point Tapping on the Back of an Off-the-shelf Smartphone

BackTap is an interaction technique that extends a smartphone's input modality with four distinct tap locations on the back of the case. BackTap can be used eyes-free with the phone in a user's pocket, purse, or armband while walking, or while holding the phone with two hands so as not to occlude the screen with the fingers. We use three common built-in sensors (microphone, gyroscope, and accelerometer) and a lightweight heuristic implementation. In an evaluation with eleven participants and three usage conditions, users were able to tap the four distinct points with 92% to 96% accuracy.
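
The sketch below illustrates what such a lightweight heuristic could look like once a tap has been detected (for example, from a microphone or accelerometer spike): the signs of the induced rotation around the phone's x and y axes pick one of four quadrants on the back case. The axis conventions are assumptions, not the rules used in BackTap.

```python
# Hypothetical heuristic for BackTap-style localization: once a tap is
# detected, use the signs of the induced rotation around the phone's x and y
# axes to pick one of four quadrants on the back case. Axis conventions and
# the mapping to quadrants are illustrative assumptions.
def locate_tap(gyro_x_peak, gyro_y_peak):
    """gyro_*_peak: signed peak angular velocity (rad/s) right after the tap."""
    vertical = "top" if gyro_x_peak > 0 else "bottom"
    horizontal = "left" if gyro_y_peak > 0 else "right"
    return f"{vertical}-{horizontal}"

print(locate_tap(0.4, -0.3))   # e.g. "top-right"
```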

This work was published in the UIST '13 Adjunct Proceedings, the adjunct publication of the 26th Annual ACM Symposium on User Interface Software and Technology. [Paper] [Video]

PinkyMenu: Gesture with Pinky Finger Based Sketching Application for Tablet Computers

PinkyMenu explores using the pinky finger to assist pen-based interaction on tablets. A two-level menu was built into a painting application to demonstrate the use of the novel gestures: a user slides the pinky finger horizontally or vertically to control the two menu levels. The main contribution of this tool is exploring how the dominant hand can assist pen interaction, creating a new way to build menus and navigation systems without a traditional menu or the non-dominant hand.
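
A minimal sketch of the gesture side, classifying a pinky slide as horizontal or vertical from its start and end touch points; the distance threshold and the mapping to menu levels are assumptions for illustration, not PinkyMenu's actual implementation.

```python
# Illustrative classification of a pinky slide as horizontal or vertical
# from its start and end touch points; the distance threshold is assumed.
def classify_slide(x0, y0, x1, y1, min_dist=40):
    dx, dy = x1 - x0, y1 - y0
    if dx * dx + dy * dy < min_dist * min_dist:
        return None                       # too short to count as a slide
    return "horizontal" if abs(dx) > abs(dy) else "vertical"

# A horizontal slide might switch the top-level menu category, while a
# vertical slide might select an item within it.
print(classify_slide(100, 500, 260, 510))   # "horizontal"
```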

This was a class project for CS6456 (Principles of User Interface Software), taught by Keith Edwards at Georgia Tech. [Report] [Video]

CoolMag: A Tangible Interaction Tool to Customize Instruments for Children in Music Education

CoolMag is a tangible interaction tool that enables children to create different instruments collaboratively in music education. With CoolMag, children can learn the basic playing methods of different instruments. It also has the potential to inspire children's creativity, because children can adopt everyday objects (a broom, a cup, a pen, etc.) as the carriers of novel instruments whose appearance may differ from the traditional ones.

Published at UbiComp '11, Proceedings of the 13th International Conference on Ubiquitous Computing. [Paper]

T-Maze: A Tangible Programming Tool for Children

T-Maze is a tangible programming tool for children aged 5 to 9. Children can use T-Maze to create their own maze maps and complete maze-escape tasks with tangible programming blocks and sensors. T-Maze uses a camera to capture, in real time, the programming sequence formed by the arrangement of the wooden blocks; the sequence is analyzed for semantic correctness so that children receive immediate feedback. Children can also join in the game by controlling the sensors while the program runs. A user study showed that T-Maze is an interesting programming approach for children and is easy to learn and use.
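
As a hypothetical illustration of the semantic check, the sketch below validates a recognized block sequence against a tiny assumed vocabulary (Begin/End plus movement blocks); the actual block set and rules in T-Maze may differ.

```python
# Hypothetical semantic check of a recognized block sequence. The block
# vocabulary (Begin/End, movement commands) is an assumption used only to
# illustrate immediate feedback on program correctness.
MOVES = {"forward", "back", "left", "right"}

def check_program(tokens):
    """Return (ok, message) for a list of recognized block tokens."""
    if not tokens or tokens[0] != "Begin":
        return False, "program must start with a Begin block"
    if tokens[-1] != "End":
        return False, "program must finish with an End block"
    for t in tokens[1:-1]:
        if t not in MOVES:
            return False, f"unknown block: {t}"
    return True, "program looks good"

print(check_program(["Begin", "forward", "left", "End"]))
```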

Published at IDC 2011 (Acceptance Rate: 30%). This was the first full paper from China published at ACM SIGCHI Interaction Design and Children (IDC), and Cheng was the only student author. [Paper]