Projects

I've put up a collection of the projects I've worked on, partly to keep track of things for myself, and partly because other people may find something interesting in them as well. I've left out details on a handful of projects, as they have some novel bits that should probably be published first. The projects are organized roughly by 'time period' (PhD, MSc, BSc):


PhD:

YouMove (or: the most fun you can have in pyjamas)

This was a really cool project I did with Tovi, Justin and George at Autodesk Research. We built a giant interactive mirror that could be used to train people how to perform new movements. I got to use LEGO and servo motors, build a big fort out of bed sheets, and have people dress up in pyjamas and learn ballet. It wasn't just fun; I'm really proud of the output. The recording software is flexible and simple, we made some real contributions to interacting with augmented reality mirrors, and the training software actually works. It drew on knowledge from coaches (ballet and yoga instructors) as well as the motor learning literature to provide proper feedback.




Thermal printer photo booth

What's more fun than a pixelated version of your face printed in terrible quality on a thermal receipt printer? Nothing. By throwing a webcam, a button and a thermal printer on a Raspberry Pi, I built a little photo booth that prints your picture along with an ID so you can retrieve a higher-res/color version later. Hopefully I'll have time to write up a little how-to later, but the short version is: use 'streamer' to capture from the webcam and use info from this blog to poke at the thermal printer.
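In case it helps anyone building something similar, here's a minimal sketch of the capture-and-ID step in Python, assuming 'streamer' is installed and the webcam shows up at /dev/video0 (the device path and ID scheme are just for illustration, not the booth's actual code):

```python
import subprocess
import uuid

def make_photo_id():
    # Short ID printed on the receipt so the full-resolution photo can be looked up later.
    return uuid.uuid4().hex[:8]

def take_photo(path):
    # Grab a single frame from the webcam with the 'streamer' command-line tool.
    subprocess.run(["streamer", "-c", "/dev/video0", "-o", path], check=True)

if __name__ == "__main__":
    photo_id = make_photo_id()
    take_photo(photo_id + ".jpeg")
    print("Saved photo with ID", photo_id)
    # ...then downscale/dither the image and send it to the thermal printer.
```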







Artificial Human Project

The goal of this project was to fully simulate a human (as judged by a PIR motion sensor). To prevent the lights from automatically turning off when Michelle didn't move for a few minutes in her office, I built a little duct-tape flag on an Arduino-driven servo motor that swung around every 15 minutes. While this wouldn't pass a Turing test, it's a totally convincing person to the motion sensor.




Gesture Learning

Gestures have the potential to be an efficient and expressive means of communication. They're well suited to mobile phones and other touch-screens that lack a separate physical input space. Current gestures are pretty simple, and pretty limited. To reach their full potential, I think gestures are going to have to become more complex and require learning (both learning the gesture-command pairing and the motor performance). The first study I ran on this topic showed how the type of guide used during learning can help or hinder learning, and suggested new methods for evaluating gesture learning.



Scanpath Analysis

Analyzing eye movements is a pretty effective way to understand where people are focussing their attention. It's also useful for understanding how the brain works, and how we perceive and understand the world. However, it's pretty difficult to compare people's eye movements, or even the eye movements of the same person looking at the same scene. Working with people at the BAR lab at UBC, I implemented some methods to compare eye movements, recorded data using the Dikablis system, and worked on some other eye-related things.
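This page doesn't go into the specific comparison methods, but to give a flavour of the general idea: one common approach is to encode each scanpath as a sequence of areas of interest (AOIs) and compare the sequences with string edit distance. A toy sketch of that idea (not the BAR lab code):

```python
def edit_distance(a, b):
    """Levenshtein distance between two AOI sequences (e.g. 'ABBCD' vs 'ABCD')."""
    # dp[i][j] = cost of turning a[:i] into b[:j]
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i in range(len(a) + 1):
        dp[i][0] = i
    for j in range(len(b) + 1):
        dp[0][j] = j
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            sub = 0 if a[i - 1] == b[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,        # deletion
                           dp[i][j - 1] + 1,        # insertion
                           dp[i - 1][j - 1] + sub)  # substitution
    return dp[len(a)][len(b)]

def scanpath_similarity(a, b):
    # Normalize by the longer sequence so 1.0 means identical scanpaths.
    return 1.0 - edit_distance(a, b) / max(len(a), len(b), 1)

print(scanpath_similarity("ABBCD", "ABCD"))  # 0.8
```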


Tabletop Therapy Evaluation

This is a project done with Michelle to compare interactive surface rehabilitation tasks to traditional tasks. We suited up some participants in motion capture gear, and stuck some EMG electrodes on their dominant arm to record how they moved. We then had them use a few of the games Michelle had built for the surface, as well as do some static 'touch the sticker' type tasks that are currently used in therapy. We found that the design of the activity was far more important than the use of technology when considering movement patterns. The work is published at CHI (paper, poster).

Shippy

This game had simple mechanics, but had some fairly complex (Python) coding behind it. It was built for psychology researchers at UBC who wanted to study participant behaviour while interacting with video games. To be useful, it had to be very customizable and easy to use (adding lag to the controls, switching buttons, changing the direction of play, speed, difficulty, etc.). Each parameter could also be changed at specific points in time during gameplay, e.g., at one minute in, swap the left and right keys and slow it down. It also recorded synchronized video of the participant and the game screen.
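None of Shippy's actual code is posted here, but the timed-parameter idea looks roughly like this in Python (parameter names and values are made up for illustration):

```python
import time

# Hypothetical schedule: (seconds_into_game, parameter, new_value).
# At 60 s the left/right keys swap and the game slows down, as in the example above.
SCHEDULE = [
    (0,  "input_lag_ms", 0),
    (0,  "speed",        1.0),
    (60, "swap_lr_keys", True),
    (60, "speed",        0.6),
]

def apply_schedule(schedule, elapsed, settings):
    """Apply every scheduled change whose time has already passed."""
    for t, key, value in schedule:
        if elapsed >= t:
            settings[key] = value
    return settings

settings = {}
start = time.time()
# Inside the game loop, call this once per frame:
apply_schedule(SCHEDULE, time.time() - start, settings)
```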




Kinect Physics Table

This project seems a bit dated already (done for my signal processing class in Nov. 2010) because of how quickly development with the Kinect has taken off. The system had an FTIR table with a Kinect pointing at the table surface from above. Users could put arbitrary physical objects on the surface, and from the Kinect's depth map a contour was extracted. The contours of the objects were fed into a Box2D simulation, allowing virtual objects to physically interact with the real-world objects. It was kind of like a really coarse virtual wind tunnel.
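Roughly, the contour-extraction step looks something like the sketch below. This is an illustrative OpenCV 4 version that assumes the depth map has already been scaled to an 8-bit image, not the original class project code:

```python
import cv2

def object_outlines(depth_8bit, table_depth=200):
    """Outline objects sitting on the table in a Kinect depth frame.

    depth_8bit: depth map already scaled to 0-255 (assumed here); smaller
    values are closer to the camera, so objects sitting on the table fall
    below the table_depth threshold.
    """
    # Anything closer than the table surface becomes white in the mask.
    _, mask = cv2.threshold(depth_8bit, table_depth, 255, cv2.THRESH_BINARY_INV)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    polygons = []
    for c in contours:
        # Simplify each contour so it has few enough vertices for the physics engine.
        approx = cv2.approxPolyDP(c, 5.0, True)
        polygons.append(approx.reshape(-1, 2))
    return polygons  # each polygon would then back a static Box2D body
```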

 

QuickSort Outreach App

This is a project I worked on with Michelle for the Canadian Information Processing Society. The idea was to create something that would help get kids interested in computing science. What we ended up with is a touch-screen application that runs on an all-in-one computer and is meant to be displayed at conferences, open houses, etc. The application challenges users to see if they can sort objects faster than a computer. The user can choose different speeds, data sets, and sorting algorithms, and each choice has some witty computer banter along with some interesting facts. I did a lot of work on the back-end of this, writing the sorting algorithms (I had to write my own stack/threading to fit into the Flex Timer framework). It was a nice break from the everyday reading/MATLAB grind.
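The original was ActionScript, but the trick of fitting a sort into a timer framework translates to any language: make each comparison/swap a resumable step so a timer tick can advance the sort one step at a time. A Python generator sketch of the idea:

```python
def bubble_sort_steps(items):
    """Yield a snapshot after every comparison so a timer can animate the sort."""
    n = len(items)
    for i in range(n):
        for j in range(n - 1 - i):
            if items[j] > items[j + 1]:
                items[j], items[j + 1] = items[j + 1], items[j]
            yield list(items)  # snapshot for the display

steps = bubble_sort_steps([3, 1, 2])
for snapshot in steps:   # in the real app, each timer tick pulls the next step
    print(snapshot)
```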

  

ARM Trainer

Before receiving their custom myo-electric (muscle-signal-controlled) prosthetic, amputees are encouraged to train their muscles in preparation. Current methods of pre-training are far from ideal, so I developed a system that aims to improve this training process. More details can be found in the ICDVRAT publication (which won best student paper!).

 

Master's:

Evaluation of Surgical Skill

For my Master's thesis project I developed a system to analyze and quantify the skill of a surgeon. Motion sensors on the surgeon and force/torque sensors on the instrument record the surgeon's movements and provide a way to compare experts and novices. After designing the system, running the experiments, and staring at graphs for months, we finally had something. A couple of new methods to evaluate skill based on the curvature of the trajectory and the total energy, as well as a novel system design, are fully detailed in my thesis (there are also some pretty graphs).
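The thesis has the full details; as a rough illustration only, the curvature of a sampled trajectory can be computed with the standard formula k = |r' x r''| / |r'|^3, something like this NumPy sketch (not the thesis code):

```python
import numpy as np

def curvature(points, dt=1.0):
    """Curvature along a sampled 3D trajectory (N x 3 array of positions)."""
    r1 = np.gradient(points, dt, axis=0)   # velocity (finite differences)
    r2 = np.gradient(r1, dt, axis=0)       # acceleration
    num = np.linalg.norm(np.cross(r1, r2), axis=1)
    den = np.linalg.norm(r1, axis=1) ** 3
    return num / np.maximum(den, 1e-12)    # avoid division by zero when stationary

# A smoother (more expert-like) path should show lower curvature on average.
```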



 


Bilateral Control with Four PHANTOM Omnis

Working with Minh-tu Pham, a visiting professor from France, I helped develop the software to make two pairs of haptic devices mirror each other. It used fairly simple PID control, with some low-pass filtering to remove oscillations. The purpose was to see if we could improve novice/expert surgical training by having novices directly feel the expert's hand movements, and vice versa.
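As a rough illustration of that kind of loop (the gains, filter constant, and class name here are made up, not the actual controller):

```python
class FilteredPID:
    """Simple PID controller with a first-order low-pass filter on the error,
    roughly the kind of loop described above."""

    def __init__(self, kp=1.0, ki=0.0, kd=0.1, alpha=0.2, dt=0.001):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.alpha = alpha          # low-pass smoothing factor (0..1)
        self.dt = dt
        self.integral = 0.0
        self.prev_error = 0.0
        self.filtered_error = 0.0

    def update(self, target_pos, actual_pos):
        error = target_pos - actual_pos
        # Low-pass filter the error to damp the oscillations mentioned above.
        self.filtered_error += self.alpha * (error - self.filtered_error)
        self.integral += self.filtered_error * self.dt
        derivative = (self.filtered_error - self.prev_error) / self.dt
        self.prev_error = self.filtered_error
        return (self.kp * self.filtered_error
                + self.ki * self.integral
                + self.kd * derivative)

# Each device's position becomes the target for its partner, so the pair mirror each other.
```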

 

Virtual Equine Assisted Therapy

This was a very fun project that died too soon. The idea was to use a robotic platform and a bunch of fancy VR hardware to simulate horseback riding for therapeutic purposes. A robotic environment would be much more controllable, allowing a wider range of patients to benefit from EAT and making it easier to study the therapy. Due to time, equipment, and a handful of other factors, we couldn't move forward with this project, but I still think it's very cool.



Wiihabilitation Games and Assessment

This is work done with Michelle on developing rehabilitation-specific games and applications using Nintendo Wii peripherals. Therapists are currently using the Nintendo Wii for physical/occupational therapy, but the commercial games are not well suited for this purpose. Our system has a number of games that have configurable difficulty, progress monitoring, feedback, and other things useful for therapy. More details on the system can be found here.


Olfactory Interface for Virtual Reality

Our virtual worlds are not smelly enough. To correct that problem, a WISEST 2010 student developed a low-cost olfactory interface to deliver scents to the user. Using PC fans controlled by a Phidget relay interface kit, a program can turn various scents 'on' and 'off'. The student also developed a large virtual environment using Google SketchUp and Vizard, and ran some experiments investigating the application of smell illusions in virtual environments. This project is the start of work to explore the limits of visual capture in virtual reality.
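The original control program isn't posted here; as an illustration only, driving one relay channel with the current Phidget22 Python library (an assumption: the project predates that library, and the channel-to-scent mapping below is made up) looks roughly like this:

```python
import time
from Phidget22.Devices.DigitalOutput import DigitalOutput

def puff_scent(channel, seconds=3):
    """Turn on the fan behind one scent for a few seconds via the relay board."""
    relay = DigitalOutput()
    relay.setChannel(channel)          # which relay the fan is wired to (made up)
    relay.openWaitForAttachment(5000)  # wait up to 5 s for the board to attach
    relay.setState(True)               # fan (and scent) on
    time.sleep(seconds)
    relay.setState(False)              # fan off
    relay.close()

puff_scent(channel=0)  # e.g. channel 0 = "coffee"
```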


e-Monkey

This was a WISEST 2010 project, which I helped develop/supervise with Michelle, to construct an interactive toy. A Build-A-Bear monkey was unstuffed and filled with sensors, lights, actuators, etc. The electrical widgets all run to a couple of Arduinos that control the lil' guy. This project helped the WISEST student (and Michelle and me) learn about Arduinos, as well as HCI and a few electrical bits. The intended use of the Monk-e will be disclosed after publication!


Interactive Floor

Low-resolution interactive floors are pretty common now, using fancy IR cameras and other somewhat-esoteric hardware. To increase the potential of these devices, we wanted a higher-resolution, cheaper version. Working with a team of undergraduates in the HCI course, and later more closely with Timothy, we developed and deployed a low-cost, high-resolution system that can track foot movements in 3D. The system uses off-the-shelf Wiimotes and infrared emitters to track foot position. Will add more info as it becomes disclosable!



Emotional Speech

For a linguistics course I wrote some software that analyzed the emotional content in speech. Based on the pitch, volume and speech rate, the system could identify the arousal and valence levels in the speech with around 12% error. Not quite good enough for publication, but I'm workin' on it. I really liked this project, as it let me play in affective computing a bit, an area I'll always find very interesting.
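The analysis code isn't published, but the kinds of low-level features involved (loudness and a rough pitch estimate) can be sketched with plain NumPy; mapping them onto arousal and valence would then be a separate regression step. Illustrative only, not the course project code:

```python
import numpy as np

def rms_energy(frame):
    """Loudness of a short audio frame (a rough proxy for arousal)."""
    return float(np.sqrt(np.mean(frame ** 2)))

def estimate_pitch(frame, sample_rate, fmin=75, fmax=400):
    """Very rough pitch estimate for one voiced frame via autocorrelation,
    limited to a typical speech range."""
    frame = frame - np.mean(frame)
    corr = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    lo, hi = int(sample_rate / fmax), int(sample_rate / fmin)
    lag = lo + int(np.argmax(corr[lo:hi]))
    return sample_rate / lag
```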

Morphing Point-based Models

Instead of using those pesky little polygons to represent 3D objects, we can use a cloud of points instead. For my graphics course project, I wrote a program that takes two 3D models represented as point clouds and morphs one into the other. Below is a bent object (left) morphing into the original (right), which would be useful in analyzing mechanical deformations and other fun things.
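Once the two clouds are in point-to-point correspondence (the hard part), the morph itself can be as simple as linear interpolation; a tiny NumPy sketch of that final step:

```python
import numpy as np

def morph(source, target, t):
    """Linearly interpolate between two point clouds with matching point order.

    source, target: N x 3 arrays; t = 0.0 gives the source, t = 1.0 the target.
    This sketch assumes the clouds are already in correspondence.
    """
    return (1.0 - t) * source + t * target

# e.g. render morph(bent_cloud, original_cloud, t) for t running from 0 to 1
```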




Undergrad:

Hungry Hungry Hippos

CMPUT 411 (Computer Graphics) requires that students build a game to show their understanding of programming in OpenGL. I built Hungry Hungry Hippos from scratch, designing the models in Maya, and programming the game in C++ using the OpenGL libraries. The gameplay is pretty simplistic, but there are enough shiny things to keep someone interested for a few minutes.



CyberCell Interaction

This is work done as part of my NSERC USRA position in 2008. The goal of this work was to add computational steering to the CyberCell simulation of an E. coli cell. This means that not only can you see the reactions between molecules in a simulated E. coli cell, but you can interact with the molecules, move them around as the simulation is running, and see the results of your interaction. During my service-oriented systems course, I added a web-service layer that let me abstract the visualization from the computation, allowing particle-based simulations to be visualized by an arbitrary client that uses the interface.







Using Wikipedia for the Netflix Challenge

This was originally a course project for CMPUT 466 (Machine Learning) that showed promise, and was worked on outside of class. The goal of the project was to use the Wikipedia articles about movies to help predict how well a user would enjoy a given movie. We were able to improve existing predictions, but by a very unimpressive amount. Nevertheless, it got me my first publication!

Mousetrap Monitoring Using a Sensor Network

This was work done as part of my 2007 summer research with Ioanis Nikolaidis and Nick Boers. For this project, some small, low-power microcontrollers capable of wireless communication were connected to mousetraps. When a trap was activated, a signal was sent back to a wireless router, which recorded the event and could notify the appropriate personnel. This sort of framework could be used to monitor the status of larger animal traps, eliminating the need for manual inspection of traps in remote areas.

WiiSega

The buttons on my old Sega Genesis controllers are a bit hard to push, so I thought it would be "cool" to replace them with a Wiimote. So, after collecting a Wiimote, a Phidget, a handful of opto-couplers and some wire, I have ... WiiSonic the WiiHedgehog on my WiiSega (movie)!