Archive for June, 2008

Robotic Control with the Nintendo Wiimote

In the last post I discussed a gestural control system using the Nintendo Wiimote. An interesting application of this system is controlling unmanned vehicles. My work with the ACTIVE lab (Applied Cognition and Training in Immersive Virtual Environments) often involves research in robotics, and a colleague and I have just finished a paper on using the Wiimote as the input device. There’s quite a bit of interesting technology behind the scenes here, but I think I’ll let the videos below do most of the talking. Here’s a screenshot of the GUI that’s running on my laptop (but isn’t really visible in the videos):

The GUI displays telemetry for latitude, longitude, altitude, speed, and heading, along with streaming video from the vehicle. The software also lets the user connect to each of these services independently and gives feedback on which services are currently active.
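The post doesn’t show the ground-station code, but the per-service connection model might look something like the sketch below. The TelemetryService names and the ServiceMonitor class are my own inventions for illustration.

```csharp
using System;
using System.Collections.Generic;

// Hypothetical model of the independent telemetry services the GUI exposes.
enum TelemetryService { Position, Altitude, Speed, Heading, Video }

class ServiceMonitor
{
    private readonly Dictionary<TelemetryService, bool> active =
        new Dictionary<TelemetryService, bool>();

    // The GUI subscribes here to update its per-service status indicators.
    public event Action<TelemetryService, bool> StatusChanged;

    public void Connect(TelemetryService service)
    {
        // In the real system this would open a link to the vehicle;
        // here we only track the state and notify listeners.
        SetState(service, true);
    }

    public void Disconnect(TelemetryService service)
    {
        SetState(service, false);
    }

    public bool IsActive(TelemetryService service)
    {
        bool on;
        return active.TryGetValue(service, out on) && on;
    }

    private void SetState(TelemetryService service, bool on)
    {
        active[service] = on;
        if (StatusChanged != null)
            StatusChanged(service, on);
    }
}
```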

Wiimote Gesture Recognition

The Nintendo Wiimote is a remarkable, inexpensive device for exploring 3D user interaction. With its 3-axis accelerometer, the Wiimote enables motion control, and it communicates wirelessly over Bluetooth. Open-source libraries (such as Brian Peek’s Wiimote.NET library) provide access to the Wiimote from a PC.
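To give a sense of how little code it takes to get data flowing, here is a minimal sketch of streaming accelerometer values with Wiimote.NET. The member names are from memory of the WiimoteLib 1.x API and may differ slightly in the version you have.

```csharp
using System;
using WiimoteLib; // Brian Peek's Wiimote.NET library

class AccelLogger
{
    static void Main()
    {
        Wiimote wm = new Wiimote();
        wm.WiimoteChanged += OnWiimoteChanged;
        wm.Connect();                                     // pair over Bluetooth first
        wm.SetReportType(InputReport.ButtonsAccel, true); // stream button + accel reports

        Console.ReadLine(); // log until Enter is pressed
        wm.Disconnect();
    }

    static void OnWiimoteChanged(object sender, WiimoteChangedEventArgs e)
    {
        // Calibrated acceleration, in g, along the X, Y, and Z axes.
        AccelState a = e.WiimoteState.AccelState;
        Console.WriteLine("{0,6:F2} {1,6:F2} {2,6:F2}",
            a.Values.X, a.Values.Y, a.Values.Z);
    }
}
```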

Many Wii games use the Wiimote in a very simplistic way, mainly looking for a threshold on a single axis to indicate an action. You can see this behavior if, for example, you swing the Wiimote above your head in a golf game and the club still swings as if you had made the correct motion. Accurate gesture control, meaning precise recognition of hand and arm movement, has been largely missing from most Wii games.
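Here, roughly, is what that kind of threshold check looks like; the axis and cutoff are illustrative, not taken from any actual game.

```csharp
using System;

static class NaiveGestureCheck
{
    const float SwingThreshold = 2.0f; // in g; an illustrative cutoff

    // Fires on ANY sufficiently hard motion along one axis, regardless of
    // the actual trajectory, which is exactly the problem described above.
    public static bool IsSwing(float accelZ)
    {
        return Math.Abs(accelZ) > SwingThreshold;
    }
}
```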

A colleague and I have devised a set of 29 features that can be used to classify the stream of Wiimote XYZ accelerometer values as a recognized motion. Using a machine learning algorithm, we can train the system on an arbitrary set of gestures, which can then be mapped to actions.
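The post doesn’t enumerate the 29 features, so the sketch below just computes a few typical per-axis statistics (mean, variance, min, max) of the kind such a feature vector might include; treat it purely as illustration.

```csharp
using System.Collections.Generic;
using System.Linq;

static class GestureFeatures
{
    // samples: one {X, Y, Z} reading per accelerometer report.
    public static double[] Extract(IList<float[]> samples)
    {
        var features = new List<double>();
        for (int axis = 0; axis < 3; axis++)
        {
            double[] v = samples.Select(s => (double)s[axis]).ToArray();
            double mean = v.Average();
            double variance = v.Select(x => (x - mean) * (x - mean)).Average();
            features.Add(mean);
            features.Add(variance);
            features.Add(v.Min());
            features.Add(v.Max());
        }
        return features.ToArray(); // feed to the classifier of your choice
    }
}
```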

In the video below, I present a graphical user interface for training and classifying gestures, with output through sound, animation, and even a tactile belt. Importantly, the underlying recognition system is exposed as an API, allowing pre-trained gesture data to be used by any application. I’ve now used the system in a couple of XNA video games (in fact, the animation in this demo is rendered through XNA), in a Windows app, and even to control a robot (see the post above).
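To make the API idea concrete, here is a hypothetical interface of the train-then-classify shape described; all of the names are mine, not the actual API.

```csharp
using System.Collections.Generic;

public interface IGestureRecognizer
{
    // Record a labeled training example (a window of accelerometer samples).
    void AddExample(string gestureName, IList<float[]> samples);

    // Fit the underlying machine-learning model to the examples added so far.
    void Train();

    // Return the best-matching gesture name for a new sample window.
    string Classify(IList<float[]> samples);

    // Persist and reload trained data so any application can reuse it.
    void Save(string path);
    void Load(string path);
}
```

Hiding the recognizer behind an interface like this is what lets the same trained gestures drive an XNA game, a Windows app, or a robot.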