A game controller that sold 4 million units in its first six weeks may seem like an unlikely prospect for an enterprise computing application. But since Microsoft Kinect launched in November, programmers have been buying units for applications in robotics, videoconferencing, image processing, augmented reality, 3-D rendering and other corporate uses.
Why all this fuss over a toy (albeit a sophisticated one)?
Well, for about $150, you get a USB peripheral with a three-axis accelerometer, which detects motion along any axis. Kinect also has a controllable motor that tilts the entire housing up and down, plus four microphones, a color camera and an infrared camera. Rounding out the package is a depth-sensing system: the Kinect projects a matrix of small infrared dots onto the scene, records the pattern with its infrared camera and infers distance from how the pattern is distorted.
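The depth camera reports raw 11-bit readings rather than distances. A rough conversion formula, with calibration coefficients derived by the OpenKinect community (individual units vary, so this is an approximation, not an official Microsoft specification), can be sketched as:

```python
def raw_depth_to_meters(raw):
    """Approximate distance in meters for an 11-bit Kinect depth reading.

    The coefficients are community-derived calibration values; treat
    this as a rough fit, not a guaranteed spec.
    """
    if raw >= 2047:          # 2047 marks "no reading" in the 11-bit stream
        return float('inf')
    return 1.0 / (raw * -0.0030711016 + 3.3309495161)

# A mid-range raw value maps to roughly a meter:
print(round(raw_depth_to_meters(800), 2))  # → 1.14
```

Note that the raw scale is nonlinear: equal steps in the raw value cover larger distances the farther away the object is.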
At Kinect's core is the one application everyone wants on the desktop: a Minority Report-style 3-D user interface that they can manipulate by swiping their hands in the air in front of them. It takes the iPod/iPad swiping motion and puts it, literally, into thin air. This will be harder to accomplish with general desktop apps, however, and settling on a standard set of gestures for the average desktop will be just as daunting.
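The logic behind an in-air swipe is simple in outline: track the hand's position from frame to frame and fire when horizontal travel dominates and exceeds a threshold. A minimal sketch, assuming a hand tracker (out of scope here) already supplies normalized screen coordinates, and with thresholds that are purely illustrative:

```python
def detect_swipe(positions, min_travel=0.3):
    """Report a left/right swipe from a sequence of hand positions.

    positions: list of (x, y) tuples in normalized [0, 1] screen
    coordinates, one per video frame. Thresholds are illustrative,
    not tuned values from any shipping product.
    """
    if len(positions) < 2:
        return None
    dx = positions[-1][0] - positions[0][0]
    dy = positions[-1][1] - positions[0][1]
    # Require enough horizontal travel, and horizontal motion that
    # clearly dominates vertical motion.
    if abs(dx) >= min_travel and abs(dx) > 2 * abs(dy):
        return 'right' if dx > 0 else 'left'
    return None

print(detect_swipe([(0.8, 0.5), (0.6, 0.52), (0.3, 0.5)]))  # left
print(detect_swipe([(0.5, 0.5), (0.52, 0.5)]))              # None
```

The hard part, as Nielsen's point below suggests, is not detecting a swipe but agreeing on what it should mean.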
"There are no universal standards for gestural interactions yet. [It] is a problem in its own right, because the [Kinect] cannot rely on learned behavior," blogged Jakob Nielsen, a user interface expert.
For example, the simple "back" command -- returning to a previous menu or undoing an operation -- isn't consistently implemented in the Kinect games on the market. Sometimes, to go back, you have to point at a particular portion of the screen; at other times, it takes a complex series of gestures.
The other issue is that most desktops don't have video cameras as part of their standard gear. Many laptops have a single camera, which might not be sufficient for tracking anything beyond a simple gesture.
The activity around Kinect was sparked by several thousand dollars offered as prize purses for separate hacking challenges issued by Adafruit and Matt Cutts. Both received hundreds of submissions and chose a winner in less than a week.
These contests have motivated programmers to develop Linux drivers and other open source tools for manipulating the various Kinect data streams. The OpenKinect Google Group has more than 1,400 members and active discussions on how to build upon the coding already done.
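Much of that coding boils down to per-frame processing of the depth stream. One common trick is isolating the nearest object -- typically a hand -- by thresholding around the closest valid reading. A sketch of the idea, using a synthetic frame where a real one would come from a driver such as libfreenect (the band width of 50 raw units is an arbitrary choice for illustration):

```python
import numpy as np

def nearest_object_mask(depth, band=50):
    """Boolean mask of pixels within `band` raw units of the closest point.

    Raw value 2047 means "no reading" and is ignored.
    """
    valid = depth < 2047
    nearest = depth[valid].min()
    return valid & (depth <= nearest + band)

frame = np.full((4, 6), 2047, dtype=np.uint16)   # mostly "no reading"
frame[1:3, 1:3] = 700                            # a close object (a hand, say)
frame[0:2, 4:6] = 900                            # a background surface
mask = nearest_object_mask(frame)
print(int(mask.sum()))  # → 4: only the close object's pixels survive
```

Segmenting the frame this way is usually the first step before any gesture recognition or object tracking runs.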
CNET has a series of shaky videos -- one of which has been viewed more than 850,000 times -- that shows what programmers have accomplished so far. There are Kinect-based controls for a variety of things, including Christmas tree lights, computer puppetry, object recognition and Star Wars lightsaber simulation.
And it isn't just garage programmers.
Cutts, who self-funded one of the hacking challenges, is a Google engineer. And according to the New Scientist, computer scientist Dieter Fox and his colleagues at Intel Labs Seattle have developed a method for building 3-D models of the interior of buildings with Kinect.
While Microsoft wasn't initially thrilled about these programming activities, it has realized that it needs to be more tolerant of such hacks. There is no word yet on whether Microsoft will release any official software development kits, though an unofficial SDK is already circulating online. In addition, there are rumors that the next version of Windows will include facial recognition software to authenticate a user, which one hopes will improve on the notoriously buggy feature in earlier Lenovo laptops. There is also work under way to tag people in particular Facebook photos, and a number of facial recognition SDKs have emerged from third-party developers such as Luxand's FaceSDK.
This initial burst of activity around Kinect could be the start of something much bigger. More applications may emerge as programmers share their knowledge on the OpenKinect forums and elsewhere, and as corporate and academic researchers gain more experience. Expect to see multicamera laptops in the future that begin to incorporate some of the Kinect ideas, as well as tablet-based apps that can better deal with swipes and other gestures.
ABOUT THE AUTHOR
David Strom is a freelance writer and professional speaker based in St. Louis. He is former editor in chief of TomsHardware.com, Network Computing magazine and DigitalLanding.com. Read more from Strom at Strominator.com.