You the Interface
3-D Gesture Technology Takes Off

by Joe Shepter

Swathed in sheer curtains, the futuristic room at Organic Motion stands in sharp contrast to the rest of the office, which features startup-chic tabletops set on yellow sawhorses. Andrew Tschesnok, the company’s youthful CEO, steps inside and holds his hands out. Fourteen cameras lock onto his body. He moves, and on a flat screen at the front of the cage, a buxom lizard moves with him. He waves his arms, it waves. He turns and shrugs his shoulders. It shrugs in precisely the same way. He puts his hands together and moves them apart. It tosses a fireball in my direction.

“The system can accurately measure your skeleton within a few millimeters,” he says.


Markerless motion capture by Organic Motion at the Sony Wonder Technology Lab (SWTL) in Manhattan. Image courtesy of SWTL.

Organic Motion may not be one of the better-known players in 3-D gestural interfaces, but its "markerless motion capture" system is easily one of the more elaborate and precise. It can track every movement of a person, down to the fingers. Clients have used it to create everything from military simulators to interactive dance exhibitions.

If you ask Tschesnok, and many other technology makers, 3-D gestural interfaces are the future. More than eight million of us can already move around our living rooms and play games with Microsoft Kinect. Within a year, we’ll be able to buy gesture-controlled computers, dockable 3-D interfaces for devices like the iPad, and maybe even TVs equipped with 3-D cameras.

“I think in five years’ time, when you buy a computer, you’ll get this [a 3-D gestural interface],” John Underkoffler, the chief scientist of Oblong Industries, Inc., recently told a TED audience.

As futuristic as it all seems, technology like this has been around much longer than you might think. A successful experiment at the MIT Media Lab called “Put That There” appeared as early as 1980. It involved pointing at a map and requesting that ships be placed in a variety of locations. It was followed by a large number of academic systems and experiments that used everything from infrared and stereoscopic cameras to eye-tracking devices and glove-reading systems.

Oddly enough, it took a Hollywood movie to generate popular interest in the field. In 2002, Alex McDowell, the production designer on Minority Report, hired Underkoffler, then at MIT, to abandon thinking about real-world limitations and instead describe a gestural interface from the year 2054. In a sense, it was an R&D project that sold the public (or at least a substantial proportion of its geeks) on the concept.

“When that movie came out, every client wanted the interface in Minority Report,” says Dave Small, principal of Small Design Firm, which specializes in digital displays.

Underkoffler made only one mistake: the technology would arrive much sooner than 2054. At the high end, you have companies like Organic Motion, whose product is not really a dedicated interface system, though it does deliver motion-capture data that can be used for that purpose. At the other extreme is 3Gear Systems, an MIT project that has prototyped a whimsically colored set of Lycra gloves that work with any ordinary webcam. Theoretically, the gloves would cost about $1 to produce.

Joe Shepter
Joe Shepter is a freelance writer specializing in travel and interactive media. He has worked with Adobe, Oracle, Whirlpool and Coca-Cola, among others.