When Pranav Mistry first showcased the power of gestural computing with "SixthSense", we knew the industry was getting serious about new ways to interact with computers. Rather than using a mouse and keyboard, why not do it all with gestures of our hands?
The idea is catching on. While Apple may already be working on something like it for the iPhone 4G, we keep seeing new prototypes that demonstrate next-generation gestures. Now, MIT's Media Lab has come up with a revolutionary interface that allows users to manipulate on-screen images with the wave of a hand.
MIT's bi-directional display (call it the "BiDi" screen) is capable of capturing both touch and off-screen gestures through the use of embedded optical sensors. The resulting design can be ultra-thin (thinner than LCD/LED TVs) at a fraction of the cost.
“The BiDi Screen uses a sensor layer, separated a small distance from a normal LCD display. A mask image is then displayed on the LCD. When the bare sensor layer views the world through the mask, information about the distance to objects in front of the screen can be captured and decoded by a computer.”
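The quoted principle can be sketched in a few lines of code. The idea, in simplified form: a pinhole-style mask gives each patch of the bare sensor its own viewpoint on the world, and an object's image shifts between neighboring viewpoints by an amount proportional to the mask-to-sensor gap divided by the object's distance, so the computer can decode distance from that shift. This is a hedged illustration of the geometry, not MIT's actual decoding code; all function names and numbers below are invented for the example.

```python
# Illustrative sketch (not the BiDi implementation): decoding distance from
# the parallax between two pinhole views on a bare sensor behind a mask.

def pinhole_offset(pinhole_x: float, object_x: float,
                   object_z: float, gap: float) -> float:
    """Offset of the object's image from its pinhole's position on the sensor.

    By similar triangles, a point at lateral position object_x and distance
    object_z in front of the mask projects through a pinhole at pinhole_x
    onto the sensor (gap behind the mask) at pinhole_x + offset, where
    offset = (pinhole_x - object_x) * gap / object_z.
    """
    return (pinhole_x - object_x) * gap / object_z

def depth_from_disparity(baseline: float, gap: float, disparity: float) -> float:
    """Recover object distance from the shift between two pinhole views.

    disparity = offset(pinhole B) - offset(pinhole A) = baseline * gap / z,
    so z = baseline * gap / disparity.
    """
    return baseline * gap / disparity

# Simulate: two pinholes 10 mm apart, sensor 3 mm behind the mask,
# and a fingertip 200 mm in front of the screen (all units in meters).
baseline, gap, true_z = 0.010, 0.003, 0.200
d = (pinhole_offset(baseline, 0.05, true_z, gap)
     - pinhole_offset(0.0, 0.05, true_z, gap))
print(round(depth_from_disparity(baseline, gap, d), 6))  # recovers 0.2
```

The real BiDi screen displays a coded mask pattern on the LCD itself and decodes many such views at once, but the depth cue it exploits is this same parallax.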
The BiDi screen takes a different approach to spatial tracking. The system can be incorporated into a thin LCD device like a cellphone and, unlike SixthSense, requires no cameras, lenses, projectors, or special gloves, which makes it truly portable.
For more on the project, check out MIT's project website and subscribe to us below: