
The problem with the implementation demonstrated in the video is that the gesture is recognised, and then the action happens.

There is no one-to-one mapping between the gesture in progress and action on screen, so I suspect it will not feel anywhere near as nice as a trackpad or touchscreen.
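To make the difference concrete, here's a rough sketch (hypothetical handler and object names, not anything from the video): the recognise-then-act path only fires once the classifier has settled on a label, while the one-to-one path moves the view on every sensor frame.

    # Recognise-then-act: nothing happens on screen until the classifier
    # has seen the whole gesture and emitted a label.
    # (classifier and view are hypothetical objects, for illustration only)
    def on_sensor_frame_discrete(frame, classifier, view):
        label = classifier.update(frame)   # returns None until it is confident
        if label == "swipe_left":
            view.go_back()                 # one discrete action, at the very end

    # One-to-one mapping: every incoming sample moves the view immediately,
    # so a partial gesture produces a partial, visible result.
    def on_sensor_frame_continuous(frame, prev_frame, view):
        dx = frame.x - prev_frame.x        # raw finger displacement this frame
        view.pan_by(dx)                    # the screen tracks the finger directly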



I think that could be improved with a higher resolution sensor, but the question is whether you could build such a thing while retaining space for physical keys.

IIRC, current gesture recognition still takes a couple hundred milliseconds on modern consumer touch screens (iPhones, etc.), especially for the "click" action.


Couldn't you make the keys transparent and put the IR sensors below them?


A lot of plastic is already IR transparent but totally opaque in the visible spectrum. You can hack a cheap camera and take out the IR filter to make a neat "x-ray camera".


There are even so-called "Black-Ops Plastics" specifically designed to be fully IR transparent: http://qwonn.com/black-ops-plastics.html


Maybe you should do that and write a paper about it. :)

My initial thoughts: 1) is there space underneath the key for such a sensor? 2) would the infrared light bounce off of the plastic on the key? 3) would transparent keys affect chicken-peckers (and other users) adversely?

Not saying it's not possible, although I imagine there will be challenges.


This should be able to improve once it's in use and they have enough usage reports. For now I'm guessing there are too many false positives during typing itself, e.g. moving your hand toward the mouse or something else. There are also all those hunt-and-peck typists out there to think about, doing weird and strange hovering motions. In the future, if they can map out enough false positives they could potentially fix it.

Also, I'm sure someone with this tech could make a specific gesture activate one-to-one-style mapping for a period of time, or until a button is pressed.
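Something like this (a rough sketch with made-up function names and an arbitrary timeout): one recognised gesture flips the keyboard into a direct-tracking mode for a few seconds, or until a key press, during which raw finger motion is forwarded as pointer movement.

    import time

    TRACK_WINDOW_S = 3.0    # how long one-to-one tracking stays active (arbitrary)
    tracking_until = 0.0    # timestamp at which tracking mode expires

    def on_gesture(label):
        # "activate_tracking" is a hypothetical gesture label
        global tracking_until
        if label == "activate_tracking":
            tracking_until = time.monotonic() + TRACK_WINDOW_S

    def on_sensor_frame(dx, dy, move_pointer, key_pressed):
        # move_pointer is a hypothetical callback; key_pressed is a bool
        global tracking_until
        if key_pressed:                          # any key press ends tracking mode
            tracking_until = 0.0
        elif time.monotonic() < tracking_until:
            move_pointer(dx, dy)                 # raw one-to-one pointer movement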


For that matter, even with a touch screen, a gesture is extracted and identified by touch algorithms (either in the touch controller or the SoC), and only then is an action taken. So there will definitely be an inherent delay. These algorithms involve classification methods similar to those mentioned in this demo.


But at least on a touch screen I can perform a partial gesture and see a partial result — i.e., the feeling is that the image zooms as my fingers pinch the screen, or the map pans as my fingers move. It's a very different feeling to the approach demonstrated in the video.
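That "partial gesture, partial result" behaviour is easy to express: the zoom factor is just the ratio of the current finger separation to the separation when the pinch started, so the image tracks the fingers on every frame. A sketch with hypothetical touch-point and view objects, not any platform's actual API:

    import math

    def distance(p1, p2):
        # Euclidean distance between two touch points (objects with .x and .y)
        return math.hypot(p1.x - p2.x, p1.y - p2.y)

    def on_touch_frame(touches, state, view):
        # Continuous pinch-to-zoom: the scale follows the fingers every frame,
        # so a half-finished pinch shows a half-zoomed image.
        if len(touches) == 2:
            d = distance(touches[0], touches[1])
            if state.start_distance is None:
                state.start_distance = d          # separation at the start of the pinch
            view.set_zoom(d / state.start_distance)
        else:
            state.start_distance = None           # pinch ended; reset for next time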



