The eyeCharm product, if funded, will clip onto a Microsoft Kinect, converting it from a room-gazing motion tracker to a face-gazing eye tracker; no small feat. The resulting device will let you control your PC with your eyes, according to the campaign.
As usual, I wrote a full response where a short comment is customary. Ho hum.
Beware of the Oversell 🙂 This looks like a great product for end users and HCI researchers. On the research side, it could really reduce the costs and hassles of old-school eye-tracking rigs. I don’t know how this device’s sampling rate and resolution compare to head-mounted rigs, but it is much less obtrusive to the research subject and less likely to add confounding variables to the results. If this device becomes a common tool for researchers, it could make data from different regions, different labs, different investigators, and different participants more readily comparable by reducing variability in the setup.
There’s much more to the end-user market than gaze-based control, however. This product has great potential for people with different physical abilities; people coping with ALS (Lou Gehrig’s Disease), for example, usually retain voluntary control of their eyes despite losing control of their limbs, and for them gaze may be one of the few viable control channels left. The general user population has less to gain from gaze as a pure controller; eye movement is a more restrictive way to issue unambiguous commands. A mouse lets you quickly select individual pixels, whereas gaze gives you a larger high-probability target area that can be narrowed only at the cost of longer gaze time or additional signals. I wouldn’t choose to type documents by eye tracker if I could use the keyboard in front of me.
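That precision tradeoff is easy to see in code. Here is a minimal sketch of dwell-based gaze selection, the standard way to turn an imprecise gaze signal into an unambiguous "click": treat a circular region as the target and fire only after the gaze dwells inside it long enough. Every name and threshold below is invented for illustration; nothing here comes from a real eyeCharm or Kinect API.

```python
from dataclasses import dataclass

@dataclass
class DwellSelector:
    """Hypothetical dwell-based gaze 'click'. Gaze cannot pin a single
    pixel, so we accept any sample landing inside a target circle and
    select only after sustained dwell time (the 'additional signal')."""
    target_x: float
    target_y: float
    radius: float = 40.0        # high-probability target area, in pixels
    dwell_needed: float = 0.8   # seconds of sustained gaze to select

    def __post_init__(self):
        self.dwell = 0.0

    def feed(self, gx, gy, dt):
        """Feed one gaze sample (gx, gy) taken dt seconds after the
        previous one. Returns True once the dwell threshold is met."""
        inside = (gx - self.target_x) ** 2 + (gy - self.target_y) ** 2 <= self.radius ** 2
        self.dwell = self.dwell + dt if inside else 0.0  # leaving resets the timer
        return self.dwell >= self.dwell_needed

# Example: one second of 60 Hz samples hovering near a button at (100, 100).
sel = DwellSelector(100, 100)
fired = False
for _ in range(60):
    fired = sel.feed(102, 98, 1 / 60)
print(fired)  # True: selection fires after ~0.8 s of sustained gaze
```

The dwell timer is exactly the cost mentioned above: widening the radius makes selection faster but more ambiguous, and shrinking it demands longer, steadier gaze.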
It would make more sense to use this tracker in concert with other input devices, and for non-control signaling as well. I hate zooming with the mouse wheel; this tracker could detect a slight squint and zoom the display in response. Or imagine a search interface that uses gaze to quickly determine which results are least interesting, then uses that information to improve follow-up searches without the user having to add search terms by hand.
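The squint-to-zoom idea above could be sketched roughly as follows. I'm assuming the tracker reports a per-frame eye-openness value in [0, 1]; that signal, and every threshold below, is hypothetical rather than anything the eyeCharm actually exposes.

```python
class SquintZoom:
    """Hypothetical squint-to-zoom controller: smooth a noisy
    eye-openness signal and ramp the zoom level while the user
    holds a squint."""

    def __init__(self, squint_below=0.4, alpha=0.3, max_zoom=4.0):
        self.openness = 1.0          # exponentially smoothed openness
        self.squint_below = squint_below
        self.alpha = alpha
        self.max_zoom = max_zoom
        self.zoom = 1.0

    def on_frame(self, openness):
        # Exponential smoothing damps blinks and sensor noise, so a
        # single dropped frame doesn't trigger a spurious zoom.
        self.openness = self.alpha * openness + (1 - self.alpha) * self.openness
        if self.openness < self.squint_below:
            self.zoom = min(self.zoom * 1.02, self.max_zoom)  # +2% per frame
        return self.zoom

ctl = SquintZoom()
for _ in range(5):
    ctl.on_frame(1.0)          # eyes open: zoom stays at 1.0
for _ in range(30):
    level = ctl.on_frame(0.1)  # sustained squint: zoom ramps up
print(round(level, 2))
```

Note the design choice: because the smoothed signal has to cross the threshold, a brief blink (openness near 0 for a frame or two) won't fire, while a held squint ramps the zoom smoothly instead of jumping.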
There are oodles of augmentative possibilities here. Thanks for the post!

To do: link original thread