
littlebitsofspider t1_jebr74v wrote

If you like flicker artifacts and gaze vergence bouncing around, sure. If you want stereo fusion, good edge detection and depth estimation, and smooth saccades, you'll probably want an event camera. Ideally, you'd have a robot eyeball with a beamsplitter inside: an event-camera sensor grid on one side and a traditional color CCD sensor on the other. That way, an event at one of the event-camera pixels can trigger a pixel dump of the corresponding CCD color subpixels. Better yet would be a Foveon-style stacked-RGB CCD, which could match the event camera 1:1 in color resolution. You could do the sensor fusion on a dedicated ASIC hooked up to both sensors and let each do what it does best.
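
For illustration, here's a minimal Python sketch of that event-triggered readout idea. It assumes the two sensors are already co-registered 1:1 through the beamsplitter, and the names (`on_event`, `rgb_frame`, `fused`) are hypothetical stand-ins, not any real camera API:

```python
import numpy as np

H, W = 480, 640  # shared resolution; 1:1 registration between sensors assumed

# Stand-ins for the two sensors behind the beamsplitter.
rgb_frame = np.zeros((H, W, 3), dtype=np.uint8)   # latest full-color readout
fused = np.zeros((H, W, 4), dtype=np.float32)     # fused output: R, G, B, timestamp

def on_event(x, y, timestamp, polarity):
    """Hypothetical per-event callback from the event sensor.

    Each brightness-change event triggers a 'pixel dump' of the
    corresponding color pixel from the frame sensor, so color is only
    refreshed where contrast actually changed. Polarity is carried
    along but unused in this sketch.
    """
    r, g, b = rgb_frame[y, x]
    fused[y, x] = (r, g, b, timestamp)

# Example: a burst of synthetic events refreshes only the touched pixels.
rng = np.random.default_rng(0)
for t in range(1000):
    x, y = rng.integers(0, W), rng.integers(0, H)
    on_event(x, y, timestamp=t * 1e-6, polarity=1)
```

On real hardware this per-event loop is what you'd push into the dedicated ASIC, so the host only ever sees the fused, sparsely updated frame.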

7

BanzoClaymore t1_jebtlch wrote

Did you really think I would know what you were talking about when you wrote this?

8